ChatGPT consistently makes shit up. It's difficult to tell when something is fabricated, because a language model is built to sound confident, like a person stating a fact they actually know.
It knows how to talk like a subject matter expert because that's the kind of writing that gets published most, and therefore what it's trained on. But it doesn't always have the facts needed to answer a question, so it makes shit up to fill the gap and presents the fabrication just as fluently, and it's wrong.
Most of the time I use the Assistant either to perform home automation tasks or to look stuff up online. The first already works fine, and for the second I won't trust a glorified autocomplete.
Man, this type of shit is why I'm getting rid of Google Assistant and moving to a FOSS home assistant setup. I don't want ChatGPT. I want to add things to my calendar and my shopping list, turn off lights, and open/close blinds. I want to mute speakers at a certain time with a routine that isn't broken every five minutes. I want timers that work reliably. I want to be able to make an announcement when Amazon is at the door. Why are they making this so difficult?
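For what it's worth, if that FOSS setup is Home Assistant, every one of those tasks is a deterministic service call against its REST API; no language model required. A minimal sketch in Python, assuming a local Home Assistant instance and a long-lived access token (the URL, token, and entity IDs below are hypothetical placeholders):

```python
# Minimal sketch: driving Home Assistant's REST API directly.
# HA_URL, HA_TOKEN, and the entity IDs are assumptions; substitute your own.
import requests

HA_URL = "http://homeassistant.local:8123"   # assumed default local address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # created under your HA user profile

HEADERS = {
    "Authorization": f"Bearer {HA_TOKEN}",
    "Content-Type": "application/json",
}

def call_service(domain: str, service: str, data: dict) -> None:
    """Invoke a Home Assistant service, e.g. light.turn_off."""
    resp = requests.post(
        f"{HA_URL}/api/services/{domain}/{service}",
        headers=HEADERS,
        json=data,
        timeout=5,
    )
    resp.raise_for_status()

# Turn off a light and mute a speaker -- no paragraph of chatter in response.
call_service("light", "turn_off", {"entity_id": "light.living_room"})
call_service("media_player", "volume_mute",
             {"entity_id": "media_player.kitchen", "is_volume_muted": True})
```

Schedule calls like these from a cron job, or from a native HA automation, and you have the "mute at a certain time" routine with nothing guessing at your intent.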
Assistant already reads off a paragraph when I'm just trying to turn off a light. No way do I need something that will recite the entire bibliography of the sources it used to find those controls.
I just want to be able to consistently search using what's on my phone screen. Is that too much to ask? The screen search button disappears every other month, and I'm sick of it. I don't invoke the Assistant for any other reason.
Update: it's back again after another month-long hiatus! Who knows when it'll be taken from me next!
Digital assistants are good for timers, turning on smart lights, and sometimes playing music. None of those things require a large language model to spit random text back at me.