AI doesn't know what's right or wrong. It hallucinates every answer; it's up to the supervisor to determine whether an answer is right or wrong.
Mathematically verifying the correctness of these algorithms is a hard problem. That's by design: it's the trade-off for the incredible efficiency.
Besides, it can only "know" what it has been trained on. It shouldn't be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn't know how to use these models.
It is impossible to mathematically determine if something is correct. Literally impossible.
At best the most popular answer, even if it is narrowed down to reliable sources, is what it can spit out. Even that isn't the same thing as consensus, because AI is not intelligent.
If the 'supervisor' has to determine whether it is right or wrong, what is the point of AI as a source of knowledge?
"Hallucination" is also wildly misleading. The AI does not believe something that isn't real; it was simply wrong about which words it guessed would be appropriate.
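That "guessing words" framing can be made concrete with a toy sketch. This is a bigram frequency model, nowhere near a real transformer, but it shows the core point: the model emits the statistically likely next word, and there is no notion of truth anywhere in the process.

```python
from collections import Counter, defaultdict

# Toy corpus: "blue" follows "is" more often than "green" does.
corpus = "the sky is blue . the sky is blue . the sky is green .".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    # Return the most frequent continuation -- a guess, not a belief.
    return following[word].most_common(1)[0][0]

print(guess_next("is"))  # prints "blue": it wins 2-to-1 over "green"
```

If the training data had said "green" more often, the model would confidently say "green" instead. Same mechanism, different statistics; "hallucination" is just that mechanism producing a guess we happen to dislike.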
The funny thing is we hallucinate all our answers too. I don't know where these words are coming from and I am not reasoning about them other than construction of a grammatically correct sentence. Why did I type this? I don't have a fucking clue. 😂
We map our meanings onto whatever words we see fit. The number of times I've heard a Republican call Obama a Marxist still blows my mind.
Thank you for saying something too. Better than I could do. I've been thinking about AI since I was a little kid. I've watched it go from at best some heuristic pathfinding in video games all the way to what we have now. Most people just weren't ever paying attention. It's been incredible to see that any of this was even possible.
I watched Two Minute Papers from back when he was mostly doing light transport simulation (raytracing). It's incredible where we are, but baffling that people can't see the tech as separate from good old capitalism and the owner class. It just so happens it takes a fuckton of money to build stuff like this, especially at first. This is super early.
Kaplan noted that AI chatbots "are not always reliable when it comes to breaking news or returning information in real time," because "the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained."
If you're expecting a glorified autocomplete to know about things it doesn't have in its training data, you're an idiot.
There are definitely idiots, but these idiots don't get their ideas of how the world works out of thin air. These AI chatbot companies push hard in their advertisements the cartoon reality that this is a smart robot that knows things, and to learn otherwise you have to either listen to smart people or read a lot of text.
I just assumed it's BS at first, but I also once nearly went unga bunga caveman against a computer from 1978. So I probably have a deeper understanding of how dumb computers can be.
Yeah, the average person is the idiot here, for something they never asked for, and for something they see no value in. Companies threw billions of dollars at this emerging technology. Many products like Google Search now have hallucinating, error-prone AI forced into them, with no way to opt out or use the (working) legacy version...
Well, if the chatbot learned anything from Dementia Don the racist rapist with 34 felonies that can't complete a coherent sentence, it learned that you never tell the truth.