Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds
fortune.com
The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.
3 comments
A system that has no idea whether what it is saying is true or false, or what true or false even mean, is not very consistent at answering things truthfully?
Wait for the next version, which will be trained on data that includes GPT-generated word salad.
No, that is not the thesis of this story. If I'm reading the headline correctly, its rate of being correct has shifted from one stable distribution to another.