Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi...
ChatGPT went from answering a simple math problem correctly 98% of the time to just 2% over the course of a few months.
It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to a previous, better version of it, right?
Here is my list of what I personally think is happening:
They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
They realized that the required computational power is too immense and are trying to make the model more efficient at the cost of accuracy.
They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it could make.
At the start I used ChatGPT to help me write really rote and boring code, but now it's not even useful for that. Half the stuff it sends me (very basic functions) LOOKS correct but doesn't return the right values, or the parameters are completely wrong, or something else absolutely critical is off.
It's a machine learning chat bot, not a calculator, and especially not "AI."
Its primary focus is trying to look like something a human might say. It isn't trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.
It doesn't need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.
This paper is pretty unbelievable to me in the literal sense. From a quick glance:
First of all, they couldn't even be bothered to check for simple spelling mistakes. Second, all they're doing is asking whether a number is prime or not and then extrapolating the results to be representative of solving math problems in general.
But most importantly, I don't believe for a second that the same model, with a few adjustments over a three-month period, would completely flip performance on any representative task. I suspect there's something seriously wrong with how they collect and evaluate the answers.
And finally, according to their own results, GPT-3.5 did significantly better at the second evaluation. So the title is a blatant misrepresentation.
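For reference, the task the critique describes ("is this number prime?") is trivial to check exactly, which is part of why it makes a convenient benchmark. A minimal sketch of such a ground-truth check using trial division (this is just an illustration of the task, not the paper's actual evaluation harness):

```python
def is_prime(n: int) -> bool:
    """Trial division: exact primality check, fine for benchmark-sized numbers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2  # only odd candidates need checking
    return True

print(is_prime(10007))  # 10007 is prime
print(is_prime(10001))  # 10001 = 73 * 137, so not prime
```

Since the ground truth is this cheap to compute, any scoring discrepancy would have to come from how the model's free-text answers were parsed, not from the labels themselves.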
I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, text written by AI becomes part of the training data. I wonder if that's what's happening here.
HMMMM. It's almost like it's not AI at all, but just a digital parrot. Who woulda thought?! /s
To it, everything is true and normal, because it understands nothing. Calling it "AI" is just a concession to what ignorant people think they know, and/or hype.
My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs to talk to each other. When those conversations were then used in the RL loop, we started seeing degradation similar to what's been in the news recently with regard to image generation models. I believe this is the paper that got everybody talking about it: https://arxiv.org/pdf/2307.01850.pdf
My (random user) opinion is that it's a mix of two things: the required computational power is too expensive and so got cut back, and the way they "fixed" the models so they can't be "jailbroken" somehow made them worse.
Can someone explain why they don't take an approach where things are somewhat compartmentalized? So you'd have an image-processing program, a math program, a music program, etc., and, like the human brain, there'd be cross talk but also certain parts dedicated to doing specific things.
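Something like this does exist in research (mixture-of-experts models, tool-calling systems). A toy sketch of the routing idea the comment describes, with module names and routing rules invented purely for illustration:

```python
# Hypothetical sketch: a router hands each query to a dedicated module,
# the way tool-calling systems do. All names here are made up.

def math_module(query: str) -> str:
    # A real system would call a calculator or CAS here, not a language model.
    expr = query.removeprefix("calc:")
    return str(eval(expr, {"__builtins__": {}}))  # toy only; never eval untrusted input

def chat_module(query: str) -> str:
    return f"[general language model would answer: {query!r}]"

ROUTES = {"calc:": math_module}  # crude keyword routing; real routers are learned

def route(query: str) -> str:
    for prefix, module in ROUTES.items():
        if query.startswith(prefix):
            return module(query)
    return chat_module(query)  # fall back to the general model

print(route("calc: 2**10"))         # dispatched to the math module
print(route("tell me about jazz"))  # falls through to the general model
```

The hard part in practice is the router itself: deciding which "brain region" should handle an ambiguous query is its own learning problem.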
I've asked it word problems before and it fails miserably, giving me insane answers that make no sense. For example, I was curious once how many stars you would expect to find in a region of the milky way with a radius of 650 light years, assuming an average of 4 light years per star. The first answer it gave me was like a trillion stars or something, and I asked it if that makes sense to it, a trillion stars in a subset of space known to only contain about a quarter of that number, and it gave me a wildly different answer. I asked it to check again and it gave me a third wildly different number.
It just occurred to me that one could purposely seed it with incorrect information to break its usefulness. I'm anti-AI so I would gladly do this. I might try it myself.