ChatGPT 5's PhD-level intelligence is breathtaking \s
I saw this on Reddit and thought it was a joke, but it's not. GPT-5 mini (and maybe sometimes GPT-5) gives this answer. You can check it yourself, or see somebody else's similar conversation here: https://chatgpt.com/share/689a4f6f-18a4-8013-bc2e-84d11c763a99
For those just joining us: The problem isn't that it doesn't know. The problem is that it confidently asserts a falsehood.
...And that people take the bait and anthropomorphize it, believing it is "reasoning" and "thinking".
It seems like people want to believe it because it makes the world more exciting and sci-fi for them. Even people who don't find GPT personally useful get carried away when talking about the geopolitical race to develop AGI first.
And I sort of understand why, because the alternative (and, I think, the real explanation) is so depressing: namely, that we are wasting all this money, energy, and attention on fool's gold.
If these things could not be induced to confidently lie, they would not be the target of billions of dollars of investments.
I sincerely don't think this is true, but it's a nice narrative that fits well with one of Lemmy's. It would still be worth the same or more if it hallucinated minimally, because it would better match one of its ideal business applications: replacing human labor at a fraction of the cost. Unfortunately, minimal hallucination is only a convenient side effect for the many who stand to benefit from creating propaganda and false information in bulk.
And yet there are still people in the thread claiming "oh, ChatGPT's knowledge cuts off at the end of 2024; this prompt is using ChatGPT wrong," completely missing your point.
If ChatGPT doesn't know something, it just lies about it, all while being passed off as doctorate-level intelligence.
Inb4 defenders say "an AI can't lie, it just asserts falsehoods as truth because it's having a scary dream/hallucination," as if semantics will save the day.