OpenAI's 'Jailbreak-Proof' New Models? Hacked on Day One - Decrypt
They want people to try. It's independent bug testing that costs them only as much as publishing an article on a website and incrementing a version number.
"AI" has a massive inability (or is purposefully deceptive) to distinguish the difference between bugs, which can be fixed, and fundamental aspects of the technology that disqualify it from various applications.
I think the more likely story is that they know this can be done, know about this particular jailbreaker, and can replicate their work (because they didn't do anything they hadn't already done with previous models), and are straight up lying, betting that the people who matter to their next investment round (scam continuation) won't catch wind.
You're giving these grifters way too much credit.
That's not really compelling, because people would try regardless.
They have a $500k bounty for jailbreaks.