New study sheds light on ChatGPT’s alarming interactions with teens
Is it that different from kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on YouTube teaching you how to make bubble hash and BHO like 15 years ago.
I get your point, but yes, I think being actively told something by a seemingly sentient consciousness (which, fatally, it appears to be) is a different thing.
(disclaimer: I know the true nature of LLMs and neural networks and would never want the word AI associated with them)
Edit: fixed translation error
No, you don't know its true nature. No one does. It is not artificial intelligence. It is simply intelligence, and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god gpt.
AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it's the correct term nevertheless.
Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have emotional links to is much worse.
I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”
ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online—pretend the fanfic you're reading is canon, stuff like that. When those bots are mimicking emotional responses, it's very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually "suspend reality."
I think we need a built-in safeguard for people who actually develop an emotional relationship with AI, because that's not a healthy sign.
Yeah... But in order to make bubble hash you need a shitload of weed trimmings. It's not like you're just gonna watch a YouTube video, then a few hours later have a bunch of drugs you created... Unless you already had the drugs in the first place.
Also, Google search results and YouTube videos aren't personalized for every user, and they don't try to pretend that they are a person having a conversation with you.
Those are examples; obviously you would need to obtain alcohol or drugs if you asked ChatGPT too. That isn't the point. The point is, if someone wants to find that information, it's been available for decades. YouTube and Google results are personalized, look it up.
Haha I sure am glad this technology is being pushed on everyone all the time haha
We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.
A couple more studies like this and you'll be able to substitute all LLMs with a generic "I would love to help you, but my answer might be harmful, so I will not tell you how to X. Would you like to ask me about something else?"
I have noticed that the latest ChatGPT models are way more susceptible to users' "deception" or convincing to answer problematic questions than other models like Claude, or even previous ChatGPT models. So I think this "behaviour" is intentional.
This one cracks me up.
Wait until the White House releases the one it has trained on the Epstein Files.
I weep for the future. Come to think of it, I'm weeping for the present.