ChatGPT has meltdown and starts sending alarming messages to users
AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
The development of LLMs is possibly becoming self-defeating, because the training data is being filled not just with human garbage but also with AI garbage from previous, cruder LLMs.
We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.
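To make that feedback loop concrete, here's a toy simulation (a minimal sketch, assuming numpy; the Gaussian "model" and the keep-the-most-probable-half step are invented stand-ins for scraping AI output back into the corpus, not how any real lab trains):

```python
import numpy as np

rng = np.random.default_rng(0)
pool = rng.normal(0.0, 1.0, size=10_000)   # generation 0: human-written "data"

for gen in range(1, 6):
    mu, sigma = pool.mean(), pool.std()
    samples = rng.normal(mu, sigma, size=10_000)   # the model's output
    # Models over-produce high-probability text, so the AI garbage that gets
    # scraped back in is concentrated around the mode: keep the most probable
    # half of the samples as a crude stand-in for that bias.
    pool = samples[np.abs(samples - mu) < 0.67 * sigma]
    print(f"generation {gen}: std of training pool = {pool.std():.3f}")
```

Each generation trains on the previous generation's most probable output, and the spread of the pool collapses within a handful of iterations.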
I am happy to report I did my part in feeding it garbage. I only ever speak to ChatGPT through a pirate translator, and I only ever ask it for Harry Potter fan fic. Pay me if you want me to train it meaningfully.
It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.
As you stated, an MLAI can only be as good as the data it was trained on, and is usually far worse. The popularity and application of MLAIs built with questionable practices scares me; then again, at least their fuckups will keep me employed and likely busier than ever.
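For illustration, a crude sketch of the sourcing and sanitizing hygiene being skipped (everything here is invented for the example; real pipelines add fuzzy deduplication, language ID, and quality classifiers on top):

```python
import re

def sanitize(docs):
    """Crude training-data hygiene: strip markup, normalise, drop junk, dedupe."""
    seen, clean = set(), []
    for doc in docs:
        text = re.sub(r"<[^>]+>", " ", doc)        # strip leftover HTML tags
        text = re.sub(r"\s+", " ", text).strip()   # normalise whitespace
        if len(text) < 20:                         # drop tiny fragments
            continue
        key = text.lower()
        if key in seen:                            # exact-duplicate filter
            continue
        seen.add(key)
        clean.append(text)
    return clean

print(sanitize(["<p>Hello   world, this is a real sentence.</p>",
                "hello world, this is a real sentence.",
                "junk"]))
```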
We're trailer park trash with no higher education, believe in ghosts, angels and gods in the sky, refuse to ever believe we could be wrong .... and now we've just had a baby with no one to help us raise it.
We're going to raise a highly intelligent psychopath
I've found the sexism on Reddit to be on par with the racism. Goodness help you if you're a female of color, unless you've been working the same job for multiple decades, or don't want kids, then you'll be an inspiration to that community.
Reddit is, alas, not the only forum exhibiting such hate.
OpenAI definitely does not need to pay to scrape reddit. They are probably the world's most sophisticated web scraping company, disguised as an AI startup
We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what you’ll say next.
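To see what "predict what you'll say next" means in its most stripped-down form, here's a toy bigram predictor (hypothetical corpus; real LLMs do this with neural networks over tokens rather than word counts, but the job description is the same):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": its entire job is predicting the next word
# from the word before it, nothing more.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))   # -> 'cat'
```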
While you're not wrong, how is this different to many existing techniques and compositional models that are used practically everywhere in tech?
Similarly, it's probably safe to assume that the LLM's prediction isn't the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In this instance, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling it "you were just discussing this" or "you can access the weather from here", is that all that different from "intelligence"? (A toy sketch of that orchestration pattern follows below.)
At a certain point, it's arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don't really know what true intelligence is.
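For what it's worth, the orchestrator idea above fits in a few lines (every name here, get_weather, recall_topic, the keyword routing, is invented for illustration and not any real framework's API):

```python
# The "model" only proposes what to say next; auxiliary services supply
# facts it cannot predict on its own.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"            # stand-in for a real weather service

def recall_topic() -> str:
    return "you were just discussing sampling bugs"   # stand-in memory store

TOOLS = {"weather": get_weather, "memory": recall_topic}

def orchestrate(request: str) -> str:
    # A real system would let the LLM choose the tool; keyword routing
    # keeps this sketch self-contained.
    if "weather" in request:
        return TOOLS["weather"]("Berlin")
    if "discussing" in request or "context" in request:
        return TOOLS["memory"]()
    return "model free-generates an answer"

print(orchestrate("what's the weather?"))
print(orchestrate("what were we discussing?"))
```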
how is this different to many existing techniques and compositional models that are used practically everywhere in tech?
It’s not. An LLM is just a statistical model. Nothing special about it, and nothing different from what we’ve already been doing for a while. This only validates my statement that we call just about anything “AI” these days.
We don’t even know what true intelligence is, yet we are quick to claim that this is “AI”. There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce. Anyone who thinks otherwise is just fooling themselves.
It’s a buzzword to get people riled up. It’s completely disingenuous.
The worrisome thing is that LLMs are being given control over more and more actions. With traditional programming, sure, there are bugs, but at least they're consistent: the context may make a bug hard to track down, but at the end of the day the code is being interpreted by the processor exactly as it was written. LLMs can just go haywire for reasons that are impossible to diagnose. Deploying them safely in utilities where they control external systems will require a lot of extra, non-LLM safeguards, and I do not see those getting added nearly often enough, which is concerning.
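A minimal sketch of the kind of non-LLM safeguard that comment is asking for (the action names and limits are invented): the model only proposes an action, and plain deterministic code decides whether it runs.

```python
# Hard whitelist and bounds check sitting between the LLM and the world.
ALLOWED_ACTIONS = {"set_thermostat", "send_report"}
LIMITS = {"set_thermostat": (10.0, 30.0)}   # valid temperature range, in Celsius

def execute(proposal: dict) -> str:
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:                 # reject unknown actions
        return f"rejected: {action!r} is not permitted"
    if action in LIMITS:
        lo, hi = LIMITS[action]
        value = proposal.get("value")
        if not isinstance(value, (int, float)) or not lo <= value <= hi:
            return f"rejected: {value!r} outside {lo}-{hi}"
    return f"executed: {proposal}"                    # only now touch hardware

# A model gone haywire can emit anything; the guard, not the model, decides.
print(execute({"action": "set_thermostat", "value": 22}))
print(execute({"action": "set_thermostat", "value": 9000}))
print(execute({"action": "launch_missiles"}))
```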
Even if we don't know with certainty what intelligence is, it's valid to say that some things aren't intelligent. For example, a rock isn't intelligent; I think everyone would agree with that.
Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.
An LLM will give you much better results in many of the areas currently used to evaluate human intelligence.
For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I'm not aware of the "intelligent" process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.
If there were an LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?
If you look at efficacy, though, on academic tests or factual questions, and compare LLMs to asking a random person rather than to a computer or calculator that we expect always to be 'right', would LLMs be comparable or better? Surely someone has data on that.
The person who commented below kinda has a point. While I agree that there's nothing special about LLMs, an argument can be made that consciousness (or maybe, more precisely, ego?) is itself an emergent mechanism that works to keep itself in predictable patterns in order to perpetuate survival.
The point being that the ability to predict outcomes is a cornerstone of intelligence as we currently understand it (socially, emotionally, and scientifically speaking).
If you were to say that LLMs are unintelligent because they operate to provide the most likely, and therefore most predictable, outcome, then I'd agree completely.
The ability to make predictions is not sufficient for evidence of consciousness. Practically anything that's alive can do that to one degree or another.
In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.
Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.
On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.
“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.
It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.
Towards the end of last year, users complained that the system had become lazy and sassy, refusing to answer questions.
The original article contains 519 words, the summary contains 150 words. Saved 71%. I'm a bot and I'm open source!
“It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”
Wow, that sounds very much like a Phil Collins tune. Just add "Oh Lord" and people will probably say it's deep!
But it's a ChatGPT answer to the question "What is a computer?"
Here’s an idea: let’s all panic and make wild-ass assumptions with 0 data lmao.
The article doesn’t even state what the settings were, nor does it try to recreate anything.
The whole fucking article is he-said-she-said bullshit.
If I set the top_p setting to 0.2 I too can make the model say wild psychotic shit.
If I set the temp to a high setting I too can make the model seem delusional but still understandable.
With a system-level prompt I too can make the model act and speak however I want (for the most part). (See the sketch below for what the first two settings actually do to the output distribution.)
More bullshit articles designed to keep regular people away from newly formed power. Not gonna let these people try and scare y’all away. Stay curious.
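For anyone who wants to see what those knobs do, here's a minimal sketch of temperature and top_p over a toy five-word vocabulary (invented logits; numpy assumed; a simplified stand-in for a real inference stack):

```python
import numpy as np

logits = np.array([4.0, 3.5, 1.0, 0.5, -1.0])
vocab = ["the", "a", "banana", "quantum", "yarr"]

def sample_dist(temperature=1.0, top_p=1.0):
    # Temperature rescales the logits before the softmax.
    p = np.exp(logits / temperature)
    p /= p.sum()
    # Nucleus (top_p) truncation: walk tokens from most to least likely and
    # keep each one whose preceding cumulative mass is still below top_p.
    order = np.argsort(p)[::-1]
    prior_mass = np.cumsum(p[order]) - p[order]
    mask = np.zeros_like(p, dtype=bool)
    mask[order[prior_mass < top_p]] = True
    p = np.where(mask, p, 0.0)
    return p / p.sum()

print("temp=1.0:", dict(zip(vocab, sample_dist(temperature=1.0).round(3))))
print("temp=5.0:", dict(zip(vocab, sample_dist(temperature=5.0).round(3))))
print("top_p=0.2:", dict(zip(vocab, sample_dist(top_p=0.2).round(3))))
```

High temperature flattens the distribution toward uniform babble; a tiny top_p throws away everything but the head. Real inference stacks apply the same two transforms over vocabularies of roughly 100k tokens.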
Bear in mind that, depending on your instance, you won’t see the same comments as others do.
With that said, top comment here for me is talking about how this was because they’re training their models on user input.
As if the leaders in fucking AI development don’t know what they’re doing, especially on a concept that’s covered in every intro-level AI course in college. 🙄
Then again, not everyone went to college, I guess, and some would rather make armchair assumptions and pray at the altar of Google, all while complaining about how AI is ruining everything even though Google was one of the first to do shit like this with their search engine for “better results”. (Not directed at you, of course; thanks for being respectful and just asking a simple question rather than making assumptions.)
I mean, OpenAI themselves acknowledged there was an issue and said they were working on it:
“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”. “We’re continuing to monitor the situation,” the latest update read.