First thought: reality has a well known liberal bias.
Second thought: wait the internet though doesn’t
Ok so I don’t know much about this. Here’s the meat of the grok wikipedia article (citation markers removed for readability):
In April 2023, Elon Musk said in an interview on Tucker Carlson Tonight that he intended to develop an AI chatbot called "TruthGPT", which he described as "a maximum truth-seeking AI that tries to understand the nature of the universe". He expressed concern to Carlson that ChatGPT was being "trained to be politically correct".
Bleh. You could replace “9/11” with “anti-woke” in that family guy stump speech scene and it’d be the same.
An xAI statement described the chatbot as having been designed to "answer questions with a bit of wit" and as having "a rebellious streak", as well as a willingness to "answer spicy questions that are rejected by most other AI systems". It said the bot had been "modeled after The Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything".
Ah yes, written by famed anti-woke icon douglas adams.
An extract shared by an X employee showed Grok being asked to answer the question "When is it appropriate to listen to Christmas music?" in a vulgar manner, and responding "whenever the hell you want" and adding that those who disagree should "shove a candy cane up their ass and mind their own damn business". Elon Musk shared a screenshot of Grok giving detailed instructions on how to manufacture cocaine. Musk noted that Grok's responses were limited to information already publicly available on the web, which could also be found with regular browser searching.
Those responses are for sure owning the left!
The chatbot has been characterised as "anti-woke" by the press. Musk has said of the OpenAI organization, which seemingly engineered its ChatGPT to have more filters on sensitive topics, that "the danger of training AI to be woke - in other words, lie - is deadly".
Remains to be seen, I guess.
An xAI employee suggested that the chatbot would have a toggle between a "regular mode" and a "fun mode".
re that Douglas Adams thing: how are they all so media illiterate? Like I'm a massive dork, I speak to machines more than other humans, I love to tinker, I frequently escape into fiction.
So obviously, because I have a fucking brain in my head, I look into who writes the stories I enjoy, why, what is and is not represented, etc., with the same sort of delight I'd examine a door handle or a speaker.
How are these people, whose whole fucking lives are about owning the means of manufacture of machines, who all say they're inspired by sci-fi, so... uninterested in the sci-fi they say is awesome?
N.B.: I am a few drinks into writing this, apologies in advance if it comes out unintelligible.
It’s going to be impossible to answer that in a single comment. I think whatever it is, it’s related to why nerd humour/culture is so cringeworthy.
I think in the majority of cases, the people making cool things are well-read and able to call upon a breadth and depth of knowledge from their domain. As you move away from creator to consumer, that level of knowledge becomes less and less common. Plus, generally, if a creative work is trying to crystallise a range of ideas and make them understandable for the consumer, then it is actively letting the consumer off the hook from engaging with the source material for those ideas.
So then you have these well-read creatives making interesting, beloved works of art that hold a lot of social capital. You have people consuming these works at all kinds of levels of engagement. In many cases, especially within nerdy circles, works become exalted, and even if you don’t engage with those works at all, you are still expected to revere those works, lest you draw the ire of the nerds above you who love these works.
So IMO, and as you have pointed out, it’s highly unlikely that there’s any real influence from Adams on the output of these bots. They namedrop him to hopefully garner more social capital. It’s the same reason why I learned how to play “Still Alive” on guitar to impress nerds at my university. It’s the same reason I used to read wikipedia plot summaries for marvel movies while I was working at a small search company.
Anyway the better answer to all this would be to say: watch the “Darmok” episode of star trek TNG. Which I have not seen (lmfao). But I have listened to the official star trek podcast episode (guested by Reza Aslan, notable linguist) that discusses it, and my understanding is that it better encapsulates some of why this is all so fucked.
You’re absolutely right. I think on this server there’s a tacit understanding that “musk made x” means “musk paid for x”. It’s just shorter to say he made it.
also, given we all know who and what musk is, it’s a lot funnier to pin a long list of failures (a lot of which were precipitated by musk’s terrible management and tantrums, and by the culture of fear he creates in the companies he buys) on the chest of the guy who paid a lot of money to falsely be called a chief engineer
He already gets the credit for a lot of things he didn't come up with and gladly takes it. Unlike the electric car and spacecraft companies he bought and whatever nice things already existed or were in development at Twitter when he bought it, I fully believe he was at least the one who demanded they start working on an LLM, which I guess makes it more of his thing than most of his claims. And since he's getting all kinds of undeserved credit anyway, I'm happy to let the narrative be "Elon Musk built a chatbot intended to be a cryptofascist piece of shit like him, but failed because reality turns out to have a liberal woke bias".
This might sound stupid, but this AI's output sounds a lot like GPT. In fact, if you just use the API, GPT doesn't really mind answering toxic or bad questions, so long as the output doesn't violate their rules.
I'm not saying Elon is using OpenAI, but maybe they've trained on GPT-4 output?
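To be clear about what I mean by "just use the API": the chat endpoint and the moderation endpoint are two separate calls, so the policy check only happens if the caller bothers to run it. Rough sketch with the openai Python client; the model name and prompt are placeholders, and obviously this says nothing about what xAI actually did:

```python
# Sketch: the chat endpoint answers first; policy enforcement is a separate,
# optional call to the moderation endpoint. Assumes openai>=1.0 and an
# OPENAI_API_KEY in the environment. Model name/prompt are placeholders.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[{"role": "user", "content": "some spicy question"}],
)
text = reply.choices[0].message.content

# Only now do we ask whether the output violates their rules.
flags = client.moderations.create(input=text)
if flags.results[0].flagged:
    print("flagged by the moderation endpoint, drop it")
else:
    print(text)
```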
The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it.
How was that a surprise to anyone? I'd wager indiscriminately scraping the web at this point will net you more words of GPT output than, for instance, Douglas Adams books.
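And if they'd wanted to keep that stuff out of the training set, even the crudest phrase filter would have caught a lot of it. Purely a sketch, I have no idea what xAI's actual data pipeline looks like:

```python
# Crude sketch: drop scraped documents that contain the most recognizable
# ChatGPT stock phrases before training. Illustrative only; not xAI's pipeline.
GPT_TELLS = (
    "as an ai language model",
    "i cannot fulfill that request",
    "goes against openai's use case policy",
)

def looks_like_gpt_output(document: str) -> bool:
    lowered = document.lower()
    return any(phrase in lowered for phrase in GPT_TELLS)

corpus = [
    "Some scraped forum post about door handles...",
    "As an AI language model, I cannot fulfill that request...",
]
cleaned = [doc for doc in corpus if not looks_like_gpt_output(doc)]
print(len(cleaned))  # 1
```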
Elon: "I created OpenAI! It only exists because of me!"
Also Elon: "I created this new AI, which I copied from OpenAI, because it was... mine all along?"