I'm old enough to remember the dotcom bubble. Even at my young age back then, I found it easy to spot many of the "bubbly" aspects of it. Yet, as a nerd, I was very impressed by the internet itself and showed a bit of youthful obsession with it (while many of my same-aged peers were still hesitant to embrace it, to be honest).
Now with LLMs/generative AI, I simply find myself unable to identify any potential that is even remotely similar to the internet. Of course, it is easy to argue that today, I am simply too old to embrace new tech or whatever. What strikes me, however, is that some of the worst LLM hypemongers I know are people my age (or older) who missed out on the early internet boom and somehow never seemed to be able to get over that fact.
As I mentioned before, some spammers and scammers might actually need the tech to remain competitive in their markets from now on, I guess. And I think they might be the only ones (except for a few addicts) who would either be willing to pay full price or start running their own slop generators locally.
This is pretty much the only reason I could imagine why "AI" (at least in its current form) might be "here to stay".
On the other hand, maybe the public will eventually become so saturated with AI slop that not even criminals will be able to use it to con their victims anymore.
I don't understand. Everybody keeps telling me that LLMs are easily capable of replacing pretty much every software developer on this planet. And now they complain that $71 a day (or even $200 a month) is too much for such amazing tech?
/s
In my experience, copy that "sells" must evoke the impression of being unique in some way, while also conforming to certain established standards. After all, if the copy reads like something you could read anywhere else, how could the product be any different from all the competing products? Why should you pay any attention to it at all?
This demand for conformity paired with uniqueness and originality requires a balancing act that many people who are not familiar with copywriting might not understand at all. I think to some extent, LLMs are capable of creating the impression of conformity that clients expect from copywriters, but they tend to fail at the "uniqueness" part.
Maybe they'll figure out a way to squeeze suckers out of their money in order to keep the charade going.
I believe that without access to generative AI, spammers and scammers wouldn't be able to successfully compete in their respective markets anymore. So at the very least, the AI companies got this going for them, I guess. This might require their sales reps to mingle in somewhat peculiar circles, but who cares?
It's almost as if teachers were grading their students' tests by rolling dice, and then the students tried manipulating the dice (because it was their only shot at getting better grades), and the teachers got mad about that.
This is, of course, a fairly blatant attempt at cheating. On the other hand: Could authors ever expect a review that's even remotely fair if reviewers outsource their task to a BS bot? In a sense, this is just manipulating a process that would not have been fair either way.
To me, the idea of using market power as a key argument here seems quite convincing, because if there were relevant competition in the search engine market, Google would probably have had much more difficulty imposing this slop on all users.
I disagree with the last part of this post, though (the idea that lawyers, doctors, firefighters etc. are inevitably going to be replaced with AI as well, whether we want it or not). I think this is precisely what AI grifters would want us to believe, because if they could somehow force everyone in every part of society to pay for their slop, this would keep stock prices up. So far, however, AI has mainly been shoved into our lives by a few oligopolistic tech companies (and some VC-funded startups), and I think the main purpose here is to create the illusion (!) of inevitability because that is what investors want.
Yes, they will create security problems anyway, but maybe, just maybe, users won't copy-paste sensitive business documents into third-party web pages?
I can see that. It becomes kind of a protection racket: Pay our subscription fees, or data breaches are going to befall you, and you will only have yourself (and your chatbot-addicted employees) to blame.
reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’
True (even if this answer sounds like something a chatbot would generate). I have come across a few human slop generators/bots myself. However, making up entire titles of books or papers appears to be a specialty of AI. Humans would not normally go to this trouble, I believe. They would either steal text directly from their sources (without proper attribution) or "quote" existing works without having read them.
So what kind of story can you tell? A movie that perhaps has a lot of dream sequences? Or a drug trip?
Maybe something like time travel, because then it might be okay if the protagonists kept changing their appearance to some degree. But even then, there wouldn't be enough consistency, I guess.
This has become a thought-terminating cliché all on its own: "They are only criticizing it because it is so much smarter than they are and they are afraid of getting replaced."
I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.
I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others in the most aggressive way. I don't know whether they found them to be useful for their own purposes (and therefore assume this must apply to everyone else as well) or whether they are just pushing LLMs because this is what management expects them to do.
First, we are providing legal advice to businesses, not individuals, which means that the questions we are dealing with tend to be even more complex and varied.
Additionally, I am a former professional writer myself (not in English, of course, but in my native language). Yet, even I find myself often using complicated language when dealing with legal issues, because matters tend to be very nuanced. "Dumbing down" something without understanding it very, very well creates a huge risk of getting it wrong.
There are, of course, people who are good at expressing legal information in a layperson's way, but these people have usually studied their topic very intensively beforehand. If a chatbot explains something in "simple" language, its output usually contains serious errors that are very easy for experts to spot, because the chatbot operates on the basis of stochastic rules and does not understand its subject at all.
Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they’re ahead
Exactly. It is also a new technology that requires far fewer skills to use than previous new technologies. The skills are needed to critically scrutinize the output - which in this case means that the less lazy people are the ones more reluctant to accept the technology.
On top of this, AI fans are being talked into believing that their prompting as such is a special “skill”.
That's why I find it problematic, this narrative that we should resist working with LLMs because doing so would train them and enable them to replace us. That would require LLMs to be capable of replacing us in the first place. I don't believe in this (except in very limited domains such as professional spam). This type of AI is problematic because its abilities are completely oversold (and because it robs us of our time, wastes a lot of power, and pollutes the entire internet with slop), not because it is "smart" in any meaningful way.