
Some Off-The-Cuff Predictions about the Next AI Winter

It’s pretty much a given that we’re in for an AI winter once this bubble bursts - the only thing left to argue about at this point is exactly how everything will shake out. So, let’s beat this dead horse and make some random predictions before it inevitably gets sent to the glue factory. I’ve hardly got anything better to do.

The Death of “Value-Neutral” AI

Before this bubble, artificial intelligence was generally viewed as value-neutral - a tool, capable of good or evil, of bringing about a futuristic utopia or a Terminator-style apocalypse.

Between the large-scale art theft/plagiarism committed to build the datasets (through coercion, deception, ignoring the victims’ refusals, spamming new scrapers, et cetera), the abused and underpaid workers who classified the datasets, the myriad harms brought by the LLMs themselves (don’t get me fucking started), and the utterly ghoulish acts of the CEOs and AI bros involved (defending open theft, mocking their victims, cultural vandalism, denigrating human work, et cetera), that “value neutral” notion is dead and fucking buried.

Going forward, I expect artificial intelligence to be viewed not as a tool or a technology, but as an enemy (of sorts), built to perpetrate evil, and capable only of evil. As for its users (assuming it still has users), I expect them to be viewed as tech assholes, class traitors, incompetent dipshits, “prompt goblins” craving approval, and generally worthy only of mockery or condemnation.

Confidence: Near-certain. Ali Alkhatib’s “Defining AI” (which called for redefining AI as an ideological project to more effectively resist it) and Matthew Hughes’ “People Are The Point” (a manifesto which opposes AI on principle, calling it “an expression of contempt towards people”) have already provided crystal-clear examples of AI being treated as an evil unto itself, and the links in the previous paragraph already show use of AI being treated as a moral failing as well.

Side-Order of Tech Crash

It’s no secret that the tech industry has put a horrific amount of cash into this AI bubble - every major AI corp burns billions in VC cash with no end in sight, Microsoft performed mass layoffs to throw money at AI (mass layoffs of people making the company money, mind you), NVidia is blowing billions on AI money-burners (to keep making a killing off of selling shovels in this AI gold rush), the fucking works. And all in pursuit of a Hail Mary pass intended to keep the tech industry’s Endless Growth™ going for just a few years more.

(Going by David Gerard, previous AI springs were primarily funded by the Department of Defense, with winter setting in whenever their patience for burning cash ran out.)

With all the billions upon billions thrown into AI, and revenue from said AI being somewhere between Jack and Shit (barring the profits of shovel-sellers like NVidia, as mentioned before), this AI winter will likely kick off with a very wide-ranging tech crash that takes a chunk out of the entire industry, and causes some serious economic woes for good measure.

Confidence: Very high. Ed Zitron’s gone into punishing detail about the utterly fucked economics of basically everyone involved in this bubble, and I’d be here all day if I went over everything he’s written about. Picking just a single article, here’s him talking about OpenAI being a systemic risk to tech.

Scrapers Need Not Apply

Before the AI bubble, scrapers/crawlers were a normal, accepted part of the Internet ecosystem - there was no real incentive to block crawlers by default, since the vast majority were well-behaved and followed robots.txt, and search engine crawlers specifically were something you wanted to welcome, since those earned you traffic from search results.
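(For context, “following robots.txt” meant something like the sketch below: a polite crawler checks the site’s published rules before every fetch and walks away on a “no”. This is a minimal illustration using Python’s standard `urllib.robotparser` - the bot name and the rules are made up for the example.)

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt of the well-behaved era:
# everyone may crawl, but stay out of /private/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A polite crawler asks first, and respects the answer.
print(parser.can_fetch("ExampleBot", "https://example.com/blog/post"))  # True
print(parser.can_fetch("ExampleBot", "https://example.com/private/x"))  # False
```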

Come the AI bubble, this status quo was completely undermined, for three main reasons.

First, and most obviously, there’s the theft - far from having any benevolent purpose, the crawlers employed by AI corps are built to outright steal data off your blog/website, then use it to create a slop generator that claims your work as its own and/or tries to put you out of business, making AI crawlers a long-term existential threat to whatever endeavours you go into.

Second, AI Summary™ services (like Google’s) created through the aforementioned theft have utterly cratered search engine traffic, taking the main upside to allowing crawlers to scrape your site and turning it into a severe downside.

Last, but not least, are the AI crawlers themselves - thanks to how they DDoS whatever sites or FOSS infrastructure they decide to scrape, and the dirty tricks employed in said scraping (ignoring robots.txt, lying about their user agents, spamming new scrapers, using botnets, et cetera), doing anything short of blocking scrapers on sight is not just a long-term liability to you, but an immediate threat to your website as well.

As a response to these crawlers, a cottage industry of anti-scraping solutions has cropped up, providing a variety of ways to fight back. Between dedicated bot-blockers like Anubis, tarpits like Iocaine and Nepenthes, and media-poisoning tools like Glaze and Nightshade, scrapers of all stripes now face an ever-present risk of being blocked from data (especially high-quality data), or force-fed misleading data intended to waste their time and poison their datasets.
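(The simplest flavor of fighting back is a robots.txt stanza naming the AI-training crawlers, like the sketch below. The user-agent tokens shown are real ones documented by their operators as of this writing - GPTBot is OpenAI’s, ClaudeBot is Anthropic’s, CCBot is Common Crawl’s, and Google-Extended is Google’s AI-training opt-out token - but, as noted above, this only stops the crawlers that choose to obey it, which is exactly why the heavier tooling exists.)

```
# Deny known AI-training crawlers; allow everyone else.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```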

As the cherry on top of this anti-scraper shit sundae, the rise of generative AI has flooded the ‘Net with AI slop, which is difficult to identify, near-impossible to avoid, and outright useless (if not dangerous) to scrape. Unless you’re limiting yourself to sources made before 2022 (commonly known as low-background media), chances are you’re gonna have to deal with your dataset getting contaminated.

Given all this, I expect scraper activity in general (malicious or otherwise) to steeply drop during the AI winter, as all scrapers get treated as guilty (of AI fuckery) until proven innocent, and non-malicious scraper activity drops off as developers deem running them to be not worth the hassle.

Confidence: Moderate. I already know of one scraper-based project (wordfreq, to be specific) which shut down as a consequence of the AI bubble - I wouldn’t be shocked to see more cases crop up down the line.

Condemnation and Mockery

For the past two years, the AI bubble has been inescapable for the public at large.

On one front, they’ve spent the past two years being utterly inundated with AI hype of every stripe - AI bros hyping up AI as The Future™, wild and spurious claims of Incoming Superintelligence™, rigged tests and cheated benchmarks made directly by the AI corps, and relentless anthropomorphisation of spicy autocompletes and signal-shaped noise generators.

Especially anthropomorphisation - whether it be painting hallucinations as lies, presenting AI as deceptive or coercive, or pretending they can feel pain, there has been a horrendous amount of time and money spent on trying to deceive the public into believing LLMs are sentient, if not humanlike in their actions.

On another front, the public has borne witness to a wide variety of harms as a direct consequence of AI’s creation.

Local environmental catastrophe, global water loss and sky high emissions, widespread job loss, academic misconduct, nonstop hallucinations and misinformation, voice-cloning scams, programming disasters, damaged productivity, psychosis, outright suicide (on multiple occasions), the list goes on and on and on and on and on.

All of this has been thoroughly burned into the public consciousness over these past two or three years, ensuring AI will retain a major (and deeply negative) presence there, and ensuring AI as a concept will face widespread mockery and condemnation from the public, until long after the bubble bursts.

Confidence: Completely certain. I’m basically “predicting” something that’s already happening right now, and has a very good chance of continuing months, if not years, down the road.

Arguably, I’m being a bit conservative with this prediction - given the cultural rehabilitation of the Luddites, and the rise of a new Luddite movement in 2024, I could easily argue that the bubble’s started a full-blown resistance movement against the tech industry as a whole.
