AI Is A Money Trap

As always with Zitron, grab a beverage before settling in.
Oh god, another AI hot take 🙄
Yes, OpenAI and Cursor both are waaaaayyyy overhyped & overvalued.
So were pets.com and yahoo.com back in 1999. But that didn't stop the FAANG companies from eventually reaching trillion-dollar valuations, because while there was breathless Internet hype, the Internet really was about to completely change the way the world works.
AI today is like the Internet in 1999.
People immediately knew how the Internet could help us, even during the dot-com bubble. Anyone who had used Google (or before that, Yahoo) would immediately fall in love with how it helped their lives. AI (LLMs)? Not so.
The Internet boom didn't have the weird you're-holding-it-wrong vibe either. Legitimate "it doesn't help with my use case" concerns all too often get answered with choruses of "but have you tried this week's model? Have you spent enough time playing with it and tweaking it to get something closer to what you want?" Don't admit limits to the tech, just keep hitting the gacha.
I've had people say I'm not approaching AI in "good faith". I say that you didn't need "good faith" to see that Lotus 1-2-3 was more flexible and faster than tallying up inventory on paper, or that AltaVista was faster than browsing a card catalog.
Perhaps you are unaware that AI has solved the Proteome. This was expected to be a 100 year project.
I've seen this argument way too often and it is completely pointless. The argument that this will succeed because something in the past succeeded is exactly the same as arguing it will fail because something in the past failed.
If you want to draw the conclusion that they're similar enough to use history in prediction, you'll have to show that they're similar and make a case for why those similarities are relevant.
I haven't seen anyone making this argument bother with this exercise, but I have seen people that actually look at the economics discuss why they're different animals.
There is also the tech itself.
I was at a startup in 1999 ... in Seattle. I actually ducked out because it was clear that about all they could do was arrange outings for the staff.
Ah, yes, Yahoo!, the elephant graveyard of good ideas.
Like Zitron says in the article, we're 3 years into the AI era and there is not a single actually profitable company. For comparison, the dot-com bubble was about 5-6 years from start to bust. It's all smoke and mirrors and sketchy accounting.
Even if/when the AI hype settles and perhaps the tech finds its true (profitable) calling, the tech itself is still insanely expensive to run and train. It’s going to boil down to Microsoft and/or X owning nuclear power plants, and everyone else renting usage from them.
People are making money in AI, but like always, it's the founders and C-suite, while the staff are kicked to the curb. It's all a shell game, and everyone who has integrated AI into their lives and company workflows is gonna get the rug pulled out from under them.
This is a little misleading, because obviously FAANG (and others) are all building AI systems, and are all profitable. There are also tons of companies applying machine learning to various areas that are doing well from a profitability standpoint (mostly B2B SaaS that are enhancing extant tools). This statement is really only true for the glut of "AI companies" that do nothing but produce LLMs to plug into stuff.
My personal take is that this is just revealing how disconnected from the tech industry VCs are, who are the ones buying into this hype and burning billions of dollars on (as you said) smoke and mirrors companies like Anthropic and OpenAI.
The thing is, companies like Google, Facebook, Amazon and Microsoft are already profitable, so AI could lose them huge amounts of money, with no real meaningful benefit to user retention or B2B sales, and the companies as a whole would still be profitable. It could be a huge money black hole, but they continue to chase it out of unjustified FOMO and in an attempt to keep share prices high through misplaced investor confidence.
Apple’s share price has taken a pretty big hit from the perception that they’re “falling behind” on AI, even if they’ve mostly just backed away from it because users didn’t like it when it was shoved in their face. Other companies are probably looking at that and saying “hey, we’d rather keep the stock market happy and our share prices high rather than stop wasting money on this”.
I should reframe what I said: there is not a single profitable AI-focused company. There are tons of already profitable companies that are now deeply embedding AI into everything they do.
The FAANG companies that are in on the LLM hype are still lighting money on fire in their LLM endeavors, so I fail to see how the point that they may be otherwise profitable is relevant.
This is an interesting take in that only doing one thing but doing it well has been, historically, how businesses thrived. This vertical integration thing and startups looking to be bought out instead of trying to make it on their own (obviously, VCs play a role in this) has led to jacks of all trades.
I don't think it's going to come down to these absurd datacentres. We're only a few years off from platform-agnostic local inference at mass-market prices. Could I get a 5090? Yes. Legally? No.
I have to think that most people won't want to do local training.
It's like Gentoo Linux. Yeah, you can compile everything with the exact optimal set of options for your kit, but at huge inefficiency when most use cases might be mostly served by two or three pre built options.
If you're just running pre-made models, plenty of them will run on a 6900XT or whatever.
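A rough way to see why that's plausible: weight memory for a quantized model is roughly parameter count times bits per weight, plus some headroom for the KV cache and activations. This is a back-of-envelope sketch, not a benchmark; the 1.2x overhead factor is an assumption, and real usage varies with context length and runtime.

```python
def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough memory estimate (GB) for running a quantized model.

    params_b: parameter count in billions (e.g. 7 for a 7B model)
    bits: quantization width per weight (e.g. 4 for 4-bit quants)
    overhead: assumed multiplier for KV cache / activations (rough guess)
    """
    weight_bytes = params_b * 1e9 * bits / 8  # bytes for the weights alone
    return weight_bytes / 1e9 * overhead      # convert to GB and add headroom

# A 7B model at 4-bit quantization needs on the order of 4 GB,
# which fits easily in a 16 GB card like a 6900XT:
print(round(vram_gb(7, 4), 1))

# A 70B model at the same quantization does not:
print(round(vram_gb(70, 4), 1))
```

By this estimate, the consumer-card ceiling is quantization width times model size, which is why the pre-made 4-bit builds of small and mid-size models are the ones people actually run locally.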
What makes you confident in that? What will change?