Microsoft and Alphabet both reported mostly strong results Tuesday, but the disparate reactions from investors showed that Wall Street only cares about AI now.
As long as LLMs are prone to hallucinating and there is no way to audit how they derive results (e.g., to verify accuracy), relying on them will hit roadblocks and limitations. Once they solve this issue, though, that will be a whole different story, I agree. As for other AIs such as image or video generation, I don't have enough experience to tell...
It's not quite blockchain. It is incredibly useful in a broad range of applications, and it has genuinely changed how millions of people work. Sure, it's not the magic bullet Wall Street thinks it is, but my work has been improved immensely through the use of generative AI, especially with uniquely challenging software problems and niche questions.
I think it'll be similar to VR. Extremely useful and interesting, but over-hyped and not going to penetrate our lives as much as most people think.
My mom never used VR, but she happily talks to GPT-4. From that perspective, I think mindshare in the broader population will be significantly higher than VR (even if it doesn't live up to the VC/Wall Street hype machine).
Not in total. But I also don't think it is the kingmaker these investors are making it out to be. Just like crypto, it is a tool that can enhance and improve things when applied to the right problems in the right ways. It is not the magic bullet, the easy button, that the investor snake currently eating its own tail in speculation would have everyone believe it is.
Those rich fuckers are putting a lot of bets on AI. They're keeping us busy, overworked and struggling to pay to exist, while they perfect our replacement so that they can be rid of us.
Getting rid of us doesn't make sense. We circulate the money. They need us to generate the things that we then buy. Without that they'd need to actually spend money and they won't.
Blows my mind that hard-left people would resist AI, since it both allows far more opportunity for direct control of the means of production (as it is a time/scale equalizer) and is the best path toward UBI.
UBI sounds silly to some, but that changes when there's a hundred million people out of work. Especially since AI will carve out from the middle, not from the bottom. That's a LOT of reliably-voting spending power eroding.
AI is another one of those tech fads that'll fade away, but not like blockchain and NFTs did. There's money to be made because people are excited, but its use cases are much more complicated.
AI is a fantastic tool for creators, including programmers with Copilot. But it isn't a full-blown replacement for workers quite yet. A lot of capitalists are really excited to slash their workforce in half, sure, but they're utterly ignoring the true potential of AI. It's a tool, not a replacement (yet).
If they really wanted to slash their workforce, middle management could have been automated over a decade ago. They don't want to fire their friends and their kids.
Those capitalists also do not understand that if these tools can replace workers, then everyone can, through FOSS projects, own these tools and put them to work for themselves.
When did we "have the chance"? You seriously believe Occupy had the "chance" to destroy Wall Street? I'm down with the spirit of it but please tell me you don't genuinely believe this was possible.
Putting aside the ethics of destroying the thing that most people's retirement is tied to without any workable alternative in place (among countless other negative consequences for the average person), you understand the buildings aren't horcruxes, right?
I'm pretty sure the thousands of deaths of major traders would have had some impact. You know, what with all of the fire and burning and melting of human flesh.
But nah, let's just do some chanting bro. Totally works, right?
We are in the infancy of generative AI. For you it has already replaced an entire sector of the workforce: artists. For others it has replaced them wholesale. For others it just assists. Hollywood was trying to legally own actors' voices and likenesses to replace them.
This technology is not standing still. It will be great at a lot of things in the future. It could be next month. It could be next year. It could be in a decade. Whenever it arrives for your job it will be cheaper than you. There will be no going backward on this technology.
I totally agree that we're just scratching the surface of what AI can do. But I don't think it's what Wall Street thinks it is. It's not too terribly difficult to spin up an LLM, which means it's going to be difficult to set up chokepoints to extract rent.
Though I bet they'll get the government's help with that by regulating AI for "safety." The big guys won't have a problem but anyone else will have illegal programs running.
Especially where image generation is concerned, the infancy part can't be overstated. It's growing so, so fast. A year ago, people were dismissing AI art with "you can always tell", it largely couldn't do hands, and text was right out. But current cutting-edge models can semi-reliably generate works indistinguishable from human-made art, needing only some fairly trivial manual curation to pick the best output. There are also some models that are now able to do basic text. Just comparing a couple of years' worth of progress side by side makes it very clear that it's advancing rapidly, and there are no signs yet that it's plateaued.
The big barrier to image generation, though, is profit. The images it creates are useful, but the current understanding is that they can't be copyrighted, and there are ongoing legal challenges that make it very murky. I don't think these companies can stay in business off regular people who'll pay for some tokens to generate art. They need to be usable by commercial companies, and the legal issues will scare many of those away, at least for now.
I do think there's some use for AI in its current form (especially AI art as a tool for developing other works, like movies and video games), but I find it bizarre just how much investors value the current form of AI.
As cool as I find AI art, I'm not yet sure about its commercial viability, given the serious legal issues it's facing. So why do investors, who are supposed to care about commercial viability, value it so much?
And for generative text, I have an even more negative stance. My understanding is that the cost to train and run those AIs is ludicrous. Sure, some companies will use it to make blog spam articles or replace their basic support staff with it, but is that really gonna make it profitable?
And I emphasized "current form" because the current AI is basically just predictive text. It's severely limited, and this is extremely evident if you ask it even basic math problems. It's not capable of actual intelligence, which is what has me very skeptical of it in the long term. Maybe these companies will come up with a new, better form of AI. Or maybe they won't. But it doesn't seem like "just increase the size of the model" is sustainable, nor, frankly, will it get us closer to strong(ish?) AI.
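To make the "predictive text" point concrete, here's a deliberately crude toy sketch of my own (nothing like how GPT-4 is actually built; a real LLM uses a vastly bigger statistical model, but the core loop is still "predict the next token from the data"):

```python
# Toy "predictive text": pick the next word based only on which word
# followed the previous two words in the training text.
# Made-up mini corpus for illustration; real LLMs are enormously bigger.
from collections import Counter, defaultdict

corpus = "two plus two is four . four plus four is eight .".split()

# Count which word follows each pair of words.
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict_next(a, b):
    # Most frequent continuation -- note that no arithmetic happens here.
    return following[(a, b)].most_common(1)[0][0]

# "Answer" a prompt one token at a time.
output = ["two", "plus"]
while output[-1] != ".":
    output.append(predict_next(output[-2], output[-1]))

print(" ".join(output))  # two plus two is four .
# It gets "two plus two" right only because that exact phrase was in its
# data; prompt it with "three plus five" and there's nothing to predict from.
```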
That's a lot of people not on the payroll anymore. No health insurance costs, no vacations. Just using the software.
Think of all the analytics jobs that AI can replace. You ever spend a day or two making a spreadsheet do whatever you need it to? That's probably a lot of people's jobs. AI can make those people more efficient (as long as a human checks it later), so companies can fire most of the team. That's a lot more people off the payroll.
And there are companies working on general AI. That will replace... so many jobs.
> And I emphasized "current form" because the current AI is basically just predictive text.
This is a program I use daily at work. It costs me like $250/year on my budget, literally less than one hotel stay for a work trip. I spent more on food last trip than this will cost my company.
It's a big step away from "predictive text." This is the AI revolution in action. There are dozens of products you don't know about shaking up professions you barely ever think about.
I don't have to build a Content Gen team because of this software, probably ever.
My buddy, meanwhile, is on a team building an "AI" for a major property insurance company to help them sift data. Small changes, incrementally, permeating through the system. That's strong adoption and worth investment.
Doesn't generative AI need a whole other layer of technology to become reliable?
The AI needs to control some domain-specific model (like a poseable skeleton rig for pictures of humans) that enforces the rules for how each modelled concept can actually behave, instead of trying to guess the output directly.
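Roughly like the sketch below: the generator only proposes parameters for a rigged model, and the rig itself enforces what's physically possible before anything gets rendered. Every name here is made up for illustration; it's not any particular product's API, just the shape of the idea.

```python
# Hypothetical sketch: instead of guessing pixels directly, the generator
# proposes joint angles for a skeleton rig, and the rig enforces the rules
# (joint limits) before anything gets handed to a renderer.
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    min_deg: float  # anatomical lower limit
    max_deg: float  # anatomical upper limit

# The domain-specific model: a tiny "skeleton" with hard constraints.
SKELETON = [
    Joint("elbow", 0.0, 150.0),
    Joint("knee", 0.0, 140.0),
    Joint("neck", -60.0, 60.0),
]

def enforce_rules(proposed):
    """Clamp whatever the generator proposed into each joint's legal range."""
    pose = {}
    for joint in SKELETON:
        angle = proposed.get(joint.name, 0.0)
        pose[joint.name] = max(joint.min_deg, min(joint.max_deg, angle))
    return pose

# Pretend this came from the generative model: a pose that bends the
# elbow backwards, which a raw image model might happily paint.
proposed_pose = {"elbow": -45.0, "knee": 90.0, "neck": 10.0}

valid_pose = enforce_rules(proposed_pose)
print(valid_pose)  # {'elbow': 0.0, 'knee': 90.0, 'neck': 10.0}
# Only the constrained pose goes to the renderer, so the "elbow bent the
# wrong way" class of output simply can't come out the other end.
```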