There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.
You could have said the same for factories in the 18th century. But instead of the reactionary sentiment to just reject the new, we should be pushing for ways to have it work for everyone.
I don't see how rejecting 18th century-style factories or exploitative neural networks is a bad thing. We should have the option of saying "no" to the ideas of capitalists looking for a quick buck. There was an insightful blog post that I can't find right now...
Let's not forget all the exploitation that happened in that period too. People, even children, worked endless hours for almost no pay, lost limbs to machinery, and were simply discarded for it. Just as there is a history of technology, there is a history of it being used inequitably and even sociopathically, through greed with no consideration for human well-being. It took a lot of fighting, often literally, to get to the point where we have some dignity, and even that is being eroded.
I get your point: it's not the tech, it's the system. And while I've lost all excitement for AI, I don't think that genie can be put back in the bottle. But if the whole system isn't changing, we should at least regulate the tech.
But AI will eliminate so many jobs that it will affect a lot of people, and strain the whole system even more. There isn't a "just become a programmer" solution to AI, because even intellectually-oriented jobs are now on the line for elimination. This won't create more jobs than it takes away.
Which shows why people are so fearful of this tech. Freeing people from manual labor to go to intellectual work was overall good, though in retrospect even then it came at a cost of passionate artisans. But now people might be "freed" from being artists to having to become sweatshop workers, who can't outperform machines so their only option is to undercut them. Who is being helped by this?
Yes, I know about the exploitation that happened during early industrialization, and it was horrible. But if people had just rejected and banned factories back then, we'd still be living in feudalism.
I know that I don't want to work a job that can be easily automated, but intentionally isn't just so I can "have a purpose".
What would happen if AI were to automate all jobs? In the most extreme case, where literally everyone lost their job, nobody would be able to buy stuff, but also, no company would be able to sell products and make a profit. Then either capitalism would collapse - or, more likely, it would adapt by implementing some mechanism such as UBI. Of course, the real effect of AI will not be quite that extreme, but it may well destabilize things.
That said, if you want to change the system, it's exactly in periods of instability that it can be done. So I'm not going to try to stop progress and cling to the status quo out of fear of what those changes might be - instead I'll join a movement that tries to shape them.
we should at least regulate the tech.
Maybe. But generally on Lemmy I see sooo many articles about "Oh, no, AI bad", and no good suggestions for what regulations we should actually want.
If the technology actually existed to replace human workers, the human workers could chip in and buy the means of production and replace the company owners as well.
Top quality luddite opinions right here. Plenty of fear and opprobrium being directed against the technology, while taking the kleptocratic capitalism and kakistocracy as a given that can't be challenged.
Yes, it is incompatible with the status quo. That's a good thing. The status quo is unsustainable. The status quo is on course to kill us all.
The only real danger AI brings is it will let our current corrupt leaders and corrupt institutions be more efficient in their corruption. The problem there is not the AI; it's the corruption.
Extra spicy take: The Luddites were right. They were really always about opposing unethical use of technology, people who use their name as an insult were always all about "progress over people", and you should never feel bad for being called a Luddite.
There are definitely people who are harmed by FUD like this. For example the current writers strike, which has 11,000 people putting down tools... indefinitely shutting down global movie productions that employ millions of people and leaving them unemployed for who knows how long.
These are easily avoidable problems. There are always reputable authors on a topic, and why would a self-published foraging book by some random person be better than an AI one? You buy books written by experts, especially when it's about life or death.
taking the kleptocratic capitalism and kakistocracy as a given that can't be challenged.
It's literally baked into the models themselves. AI will reinforce kleptocratic capitalism and kakistocracy as you so aptly put it because the very data it's trained on is a slice of the society it resembles. People on the internet share bad, racist opinions and the bots trained on this data do the same. When AI models are put in charge of systems because it's cheaper than putting humans in place, the systems themselves become entrenched in status-quo. The problem isn't so much the technology itself, but how the technology is being rolled out, driven by capitalistic incentives, and the consequences that brings.
There is a name for this debating technique where you go "sure, there was nothing good about Hitler - except he cared about dogs!". Can't remember. Is it a strawman?
I think we all understand that capitalism is mostly bad for humans, and really good for corporations and their owners. AI and robots will be exploited to replace people since they are massively more powerful and much cheaper.
A few things will be better I guess, but most will be worse. People already are not actually needed to work this much anymore, and as soon as they can be replaced with something cheaper and more efficient they will. That is capitalism.
A strawman argument is where you ignore what was said by the other person and instead respond with something distorted. That's not what I did - the core premise of Drew's argument is that AI will not "make the world better" and I provided a crystal clear example of how it makes the world better.
It was just one example, and obviously not the complete picture, but what choice do I have? It's such a broad topic I couldn't possibly list everything AI will impact without writing an entire book.
I think we all understand that capitalism is mostly bad for humans, and really good for corporations and their owners.
No, I disagree. Corporations exist exclusively to benefit their human owners. Which means anything that's "good for corporations" is good for a select small number of humans.
Don't blame "capitalism" for wealth inequality. Blame the actual humans (e.g. Donald Trump, Elon Musk) who have made it their life's work to drive the global economy even harder into a world that benefits the few and ignores the struggles of the many.
Also - not all corporations are bad. Some of them do great work that truly benefits the world and I would personally put OpenAI in that category. Their mandate is not to make a profit - and in fact the amount of profit they can legally make has been limited. Their mission is literally "to ensure that artificial general intelligence benefits all of humanity". I hope they succeed, and I think they will. Drew is wrong.
I also have no idea who he is and I also missed the point. It's just another "AI bad" article, even if the message this time is "AI bad, but not as bad as you think."
More balanced articles are not necessarily better, though. I'd rather read two conflicting opinions that are well thought out than a mild compromise with an unknown bias.
which previously failed since ads and SoC were the driver of the Web, not information.
Can you elaborate on why you think the ads wouldn't sneak in again? The semantic web is a fantastic concept, but I don't immediately see the AI connection. AI doesn't magically pay for authored content and there is still an incentive to somehow get ads into LLM answers.
I have no desire to enter an AI arms race. I spent too much time as it is already tinkering with my privacy stuff to deal with Google and other malicious actors. Why the hell would I want to have to maintain yet another front in that obnoxious, daily battle? That is not a reason to have AI, it’s just selling the cure to a problem we are creating ourselves.
Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
Unlikely to replace the "most" competent humans, but probably the lower 80% (Pareto principle), where "crappy" is "good enough".
What's really troubling is that it will happen all across the board; I've yet to find a single field where most tasks couldn't be replaced by an AI. I used to think 3D design would take the longest, but no, there are already 3D design AIs.
You just said the same thing the comment responding to did, though. He pointed out that AI can replace the lower 80%, and you said the AI can write some code but that it might have trouble doing the expert work of proving the code meets the safety criteria. That's where the 20% comes in.
Also, it becomes easier to recognize the possibility for AI contribution when you widen your view to consider all the work required for critical application development beyond just the particular task of writing code. The company surrounding that task has a lot of non-coding work that gets done that is also amenable to AI replacement.
Fashion designers are being replaced by AI.
Investment capitalists are starting to argue that C-Suite company officers are costing companies too much money.
Our Ouroboros economy hungers.
C-Suites can get replaced by AIs... controlled by a crypto DAO replacing the board. And now that we're at it, replace all workers by AIs, and investors by AI trading bots.
Why have any humans at all, when you can put in some initial capital and have the bot invest in a DAO that controls a fully-AI company? Bonus points if all the clients are also AIs.
Unfortunately, everything AI does is kind of shitty. Sure, you might have a query for which the chosen AI works well, but you might just as well not.
If you accept that it sometimes just doesn't work at all, then sure, AI is your revolution. Unfortunately, there are not many use cases where that is actually helpful.