Lemmy be like
I mean, it is objectively bad for life. Throwing away millions to billions of gallons of water all so you can get some dubious coding advice.
Wouldn't the opposite of artificial intelligence be natural stupidity?
"Natural" is a superfluous word that could mean multiple things; "genuine" means something specific and vetted.
i.e., "natural flavors"
Much love
Veri smol
Whether intentional or not, this is gaslighting. "Here's the trendy reaction those wacky lemmings are currently upvoting!"
Getting to the core issue, of course we're sick of AI, and have a negative opinion of it! It's being forced into every product, whether it makes sense or not. It's literally taking developer jobs, then doing worse. It's burning fossil fuels and VC money and then hallucinating nonsense, but still it's being jammed down our throats when the vast majority of us see no use-case or benefit from it. But feel free to roll your eyes at those acknowledging the truth...
it's literally making its users nuts, or exacerbating their existing mental illness. not hyperbole, according to psychologists. and this isn't conjecture:
https://futurism.com/openai-investor-chatgpt-mental-health
https://futurism.com/chatgpt-psychosis-antichrist-aliens
What is the gaslighting here? A trend, or the act of pointing out a trend, do not seem like gaslighting to me. At most it seems like bandwagon propaganda or the satire thereof.
For the second paragraph, I agree we (Lemmings) are all pretty against it and we can be echo-chambery about it. You know, like Linux!
But I would also DISagree that we (population of earth) are all against it.
It seems like the most immature and toxic thing to me to invoke terms like "gaslighting," ironically "toxic," and all the other terms you associate with these folks, defensively and for any reason, whether it aligns with what the word actually means or not. Like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person who uses it a moral saint and vulnerable victim, while indirectly muting all those who have genuine uses for the terms. Or I'm just exaggerating and going mad, and it's just the typical over- and misuse of words.
Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.
EDIT: I just reminded myself of when a teacher went ballistic at class for misusing the term "antisocial," saying we're eroding and polluting all genuine and very serious uses of the term. Hm, yeah it's probably just that same old thing. Not wrong for going ballistic over it, though.
Are you honestly claiming a shitpost is gaslighting?
What a world we live in.
It's just a joke bro.
The currently hot LLM technology is very interesting, and I believe it has legitimate use cases if we develop them into tools that assist work. (For example, I'm very intrigued by the stuff that's happening in the accessibility field.)
I mostly have a problem with the AI business. Ludicrous use cases (shoving AI into places where it has no business being). Sheer arrogance about the sociopolitics in general. Environmental impact. LLMs aren't good enough for "real" work, but snake oil salesmen keep saying they can do it, and uncritical people keep falling for it.
And of course, the social impact was just not what we were ready for. "Move fast and break things" may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.
I believe the AI business and the tech hype cycle is ultimately harming the field. Usually, AI technologies just got gradually developed and integrated into software where they served a purpose. Now the field is marred with controversy for decades to come.
if we develop them into tools that assist work
Spoilers: We will not
I believe the AI business and the tech hype cycle is ultimately harming the field.
I think this is just an American way of doing business. And it's awful, but at the end of the day people will adopt technology if it makes them greater profit (or at least screws over the correct group of people).
But where the Americanized AI seems to suffer most is in their marketing fully eclipsing their R&D. People seem to have forgotten how DeepSeek spiked the football on OpenAI less than a year ago by making some marginal optimizations to their algorithm.
The field isn't suffering from the hype cycle nearly so much as it suffers from malinvestment. Huge efforts to make the platform marketable. Huge efforts to shoehorn clumsy chat bots into every nook and cranny of the OS interface. Vanishingly little effort to optimize material consumption or effectively process data or to segregate AI content from the human data it needs to improve.
Spoilers: We will not
Generative inpainting/fill is enormously helpful in media production.
The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don't treat it like it's some sort of irrational bandwagon.
For those who know
I need to watch that video. I saw the first post but haven’t caught up yet.
it's just slacktivism no different than all the other facebook profile picture campaigns.
Do you really need a list of why people are sick of LLMs and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
Are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
Do you really need a list of why people are sick of LLMs and AI slop?
We don't need a collection of random 'AI bad' articles because your entire premise is flawed.
In general, people are not 'sick of LLM and Ai slop'. Real people, who are not chronically online, have fairly positive views of AI and public sentiment about AI is actually becoming more positive over time.
Here is Stanford's report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
My dude, it sounds like you need to go out into the environment a bit more.
We don’t need a collection of random ‘AI bad’ articles because your entire premise is flawed.
god forbid you have evidence to support your premise. huh.
My dude, it sounds like you need to go out into the environment a bit more.
oh you have a spare ecosystem in the closet for when this one is entirely fucked huh? https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution
Stop acting like it's a rumor. The problem is real, it's already here, and they're already racing to build the data centers. So what, we can get Taylor Swift Grok porn? Nothing in that graph supports your premise either.
That Stanford graph is based on queries from 2022 and 2023; it's 2025 here in reality. Wake up. Times change.
Gish gallop
AI is literally making people dumber:
And books destroyed everyone's memory. People used to have fantastic memories.
They are a massive privacy risk:
No different than the rest of cloud tech. Run your AI locally, like the rest of your self-hosted services.
Are being used to push fascist ideologies into every aspect of the internet:
Hitler used radio to push fascism into every home. It's not the medium, it's the message.
And they are a massive environmental disaster:
AI uses a GPU just like gaming uses a GPU. Building a new AI model uses the same energy that Rockstar spent developing GTA5. But it's easier to point at a centralized data center polluting the environment than thousands of game developers spread across multiple offices creating even more pollution.
Stop being a corporate apologist
Run your own AI! Complaining about "corporate AI" is like complaining about corporate email. Host it yourself.
Run your own AI!
Oh sure, let me just pull a couple billion out of the couch cushions to spin up a data center in the middle of the desert.
You’re repeating debunked claims that are being pushed by tech giants to lobby for laws to monopolize AI control.
I’d rather read AI crap than this idiocy.
Are being used to push fascist ideologies into every aspect of the internet:
Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue on controversial subjects, which prevents you from learning anything. Censorship sucks
They are a massive privacy risk:
I do agree on this, but at this point everyone uses instagram, snapchat, discord and whatever to share their DMs which are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.
AI is literally making people dumber: https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.
I would not say "can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving" equals "literally making people dumber". A sample size of 319 isn't really representative anyway, and they mainly sampled one specific type of person. People switch from searching to verifying, which doesn't sound too bad if done correctly. They associate critical thinking with verifying everything ("Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort"); I'm not sure I agree with that.
This study is also aimed only at people working, not regular use. I personally discovered so many things with GenAI, and I know to always question what the model says when it comes to specific topics or questions, because they tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They had to write an essay in 20 minutes... obviously most people would just generate the whole thing and fix little problems here and there, but if you have to think less because you're just fixing stuff instead of inventing it... well yeah, you use your brain less. Doesn't make you dumb, though. It's a bit like saying paying by card makes you dumber than paying in cash because you use less of your brain: you don't have to count how much to hand over and how much you should get back.
Yes, if you get help from a tool or a person, it will be less intensive for your brain. Who would have thought?!
Not clicking on a substack link. Fucking Nazi promoting shit website
Do you really need a list of why people are sick of LLMs and AI slop?
With the number of times that refrain is regurgitated here ad nauseam, "need" is an odd way to put it. "Sick of it" might fit sentiments better. "Done with this and not giving a shit" is another.
Evil must be fought as long as it exists.
Lol I didn't know that the anarchists over at lemmy.dbzer0.com are being corporate apologists. /sarcasm
Weird ... It looks like there's nothing stopping me from signing up for an account on dbzer0 even though I'm not actually an anarchist.
In my opinion, this should be called not a technology but a weapon for killing.
If you ever take a flight for holiday, or even drive long distance and cry about AI being bad for the environment then you're a hypocrite.
Same goes if you eat beef or have a really powerful gaming rig that you use a lot.
There are plenty of valid reasons AI is bad, but the argument for the environment seems weak, and most people using it are probably hypocrites. It's barely a drop in the bucket compared to other things
Texas has just asked residents to take fewer showers while data centers built specifically for LLM training continue operating.
This is more like feeling bad for not using a paper straw while the local factory dumps its used oil into the community river.
Ahh, so are you going to acknowledge the privacy invasion and brain rot caused by AI, or are you just going to focus on dismissing the environmental concerns? Because I linked more than just the environmental impacts.
This echo chamber isn't ready for this logical discussion yet unfortunately lol
You're getting downvoted for speaking the truth to an echo chamber my guy.
Hypocrisy is just the primitive nature of man, who chooses whatever is easier because he is designed that way. Humanity is like a cancerous tumor on the planet.
The problem isn't AI. The problem is Capitalism.
The problem is always Capitalism.
AI, Climate Change, rising fascism, all our problems are because of capitalism.
Wrong.
The problem is humans; the same things that happen under capitalism can (and would) happen under any other system, because humans are the ones who make these things happen or allow them to happen.
Problems would exist in any system, but not the same problems. Each system has its own set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance... these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.
While you aren't wrong about human nature, I'd say you're wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Those account for human nature and aim to turn it against itself.
Can, would... and did. The list of environmental disasters in the Soviet Union is long and intense.
Rather, our problem is that we live in a world where the strongest survive, and the strongest are not the smartest... So alas, we will always be in complete shit until we disappear.
The fittest survive. The problem is creating systems where the best fit are people who lack empathy and a moral code.
A better solution would be selecting world leaders from the population at random.
That's a pathetic, defeatist world view. Yeah, we're victims of our circumstances, but we can make the world a better place than what we were raised in.
Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.
The problem is that it's cheap, so that anyone can make whatever they want and most people make low quality slop, hence why it's not "good" in your eyes.
Making a cheap or efficient AI doesn't help the end user in any way.
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn't great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that human-created software.
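To ground that a bit, here is a minimal sketch of what that kind of freshman exercise looks like today: fine-tuning a pretrained ResNet-18 on CIFAR-10 with free tooling. PyTorch and torchvision are assumed installed, and the model, dataset, and hyperparameters are illustrative choices, not a claim about any particular benchmark:

```python
# Minimal sketch: fine-tune a pretrained ResNet-18 on CIFAR-10.
# Assumes PyTorch + a recent torchvision; all hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# CIFAR-10 images are 32x32; resize them so they fit the ImageNet-pretrained backbone.
tf = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=tf)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)

# Pretrained backbone with a new 10-class classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few epochs is enough for a usable classifier
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

That's roughly the whole exercise: transfer learning does most of the work, which is exactly why the barrier to entry has dropped so far.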
AI isn't LLMs and image generators, those may as well be toys. I'm sure eventually LLMs and image generation will be good, but the only reason it seems amazing is because it is a novel capability that computers have not had before. But the actual impact on the real world will be minimal outside of specific fields.
I don't know if the current AI phase is a bubble, but i agree with you that if it were a bubble and burst, it wouldn't somehow stop or end AI, but cause a new wave of innovation instead.
I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.
Wow i'm sure this comment section is full of respectful and constructive discussion /s. Lemme go pop some popcorn.
It's true. We can have a nuanced view. I'm just so fucking sick of the paid-off media hyping this shit, and normies thinking it's the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.
Nuance is the thing.
Thinking AI is the devil, will kill your grandma and shit in your shoes is equally as dumb as thinking AI is the solution to any problem, will take over the world and become our overlord.
The truth is, like always, somewhere in between.
Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
Unfortunately, UBI is just one solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.
Yeah, that would be the solution, but it's never happening.
For labor people don't like doing, sure. I can't imagine replacing a friend of mine with a conversation machine that performs the same or better, though.
Distributed platform owned by no one founded by people who support individual control of data and content access
Majority of users are proponents of owning what one makes and supporting those who create art and entertainment
The AI industry shits on all of the above by harvesting private data and creative work without consent or compensation, along with being a money, energy, and attention tar pit
Buddy, do you know what you're here for?
EDIT: removed bot accusation, forgot to check user history
Or are you yet another bot lost in the shuffle?
Yes, good job, anybody with opinions you don't like is a bot.
This wasn't even a pro-AI post; it was just pointing out that even the most facile "AI bad, applause please" stuff will get massively upvoted
Yes, good job, anybody with opinions you don't like is a bot.
I fucking knew it!
Yeah, I guess that was a bit too far, posted before I checked the user history or really gave it time to sit in my head.
Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.
Maybe there's some truth to it then. Have you considered that possibility?
How dare people not like the automatic bullshit machine pushed down their throat...
Seriously, generative AI's accomplishments are:
Yes. AI can be used for spam, job cuts, and creepy surveillance, no argument there, but pretending it’s nothing more than a corporate scam machine is just lazy cynicism. This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors, giving deaf people real-time conversations through instant transcription, translating entire languages on the fly, mapping wildfire and flood zones so first responders know exactly where to go, accelerating scientific breakthroughs from climate modeling to space exploration, and cutting out the kind of tedious grunt work that wastes millions of human hours a day. The problem isn’t that AI exists, it’s that a lot of powerful people use it selfishly and irresponsibly. Blaming the tech instead of demanding better governance is like blaming the printing press for bad propaganda.
This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors
Not the same kind of AI. At all. Generative AI vendors love this motte-and-bailey.
Aren't those different types of AI?
I don't think anyone hating AI is referring to the code that makes enemies move, or sorts things into categories
We should ban computers since they are making mass surveillance easier. /s
We should allow lead in paint, it's easier to use /s
You are deliberately missing my point, which is: gen AI has an enormous amount of downside and no real-world use.
One could have said many of the same things about a lot of new technologies.
The Internet, Nuclear, Rockets, Airplanes etc.
Any new disruptive technology comes with drawbacks and can be used for evil.
But that doesn't mean it's all bad, or that it doesn't have its uses.
Give me one real world use that is worth the downside.
As a dev, I can already tell you it's not coding or anything around code. Projects get spammed with low-quality, nonsensical bug reports; AI-generated code rarely works and doesn't integrate well (on top of pushing all the work onto the reviewer, which is already the hardest part of coding); and AI-written documentation is riddled with errors and barely legible.
And even if AI were remotely good at something, it's still the equivalent of a microwave trying to replace the entire restaurant kitchen.
Of those, only the internet was turned loose on an unsuspecting public, and even then people had decades of the faucet being opened slowly to prepare.
Can you imagine if, after WW2, Wernher von Braun came to the USA and then just, like... gave every man, woman, and child a rocket, with no training? Good and evil wouldn't even come into it; it'd be chaos and destruction.
Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.
It's not a matter of good and evil, it's a matter of harm.
It's absolutely brain-dead to compare the probability engine called "AI", which has no fundamental use beyond its marketed value, with a wide variety of truly useful innovations that did not involve marketing in their design.
I personally think of AI as a tool, what matters is how you use it. I like to think of it like a hammer. You could use a hammer to build a house, or you could smash someone's skull in with it. But no one's putting the hammer in jail.
Yeah, except it's a tool that most people don't know how to use but everyone can use, leading to environmental harm, a rapid loss of media literacy, and a huge increase in wealth inequality due to turmoil in the job market.
So... It's not a good tool for the average layperson to be using.
Stop drinking the Kool-Aid, bro. Think about these statements critically for a second. Environmental harm? Sure. I hope you're a vegan as well.
Loss of media literacy: what does this even mean? People are doing things the easy way instead of the hard way? Yes, of course cutting corners is bad, but the problem is the conditions that lead a person to choose to cut corners, the demand for maximum efficiency at any cost, for top numbers. AI is making a problem evident, not causing it. If you're home on a Friday after your second shift of the day, fuck yeah you want to do things easy and fast. Literacy what? Just let me watch something funny.
Do you feel you've become more stupid? Do you think it's possible? Why would other people, who are just like you, be these puppets brainwashed by the evil machine?
Ask yourself: how are people measuring intelligence? Creativity? How many people were in these studies, and who funded them? If we had the measuring instrument needed to actually make categorizations like "people are losing intelligence," psychologists wouldn't still be arguing over the exact definition of intelligence.
Stop thinking of AI as a boogeyman inside people's heads. It is a machine. People use the machine to achieve mundane goals; that doesn't mean the machine created the goal or is responsible for everything wrong with humanity.
Huge increase in inequality? What? Brother, AI is a machine. It is the robber barons who are exploiting you and all of the working class to get obscenely rich. AI is the tool they're using. AI can't be held accountable. AI has no will. AI is a tool. It is people who are increasing inequality. It is the system held in place by these people that rewards exploitation and encourages you to look at the evil machine instead. And don't even use it; the less you know, the better. If you never engage with AI technology, you'll believe everything I say about how evil it is.
Seriously, the AI hate gets old fast. Like you said it's a tool, gey get over it people.
gey over it
👁️👄👁️🤖 🏳️🌈
A hammer doesn't consume exorbitant amounts of power and water.
What about self hosting? I can run a local GenAI on my gaming PC with relative ease. This isn't consuming mass amounts of power.
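For anyone curious what that looks like in practice, here is a minimal sketch of running a small text-generation model entirely on your own hardware using the Hugging Face transformers library. The library and the distilgpt2 model are just illustrative choices; larger local models obviously need more VRAM:

```python
# Minimal sketch of local text generation; the model choice is illustrative.
# Requires: pip install transformers torch
from transformers import pipeline

# The model weights are downloaded once, then everything runs on your own machine.
generator = pipeline("text-generation", model="distilgpt2")

out = generator(
    "Self-hosting a small language model means",
    max_new_tokens=40,
    do_sample=True,
)
print(out[0]["generated_text"])
```

The point isn't that a tiny model is good, just that inference for a small model is a gaming-PC-sized workload, nothing like the training clusters people picture.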
Do you think hammers grow out of the ground? Or that the magically spawn the building materials to work on?
Everything we do has a cost. We should definitely strive for efficiency and responsible use of resources. But to use this as an excuse, while you read this on a device made of metals mined by children, is pretty hypocritical.
No consumption is ethical under capitalism; take responsibility instead for what you do with that consumption.
Neither does an algorithm.
Extreme oversimplification. Hammers don't kill the planet by simply existing.
And neither does AI? The massive data centers are having negative impacts on local economies, resources and the environment.
Just like a massive hammer factory, mines for the metals, logging for handles and manufacturing for all the chemicals, paints and varnishes have a negative environmental impact.
Saying something kills the planet by simply existing is extreme hyperbole.
“Guns don’t kill people, people kill people”
Edit:
Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)
We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.
...
The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.
...
Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.
Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.
Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.
...
Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.
If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!
And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.
A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.
You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!
So if we're onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.
I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.
Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.
But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.
My skull-crushing hammer that is made to crush skulls and nothing else doesn't crush skulls, people crush skulls
In fact, if more people had skull-crushing hammers in their homes, i'm sure that would lead to a reduction in the number of skull-crushings, the only thing that can stop a bad guy with a skull-crushing hammer, is a good guy with a skull-crushing hammer
Guns don’t kill people. People with guns kill people.
Ftfy
We once played this game with friends where you get a word stuck to your forehead and you have to guess what you are.
One guy got C4 (as in the explosive) to guess, and he failed. I remember we had to agree among ourselves whether C4 is or is not a weapon. The main idea was that explosives are comparatively rarely used for actual killing, as opposed to other things like mining and such. A parallel question was: is a knife a weapon?
But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, as opposed to guns, which are only for killing or injuring.
Take guns away, and people will kill with literally anything else. But give easy access to guns, and people will kill with them. A gun is not a tool; it is a weapon by design.
Bad faith comparison.
The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That's literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.
AI is a tool, not a weapon. This is kind of melodramatic.
Yet gun control works.
Same idea.
I don't hate the concept as is, I hate how it is being marketed and shoved everywhere and into everything by sheer hype and the need for returns on the absurd amounts of money that were thrown at it.
Companies use it to justify layoffs, create cheap vibed up products, delegate responsibilities to an absolutely not sentient or intelligent computer program. Not even mentioning the colossal amount of natural and financial resources being thrown down this drain.
I read a great summary yesterday somewhere on here that essentially said "they took a type of computer model made to give answers to very specific questions it has been trained on, and then trained it on everything to make a generalist". Except that doesn't work, the broader the spectrum the model is covering the less accurate it will be.
Identifying skin cancer? Perfect tool for the job.
Giving drones the go ahead on an ambiguous target? Providing psychological care to people in distress? FUCK NO.
And here the question is: should we laugh out of despair or just cry?
You'd also get a ton of upvotes for saying "Trump bad", but you wouldn't be wrong. Shit is just shit.
It would still be a performative post though.
What we need is a circlejerk@ community
He's made the World wake up to the fact that they can't trust the US, so that can be seen as good?
AI isn't that black and white, just like any big technology it can be used for good or bad.
Just like Airplanes
Good lord stop comparing LLMs to airplanes in your replies. This is why you think "AI bad" is an unserious statement.
Reddit too
Yeah it is bad
It's actually a real problem
How dare you?
It is true though, AI bad
lemmycirclejerk ☺️
There are valid reasons for disliking AI (rather, how it’s being used) and I’ll upvote when a relevant, informed argument is made against it. Otherwise I’ll mentally filter out the low-effort comments that just say “fuck AI” with dozens of upvotes.
+1
I'd welcome actual AI. What is peddled everyday as "AI" is just marketing bullshit. There's no intelligence in it. Language shapes perception and we should take those words back and use them according to their original and inherent meaning. LLMs are not AI. Stable diffusion is not AI. Neural networks trained for a singular task are not AI.
Define "intelligence"
https://en.m.wikipedia.org/wiki/Intelligence
Take your pick of any definition there that isn't a recent one from computer scientists or mathematicians trying to call stuff intelligent that clearly isn't. According to some modern marketing takes, I developed AI 20 years ago (optimizing search problems for agentic systems); it's just that my peers and I weren't stupid enough to call the results intelligent.
Yes, any further questions?
Why is the other arrow also pointing up?
Because I used AI slop to create this shitpost lol. So naturally it would make mistakes.
There are other mistakes in the image too
Makes for a confusing cartoon. I browsed too many of the comments thinking everyone knew what 3251 means except me. I thought a route 3252 road sign fell on him.
You used AI to make a stickfigure comic? Damn.
Apple = bad is also an instant karma farm 😁
I prefer the fine vintage of a M$ = bad post, myself.
Or perhaps even a spicy little Ubuntu = bad post.
ai is pretty bad tho
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "grok claims this happened". Very funny how quick people switch up when it aligns with them.
That would seem hypocritical if you're completely blind to poetic irony, yes.
it doesnt seem hypocritical. It is
Wouldn't posting articles about AI making up bullshit support their claim that AI makes up bullshit?
You would be right if they weren't posting the article using grok as the source for the main claim.
The articles were "Grok claims it was suspended from X for accusing Israel of genocide"; that's fine. It is hypocritical when you post that article to every news, politics, and tech community. There were a few communities where people commented that Grok is full of shit, but way too many communities treated it as if it were solid evidence.
I’m sure AI would be great if we actually had it. LLMs are not AI.
They factually are. ML is AI. I think you mean AGI maybe?
GTFO with your nuanced and engaged understanding. This is Lemmy.
AI > ML > DL > GenAI.
AI is the generic umbrella term, with Machine Learning, Deep Learning, and Generative AI as progressively narrower subsets.
The hate against AI is hilariously misinformed.
Could be, I get confused by the alphabet soup of acronyms. I mean these glorified predictive text machines that for some reason marketers are trying to push as having some sort of ability to "think".
Not all that glitters is gold. 🤷
An AI could be demonstrably 30 times more accurate than a human at diagnosing cancer on a scan, and Lemmy would still shit on it because it's AI :D
On Reddit I knew that the subject of gun control was not allowed to be talked about. Now I've embraced Lemmy, and I can't talk about AI no matter what. It's just a taboo subject. Apparently some people want to reject the tech entirely and think it will somehow just magically stay out of their lives. A very naive dream.
So yeah, Lemmy. Refuse the conversation, look away, I'm sure it will be fine.
Legitimately useful applications, like in the medical field, are actually brought up as examples of the "right kind" of use case for this technology.
Over and over again.
It's kind of annoying, because both the haters of commercial LLMs in All The Things and defenders of the same will bring up these exact same use cases as examples of good AI use.
May I ask for a link? I never saw that in the communities I consult. Never. Or at least not above 5 downvotes.
That's because platforms like Lemmy and Reddit rely on the bandwagon effect. The upvote/downvote system is inherently flawed because there is no accountability as to why one votes the way they do.
In this particular case, people are just ignorant as to how these new technologies function; for example, they continue to call them AI when they're not AI, they're LLMs... They have no clue how the technology functions or how it should function, and simply go by whatever they read on their feed, which on Lemmy, as you know, is nothing good.
In this particular case, people are just ignorant as to how these new technologies function; for example, they continue to call them AI when they're not AI, they're LLMs
You're my people 👏
An AI could be demonstrably 30 times more accurate than a human at diagnosing cancer on a scan, and Lemmy would still shit on it because it's AI
I think this is an exaggeration.
Think about your argument for a minute.
I know you think this will harm you and everyone you know, but it'll be much better if you just stay quiet instead of vocally opposing it
When has that ever been good advice?
So everything related to AI is negative ?
If so do you understand why we can't have any conversation on the subject ?
Where are you seeing more than 200 upvotes on any post?
Does this count? https://sopuli.xyz/post/1138547
ah, mid 2023, the honeymoon times
The front page?
Literally this post lol
Doh. I'm always sorted to new, so things don't have this many votes. I should revisit every so often
Lots of pro AI astroturfing and whataboutisms in these comments... 🤢
Only because most of the people here don't have the faintest idea what AI is or how it works.
"Anyone I disagree with must be a bot or government agent"
It's a response to the overwhelming FOMO "ngmi" shilling on every other platform.
AI BAD, TRUMP BAD, CLIMATE CHANGE BAD, GENOCIDE BAD, CAPITALISM BAD, LIBERALS BAD, FASCISTS BAD, NAZIS BAD, INCELS BAD, CEOs BAD, LANDLORDS BAD, VIOLENCE BAD, CHILD SEX TRAFFICKING BAD.
Now where are my upvotes?
You forgot CEOs and landlords.
And the call for violence, because it's fine as long as it's your team* doing it.
fixed 😅
Yes
AI is bad and people who use it should feel bad.
When people say this they are usually talking about a very specific sort of generative LLM using unsupervised learning.
AI is a very broad field with great potential, the improvements in cancer screening alone could save millions of lives over the coming decades. At the core it's just math, and the equations have been in use for almost as long as we've had computers. It's no more good or bad than calculus or trigonometry.
No hope commenting like this; just get ready to be downvoted for no reason. People use the wrong terms and normalize it.
So cancer cell detection is now bad and those doing it should feel bad?
The world isn't black'n white.
Me to burn victims: "You know, without fire, we couldn't grill meat. Right? You should think more about what you say."
Don't be obtuse, you walnut. I'm obviously not equating medical technology with 12-fingered anime girls and plagiarism.
You mean a subset of LLMs that are trained on bad human behaviours
Would love an explanation on how I'm in the wrong on reducing my work week from 40 hours to 15 using AI.
Existing in a predatory capitalist system and putting the blame on those who utilize available tools to reduce its predatory nature is insane.
Because when your employer catches on, they'll bring you back up to 40 anyway.
And probably because those 15 hours now produce shit quality.
So is eating meat, flying, gaming, going on holiday. Basically, if you exist, you should feel bad.
How does one feel bar?
I'm crazy about your username. Me too.
Thanks!! You're the first one to notice.
If you don't already know about !southafrica@piefed.social, go have a look around there. I'm trying to build a bit of a community there.
I'll check it out, thanks for the suggestion! I emigrated from South Africa when I was 4, so I can speak it fairly well, but my reading and writing comprehension is nonexistent, so I use Google Translate to help lol.
I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."
One of these topics is about class consciousness, those other is about human rights.
An AI is not a person.
Someone with they/them pronouns is a person.
They have no business being compared to one another!
It's a comparison of people, not of subjects. In becoming blind with rage upon seeing the letters A and I you act the same as a conservative person seeing the word "pronouns."
Calling AI not a person is going to be a slur in the future, you insensitive meatbag
I'm a lot more sick of the word 'slop' than I am of AI. Please, when you criticize AI, form an original thought next time.
Yes! Will people stop with their sloppy criticisms?
It feels like the author of the post thinks superficially and doesn’t delve into the essence. Although, considering that killing children can also be funny, I don't even know what to laugh at and what to cry about. It's hard to understand someone else's humor sometimes.
Back in my day, PAC-Man ghosts stayed perfectly still, exhibiting no behaviour at all and that’s how we Lemmy people liked it!
Pfft, as if a post on lemmy would ever get more than 1K upvotes.
WTF is "AI"? You mean LLM?
Edit: lol, lmao even
Yeah. I hate the naming of it too. It's not AI in the sense that science fiction meant it. History repeats itself in the name of marketing. I'm still very annoyed at marketers destroying the term "hover board".
AI includes a lot of things
The way ghosts in pacman chase you is AI
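For the record, that ghost behaviour really is just a few lines of rules. Here is a minimal sketch of the classic "chase" idea; the grid, coordinates, and names are made up for illustration:

```python
# Minimal sketch of Pac-Man-style chase "AI": greedily move toward the target.
# The grid, coordinates, and names are illustrative, not any game's actual code.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def chase_step(ghost, player, walls):
    """Return the ghost's next position: the legal neighbour closest to the player."""
    x, y = ghost
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    legal = [p for p in neighbours if p not in walls]
    if not legal:
        return ghost  # boxed in: stay put
    return min(legal, key=lambda p: manhattan(p, player))

# Example: ghost at (0, 0) chasing a player at (3, 2) around a single wall tile.
print(chase_step((0, 0), (3, 2), walls={(1, 0)}))  # -> (0, 1)
```

Calling both that heuristic and an LLM "AI" is perfectly standard usage; they just sit at very different ends of the same umbrella.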
Either Al Capone or Weird Al.
Reminder that Lemmy is typically older and older people are usually more conservative about things. Sure, politcially, Lemmy leans left, but technologically, Lemmy is very conservative.
Like for example, you see people on Lemmy say they'll switch to a dumbphone, but that's probably even more insecure, and they could've just used Lineage OS or something and it would be far more private.
Why does being progressive and into tech mean being into AI all of a sudden? It has never meant that; it's the conservative mfs pushing AI for a reason. You think any sort of powerful AI is about to be open source and usable by the ppl? Not expensive af to run, with hella regulations behind who can use it?
I'm progressive and into tech; I don't like fking generative AI though, it's the worst part of tech to me. AI can be great in the medical field, it can be great as a supplementary tool, but mfs don't use it that way. They just wanna sit on their asses and get rich off other ppl's work.
You think any sort of powerful AI is about to be open source and usable by the ppl?
There's a huge open source community running local models!
True. Now shut up and take my upvote! No need for arguments; all has already been said.
"B-But you don't understand, AI DESTROYS le epic self employed artists and their bottom line! Art is a sacred thing that we all do for fun and not for work, therefore AI that automates the process is EVIL!"
Actual thought process of some people
AI does do this to a subsection. Claiming that everyone is overreacting is just as stupid and lacks the same amount of nuance as claiming AI is going to ruin all self employed artists.
Also, this ignores AI companies blatantly stealing copyrighted material to feed their AI. As an artist, I'd rather not have some randoms steal my stuff so some mid-tier corporation can generate their logos and advertisements without paying for it.
I'm not claiming that everyone is overreacting, but pointing out how stupid a lot of anti-AI arguments are. Artists drawing art for a living get painted not as having a job but as doing some sort of fun recreational activity, ignoring that artists have to do commissions or draw whatever's popular with the fan base that pays their bills via Patreon, which in other words is the process of commodifying oneself, a.k.a. work.
Also, this ignores AI companies blatantly stealing copyrighted material to feed their AI.
Not saying that you're necessarily one of those people, but this argument often pops up in leftist spaces that previously were anti-IP, which is a bit hypocritical. One moment people are against intellectual property, calling it abusable, restrictive, etc., but once small artists start getting attacked, suddenly the concept has to be defended.
As an artist
womp womp you'll have to get a real job now /s
Um, have you tried practicing? Just draw a stick figure or hire an artist, this will easily solve all of your problems. You're welcome.
synthophobes are easily manipulated
You could have taken a screenshot from Spielberg's A.I. Artificial Intelligence.
It's funny how much that movie got right. I don't think it was meant to be predictive. Many Lemmy users will probably think it is the greatest comedy ever made.
I didn't like that movie back then, I thought it was too on the nose and weird.
But wow, this has aged like fine wine, that clip was amazing
When are we going to have actual violence against androids
AI is exactly as bad as mechanised weaving looms.
You mean it’s going to outsource the labour to children in third world countries?
More or less
I'm pro-AI (with huuuuge caveats), but I disagree with this... AI reduces certain jobs in a similar way, but it also enables large-scale manipulation and fucks with our thought processes at scale.
I'd say it's like if a mechanised weaving loom also invented the concept of disinformation and propaganda.
...but also, the mechanised weaving loom affected a single industry; modern ML has the potential to affect the large majority of people. It's on a different scale than the disruption of the textile industry.
Agree it's on a different scale (everything is relative to 200 years ago).
One of the main "benefits" of mechanised factory machinery in the early 1800s was that it shifted the demand side of labour such that capitalists had far more control over it. I reckon that counts as a kind of large-scale manipulation (but yeah, probably not as pervasive across other domains of life).
But like... Good.
No, but see, we need machines to do all the art for us, and averaging machines to tell us what is and isn't true!
Remember when the internet was treated like AI when it first dropped? People up in arms about the internet's influence on young people and kids.
This all seems really familiar.
Honestly low-key technophobia has always been a major sentiment in otherwise tech-focused parts of the internet and it has always fascinated me.
The new thing about this new technology is that it’s taking our jobs away without creating any new ones. I fear the day some stupid higher up decides that a chatbot can do a better job than me.
I mean they were right about the internet. The corporations turned most of it into a cluster of competing megamalls that make people anxious while selling them things.
Never mind that just opening port 443 anywhere in the world will expose you to roving bands of OC scrapers, which feed the next evolution of corporate power.
It's not like a bunch of basement nerds made LLMs and now corps are trying to muscle in (like with the internet). This new technology is theirs, from its R&D to now its aggressive rollout across all sectors of the Internet they control.
I don't have much hope for the commercial AI or LLM services that are available to most consumers. I think they're going to enshittify so bad that we're going to need a new word for it.
Arguably, they may have been right, given how the last decade or so has gone.
Nobody is going to libraries anymore, The internet is killing books and jobs 🤬
What kind of selfish, emotionless psychopath do you have to be to legitimately think that libraries being unused, forgotten, and closed is a good thing?
You ever thought about this: maybe if you visited your library in person more often, you'd actually have more friends.
Peak misunderstanding between AI and LLM
The LLM shills have made "AI" refer exclusively to LLMs. Honestly the best ad campaign ever.
The LLM shills have made "AI" refer exclusively to LLMs.
Yes, I agree, and it's unacceptable to me. Now most people here are also falling into the same hole. I'm here not to promote, support, or stand with LLMs or Gen-AI; I want to correct what is wrong. You can hate something, but please be objective and rational.
Why the hell are you being downvoted? You are completely right.
People will look back at this and "hover boards" and will think "are they stupid!?"
Mislabeling a product isn't great marketing, it's false advertisement.
AI is an umbrella term that holds many things. We have been referring to simple pathfinding algorithms in video games as AI for two decades; LLMs are AI.
IDK LMAO, that's what I really hate about Reddit/Lemmy: the voting system. People downvote but don't say where they think I'm wrong. I mean, at least argue; say your (supposedly harmless) opinion out loud. I even added a disclaimer that I don't promote LLMs and such. I don't really care either; I stand with correctness and do what I can to correct what is wrong. I totally agree with @sentient_loom@sh.itjust.works though.
Wait till power tripper db0 sees this, crying that the AI photos in all their comms are cringe.