On Exceptions
Source (Bluesky)
I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.
But also:
Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does, with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that serves your own comfort over actually trying to convince anyone of anything.
Consider someone who has had some small but valued use of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." And at that moment you've cut that person off from ever seriously considering your opinion again. Even if you're right, that's not healthy.
I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.
You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers: people who never gave a shit about IP law suddenly pretending they care about copyright, the whole water-use thing, which is closer to myth than fact, or discussions of energy usage in general.
Everyone can pick up on the vibes being off in the mainstream discourse around AI, but many can't properly articulate why, and they resolve that cognitive dissonance with made-up or comforting bullshit.
This makes me quite uncomfortable because that's the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can't or won't say explicitly isn't tech bros but immigrants and queer people.
(as a reverse dictionary, for example)
Thanks for putting a name on that! That's actually one of the few useful purposes I've found for LLMs. Sometimes you know or deduce that some thing, device, or technique must exist. The knowledge of it is out there, but you simply don't know the term to search for. IMO, this is one of the killer features of LLMs, and it works well because whatever the LLM outputs is simply and instantly verifiable. You describe the characteristics of something to the LLM and ask it what thing has those characteristics. Then, once you have a possible name, you look that name up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing.
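For illustration, a minimal sketch of that reverse-dictionary workflow (the model name, prompt, and use of the openai Python client are illustrative assumptions, not part of the original comment):

```python
# Reverse-dictionary lookup: describe a concept, get candidate terms,
# then verify each candidate in a reliable source yourself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

description = (
    "A single word meaning 'having the appearance of truth; "
    "probable or seeming to be real'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a reverse dictionary. Given a description, "
                    "reply with up to three candidate words, one per line."},
        {"role": "user", "content": description},
    ],
)

# Candidates are cheap to verify: look each one up in a real dictionary.
print(response.choices[0].message.content)  # e.g. "verisimilar"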
Using ChatGPT to recall the word 'verisimilar' is an absurd waste of time and energy, and in no way justifies the use of AI.
90% of LLM/GPT use is a waste or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.
I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.
This work has already saved thousands of people's lives.
But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now doing moral purity tests on it and dick measuring to see who has the loudest, most extreme hatred for AI.
Nobody has a problem with this, it's generative AI that's demonic
Generative AI uses the same technology. It learns when trained on a large data set.
Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted on below.
Corporate enshittification is what's demonic. When you say fuck AI, you should really mean "fuck Sam Altman"
All this is being stoked by OpenAI, Anthropic and such.
They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is ”dangerous.”
For what they’re really scared of is awareness of locally runnable, ethical, and independent task specific tools like yours. That doesn’t make them any money. Stirring up “fuck AI” does, because that’s a battle they know they can win.
And that AI has been trained on data that was stolen, taking away the livelihood of thousands more. Further, the environmental destruction has the capacity to harm millions more.
I'm not lost on the benefits; it can be used to better society. However, the lack of policy around it, especially the pandering to corporations by the American judicial system, is the crux here. For me, at least.
nobody is trashing Visual Machine Learning to assist in medical diagnostics
cool strawman though, i like his little hat
Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.
Frfr
My issues with gen AI are fundamentally twofold:
I cannot wait until the check finally comes due and the AI bubble pops, folding these digital snake oil sellers' house of cards.
When generative AI was first taking off, I saw it as something that could empower regular people to do things they otherwise could not afford to. The problem, as is always the case, is that capitalism immediately turned it into a tool of theft and abuse. The theft of training data, the power requirements, selling it for profit, competing against those whose creations were used for training without permission or attribution, the unreliability and untrustworthiness: so many ethical and technical problems.
I still don’t have a problem with using the corpus of all human knowledge for machine learning, in theory, but we’ve ended up heading in a horrible, dystopian direction that will have no good outcomes. As we hurtle toward corporate-controlled AGI with no ethical or regulatory guardrails, we are racing toward a scenario where we will be slavers or extinct, possibly both.
When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to.
Except, of course, you aren't doing anything. You are no more writing, making music, or producing art than an art director at an ad agency is. You're telling something else to make (really shitty) art on your behalf.
You really take no issue with how they were all trained?
*Not OP but still gonna reply. Not really? The notion that someone can own (and be entitled to control) a portion of culture is absurd. It's very frustrating to see so many people take issue with AI as "theft," as if intellectual property were something we should support and defend instead of being the actual tool for stealing artists' work ("property is theft" and all that). And obviously data centers are not built to be environmentally sustainable (not an expert, but I assume this could be done if they cared to do so). That said, using AI to do art so humans can work is the absolute peak of stupid fucking ideas.
Solving points 1 and 2 will also address many ethical problems people create with AI.
I believe that information should be accessible to all. My issue is not with them training on that data, but with their monopoly on the process. (In the very same vein as Sci-Hub making pay-walled whitepapers accessible, cutting out the profiteering publishers.)
It must be democratized and distributed, not centralized and monetized!
The way they were trained is the way they were trained.
I don't mean to say that the ethics don't matter, but you are talking as though this isn't already present tense.
The only way to go back is basically a global EMP.
What do you actually propose that is a realistic response?
This is an actual question. To this point, the only advice I've seen come from the anti-AI crowd is "don't use it, it's bad!" And that is simply not practical.
You all sound like the people who think we are actually able to get rid of guns entirely.
I sure am glad that we learned our lesson from the marketing campaigns in the 90's that pushed consumers to recycle their plastic single-use products to deflect attention away from the harm caused by their ubiquitous use in manufacturing.
Fuck those AI users for screwing over small creators and burning down the planet though. I see no problem with this framing.
Are people expected not to follow anyone they disagree with?
Reading other opinions? On my echo chamber platform of choice?! /s
Follow to expose yourself to different perspectives? Sure.
But it sounds like the users in question are following with the intent to reply “you’re wrong” to everything the OP puts out.
Which… I do, sadly, expect. But I wouldn’t wish for it.
Well deserved. The OOP is wrong, and it sounds like they know it and are just trolling.
Why would you follow someone you disagree with?
Edit: I'm convinced, guys. I should follow racist Nazi psychopaths, because even if I disagree, their words hold value.
I'm not saying that we should rage-follow but it's also unreasonable to believe it's possible to agree with every single opinion of another person let alone another community as a whole.
AI is whatever, but man, has social media been mind poison.
I say we burn it all down, honestly. Including this place.
Occasional disagreement isn’t a bad thing. Provided that the opinions expressed aren’t toxic or dangerous, what’s wrong with hearing an opinion that differs from your own? You don’t have to endorse it, share it, or even comment about it.
No two people are going to agree 100% on everything. Listening to those who disagree with you means having opportunities to learn something new, and to maybe even improve yourself based on new information.
keeps you informed, and it shows open-mindedness
I should follow racist, Nazi, psychopaths
False equivalency and strawman, nice
You follow them because you're interested in their posts and you generally agree on most things. If I follow someone and they start saying FF14 is a good game, I'm not going to unfollow just because I disagree.
Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
why would you follow someone you agree with?
if you want to learn, you search discord.
I would not want to get close to a bike repaired by someone who is using AI to do it. Like, what the fuck xd. I am not surprised he is unable to make code work then xddd
I'm just sick of all this because we gave "AI" too much meaning.
I don't like generative AI tools like LLMs, image generators, voice, video, etc., because I see no interest in them, I think they create bad habits, and they are not well understood by their users.
Yesterday, again, I had to correct my mother because she told me some fun fact she had learned from ChatGPT (which was wrong), and she refused to listen to me because "ChatGPT does plenty of research on the net, so it should know better than you."
As for the claim that "it will replace artists and destroy the art industry," I don't believe it (even though I've made the choice to never use it), because it will forever be a tool. It's practical if you want a cartoony monkey image for your article (you meanie stupid journalist), but you can't say "make me a piece of art" and then put it in a museum.
Making art myself, I hate gen AI slop from the depths of my heart, but I'm obliged to admit that much. (Let's not forget how it trains on copyrighted media, uses a shitton of energy, and gives no credit.)
AI in other fields, like medicine, automatic subtitles, or engineering, is fine by me. It won't create bad habits, it is well understood by its users, and it is truly beneficial, whether by being more efficient than humans at saving lives or simply by helping disabled people.
TL;DR: AI in general is a tool. Gen AI as a powerful tool for everyone's use is bad, just as giving everyone a helicopter would be bad (even if it improves mobility). AI is nonetheless a very nice tool that can save lives and help disabled people IF used and understood correctly and fairly.
AI in other fields, like medicine, automatic subtitles, or engineering, is fine by me. It won't create bad habits, it is well understood by its users, and it is truly beneficial, whether by being more efficient than humans at saving lives or simply by helping disabled people.
I think the generative AI tech bros have deliberately contributed to a lot of confusion by calling all machine learning algorithms "AI".
I mean, you have some software which both works and is socially beneficial, like translation and speech recognition software.
You have some software that works, and is incredibly dangerous because it works, like facial recognition and all the horrible ways authoritarian governments can exploit it.
And then you have some software that "works" to produce socially detrimental bullshit, like generative AI.
All three of these categories use machine learning algorithms, trained on data sets to recognize and produce patterns. But they aren't the same in any other meaningful sense. Calling them all "AI" does nothing but confuse the issue.
I spent an hour taking photographs on the drive home the other night (the wife was driving, and a storm gave us great clouds). I was mostly playing with angles and landscape, but it was fun. The kind of stuff it would take entire weeks to do thirty years ago, and I was done in an hour. I got a mediocre shot at best, but it was real, dammit.
this post is no man's land
Generative AI models and their outputs are derived products of their training data. I mean this ethically, not legally; I'm not a copyright lawyer.
Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It's equivalent to pirating a movie to watch at home.
But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you're failing to attribute the artists whose work was used, without their permission, to train the AI. If you generate code, that code infringes the numerous open source licenses of the training data by failing to attribute it.
Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.
What a weird distinction. So if I use a prompt to make a particular scene in a particular artist's distinct style: not stealing. But if I share that prompt (and maybe even some seed info) with a friend, is that stealing? If I take a picture of the generated content, stealing? If someone takes it off my laptop without my knowledge, are they stealing from me or the artist?
My viewpoint is that information wants to be free, and trying to restrict it is a losing battle (as shown by AI training). The concept of IP is tenuous at best, but I do recognize that artists need to eat in our capitalist reality. But once you make something and set it free to the world, you inherently lose some ownership of it. Getting mad at the tech itself for the economic injustice is silly; there are plenty more important things to worry about in our hellscape.
Copyright law is more or less always formulated as limits on the rights to redistribute content, not how it is used. Hence, it isn't a particularly strange position to take that one should be allowed to do whatever one wants with gen AI in the private confines of one's home, and it is only at the moment you start to redistribute content that we have to start asking the difficult questions: what is, and what is not, a derivative work of the training data? What ethical limitations, if any, should apply when we use an algorithm to effortlessly copy "a style" that another human has spent lots of effort to develop?
AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.
The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal use, not used by or available to the public.
I think the public domain would be fair game as well, and the fact that AI companies don't limit themselves to those works really gives away the game. An LLM that can write in the style of Shakespeare or Dickens is impressive, but people will pay for an LLM that will write their White Lotus fan fiction for them.
That’s what my workplace does since 1985!
Do y'all hate chess engines?
If yes, cool.
If no, I think you hate tech companies more than you hate AI specifically.
The post is pretty clearly about genAI; I think you're just choosing to ignore that part. There's plenty of really awesome machine learning technology that helps with disabilities, doesn't rip off artists, and isn't environmentally deleterious.
The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.
So is trying to bucket them based on copyright violation: there are very powerful, open-dataset, more or less reproducible LLMs, trained on a trivial amount of electricity, that you can run on your own PC right now.
Same with use cases. One can use embeddings models or tiny resnets to kill. People do, in fact, as with Palantir's generative-free recognition models. At the other extreme, LLMs can be totally task-focused and useless at anything else.
The distinction is corporate/enshittified vs not. Like Reddit vs Lemmy.
Yup, as always, none of these problems are inherent to AI itself, they're all problems with capitalism.
I lose to stockfish 2. Yes.
So I'll be honest. I use GPT to write Python scripts for my research. I'm not a coder and I don't want to be one, but I do need to model data sometimes and I find it incredibly useful that I can tell it something in English and it can write modeling scripts in Python. It's also a great way to learn some coding basics. So please tell me why this is bad and what I should do instead.
Didn't you read the post? You're bad and should feel bad.
I think sometimes it is good to replace words to reevaluate a situation.
Would "I don't want to be one" be a good argument for using ai image generation?
I'd say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person's use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what's wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about "innovation" and beating China at another dick-measuring contest.
The other concern is that ChatGPT's ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model's training. As the adage goes, "AI allows wealth to access talent, while preventing talent from accessing wealth." But since a ridiculous amount of data goes into these models, it's an amorphous ethical issue that's understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.
By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally-destructive practices, and eventually we'll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).
As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.
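A minimal sketch of what that can look like, assuming the Ollama daemon is installed and running and a distilled R1 variant has already been pulled (the model tag and the ollama Python client call are illustrative, not from the thread):

```python
# Query a locally hosted model instead of a commercial API.
# Assumes: `ollama pull deepseek-r1:7b` has been run beforehand.
import ollama

reply = ollama.chat(
    model="deepseek-r1:7b",  # illustrative distilled R1 variant
    messages=[{"role": "user",
               "content": "Write a short Python snippet that fits a line "
                          "to x=[1,2,3], y=[2,4,6]."}],
)

# Response supports dict-style access in the ollama Python client.
print(reply["message"]["content"])
```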
I do use AI (mostly like Google), but I don't think it's justified or OK, lol - I'm the problem, and I know it.
Yeah I do plenty of shit I know is a problem. Most of it just passively from living in a consumerist society.
yes, a lot of my immoral actions are because it's hard or against the grain to be more moral (e.g. being a strict vegan even when traveling or not easily accommodated, or using cars when technically I could bicycle, but on dangerous roads and long distances).
I have definitely spent most of my adult life going against the grain in extreme ways to be a "better" person, but I have been left victimized and disabled for it, so I'm trying to learn to be more moderate and not take big social problems as entirely my personal responsibility. Obviously it's not one extreme or the other, it's an interplay between personal and social / structural.
I've said it before and I'll say it again, one of my favorite things is the AI rp chatbots. They're stories written by me and an AI, for me, however the fuck I want to write them.
I used to do it with other people over the web - including my bestie, who I've been writing with for 20+ years now - but I don't write with other humans anymore.
AI solves the ghosting issue, the "life got in the way" issues, the "I'm just not into it anymore" issues, the "Oh, you wanna make this smutty, please for the love of god I hope you're not lying about being 26" issue, and finally, the biggest issue for me: "Please, I told you I'm happily married, please stop asking for my socials or email. I just wanna write fun angsty romance stories with you."
So I'm with you. I'm also the problem, it's me. But you know what? When I discovered these AI chatbots in February of this year, my doomscrolling was cut down to a third of what it was, and all of a sudden I was sleeping better and less angry.
I'm not gonna stop.
Uh huh.
I use it to help me solve tech and code issues, but only because searching the web for help has become so bad. LLM answers are almost always better, and I hate it.
Everything is bullshit. Everything sucks. Capitalism has ruined everything.
It’s so surreal when someone posts a meme about That Guy™ doing That Thing™ and then all of a sudden That Guy™ shows up in the comments, doing That Thing™
Like, can I get your autograph? You’re famous, bro!
the fact that it is theft
There are LLMs trained using fully open datasets that do not contain proprietary material... (CommonCorpus dataset, OLMo)
the fact that it is environmentally harmful
There are LLMs trained with minimal power (typically the same ones as above, as these projects cannot afford as many resources), and local LLMs use significantly less power than a toaster or microwave...
the fact that it cuts back on critical, active thought
This is a use-case problem. LLMs aren't suitable for critical thinking or decision-making tasks, so if it's cutting back on your "critical, active thought," you're just using it wrong anyway...
The OOP genuinely doesn't know what they're talking about and is just reacting to sensationalized rage bait on the internet lmao
Saying it uses less power than a toaster is not saying much. Yes, it uses less power than a thing that literally turns electricity into pure heat… but that's sort of a requirement for toast. That's still a LOT of electricity. And it's not required. People don't need to burn down a rainforest to summarize a meeting. Just use your earballs.
Yeah man, guess how much energy it would take to draw the 4K graphics on your phone screen in 1995?
Saying it uses less power than a toaster is not saying much
Yeah, but we're talking a fraction of 1%. A toaster uses 800-1500 watts for minutes; a local LLM uses <300 watts for seconds. I toast something almost every day. I'd need to prompt a local LLM literally hundreds of times per day for AI to have a higher impact on the environment than my breakfast, considering the toasting alone. I make probably around a dozen-ish prompts per week on average.
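For reference, the back-of-the-envelope arithmetic behind that comparison, using the comment's own rough figures (the specific wattages and durations are assumptions):

```python
# Toaster vs. local LLM, rough figures from the comment above.
toaster_wh = 1200 * 4 / 60      # 1200 W for 4 minutes -> 80 Wh
llm_prompt_wh = 300 * 5 / 3600  # 300 W for 5 seconds  -> ~0.42 Wh

# Prompts with the same energy budget as one round of toast:
print(round(toaster_wh / llm_prompt_wh))  # ~192
```

On those numbers, one toasting session buys roughly two hundred local prompts, consistent with the "hundreds of prompts per day" claim.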
That’s still a LOT of electricity.
That's exactly my point, thanks. All kinds of appliances use loads more power than AI. We run them without thinking twice, and there's no anti-toaster movement on the internet claiming there is no ethical toast and you're an asshole for making toast without exception. If a toaster uses a ton of electricity and is acceptable, while a local LLM uses less than 1% of that, then there is no argument to be made against local LLMs on the basis of electricity use.
Your argument just doesn't hold up and could be applied to literally anything that isn't "required". Toast isn't required, you just want it. People could just stop playing video games to save more electricity, video games aren't required. People could stop using social media to save more electricity, TikTok and YouTube's servers aren't required.
People don’t need to burn down a rainforest to summarize a meeting.
Strawman
You're implying the edge cases you presented are the majority of what's being used?
No, and that's irrelevant. Their post is explicitly not about the majority, but about exceptions/edge cases.
I am responding to what they posted (I even quoted them), showing that the position that "there is no ethical use for generative AI" and that there are no exceptions is provably false.
I didn't think it needed to be said because it's not relevant to this discussion, but: the majority of AI sucks on all fronts. It's bad for intellectual property, it's bad for the environment, it's bad for privacy, it's bad for people's brains, and it's bad at what it's used for.
All of these problems are not inherent to AI itself, and instead are problems with the massive short-term-profit-seeking corporations flush with unimaginable amounts of investor cash (read: unimaginable expectations and promises that they can't meet) that control the majority of AI. Once again capitalism is the real culprit, and fools like the OOP will do these strawman mental gymnastics and spread misinformation to defend capitalism at all costs.
Honestly I have nothing to add
Do people who self-host count? Like ollama? It's not like my PC is going to drain a lake.
Ethics and morality aside.
Yes, they count. The process of making and continually updating the underlying LLM is also what drains the lakes, and they are all made on pirated info (all the big ones, for sure; I've not heard of a widely available, usable model trained 100% on legally obtained data, but I suppose it could exist).
To that person, yeah self hosting still counts.
I feel this way about people who eat meat.
This is extreme
I have the same rant about the "this is the only funny AI meme" shit.
I like to read the anti-AI stuff, because ultimately a lot of the criticism is valid. But by god is there a lot of adolescent whining and hyperbole.
What bluesky client is that?
It is the webpage on Firefox.
I use LLMs in a way that reduces social anxiety from my autism, I give it details of a strange social interaction that I could not parse on my own and ask if I should worry about it, or if I should make any kind of amends or inquiries, or if I'm over thinking something and leave it alone.
I use LLMs to bounce my own ideas off of that I'm not comfortable bouncing off someone I know IRL.
I use LLMs to role play. (all kinds)
I use LLMs to find things that I can't find via conventional research methods.
And you know what, my perspective on using it for "productive/generative" purposes is nuanced. I get why artists and writers are upset; however, there is nothing magical about humans and their artistic abilities, and in terms of material economic impact, automation of various kinds has screwed working people in the past with generally a lot less pushback.
I do think that generated images and writing are pretty bland and nearly worthless without a ton of human-done work, at the moment anyway. Like, sure, I could generate a video of a cat dancing on a moving bus while a nuclear bomb goes off in the background, or whatever wacky shit, with a simple prompt, but what exactly am I even going to do with that?
Highly directed AI content that includes a lot of human work tends to actually be pretty amazing IMO.
And even though all the outrage pertains to intellectual work, this technology is likely going to result in a lot of blue-collar work being automated via "embodied" neural network AIs. In fact, it may be that it was needed for this kind of automation to really take off at all. It's not just white-collar work. We aren't just automating slop content and corporate-purposed art. The day is coming when stuff like laundry, factory/warehouse work, kitchen work, etc. is also all being done by robots.
I'd say this is also a bit about extremism. I mean it's not wrong to be entirely against AI. I don't think I am. For example if we managed to do it ethically, I wouldn't have much of an issue with assistance systems in cars, smart home voice assistants and machine translation. I'm more opposed the more it gets towards generative AI. And because we do it the opposite of ethical in practice. I'm not necessarily opposed because of the thing itself or towards the science behind it, but because of all the bad consequences it comes with. But people like me aren't allowed a more nuanced opinion or to draw the line somewhere unless it's a perfect 0% or 100% and I feel people expect me to take some super extreme position. I still consider myself part of the anti-AI community overall, but both sides frequently misunderstand me. So I'm still subscribed to your posts and put up with the personal hate.
(Edit: Of course the take in the screenshot is stupid, though. There are a lot of compelling arguments against AI. Whether it fixes your bike or computer code isn't a matter of opinion, and it might benefit someone, but that has nothing to do with justifying the cost and side-effects of AI on other people.)
It's basically "I have no creative talent or skill so I'll use this because it'll teach those artists a lesson for acting so superior." It's a completely delusional disconnect with the reality of being an actually creative person in any way. Especially one when it comes to creatives trying to earn a living while must people just want their shit for free. Go harass billionaires for their shit and leave there starving artists alone.
so you think that artists, "actually creative people", don't use genAI?
🤔
Slopvangelical LLM thumpers.
Edit: is this being misinterpreted as a pro-AI statement?
Sorry but you can deny and hate all you want, it’s not going anywhere
Neither is climate change, but we should still combat it where possible.
Funny, that. Fighting against AI could be seen as fighting against climate change, considering the large carbon footprint it has.
I think it's going to contract massively. Like nft's.
Texans are facing a water shortage due to over 900 MILLION gallons of water being used to cool AI datacenters. Do you think that's sustainable?
doesn't Texas exist because somebody invented air conditioning?
can't we cut down on texans if they use so much water?
Do you want me to list the techbrodude technologies that were "not going anywhere" in past decades that have effectively died outside of tiny die-hard communities still living a delusion?
Remember when the Metaverse was the next great thing that wasn't going anywhere? Remember when cryptocurrency was going to wipe out banking forevermore? Remember when NFTs were going to revolutionize artists getting paid for their work? Segway or its somehow-lamer cousin "hoverboards"? Augmented Reality? 3D TVs? Theranos? Google Wave?
Hell, just go visit the Google Graveyard for a list of "hot" technologies that withered and died on the vine. (And quite a few lame technologies that shouldn't have ever even been on the vine.)
Remember all that?
But this time the techbrodudes have it right, despite there not being a viable business model; despite every AI vendor in the world burning through money faster than dumping that same cash into a forest fire. It's not going anywhere!
Every grift has two parties: the grifter and the sucker. You're not the former.
sadly. I don't have enough money to turn this shit-hose off.
Gen AI is neat, and I use it for personal processes including code, image gen, llm/chat; but it is sooooo faaaar awaaaay from being a real game changer - while all the people poised to profit off it claim it is - that it's just insane to claim it's the next wave. evidence: all the creative (photo/art/code/etc) people who are adamantly against it and have espoused reasoning.
There's another story on my feed about a 10-year-old refactoring a code base with a LLM. Go look at the comments from actual experts that take into account things like unit tests, readability, manageability, security. Humans have more context than any AI will.
LLMs are not intelligent. They are patently not. They make shit up constantly, since that is exactly what they do. Sometimes, maybe even most of the time, the shit they make up is mostly accurate... but do you want to rely on them?
When a doctor prescribes you the wrong drug, you can sue them as a recourse. When a software company has a data breach, there is often a class-action (better than nothing) as a recourse. When an AI tells you to put glue on your pizza to hold the toppings, there is no recourse, since the AI is not a legal thing and the company disclaims all liability for its output. When an AI denies your health insurance claim because of inscrutable reasons, there is no recourse.
In the first two, there is a penalty for being wrong, which is in effect an incentive to be correct -- to be accurate, to be responsible.
In the last, as an AI llm/agent/fuckingbuzzword, there is no penalty and no incentive. The AI is just as good as its input, and half the world is fucking stupid, so if we average out all the world's input, we get "barely getting by" as a result. A coding AI is at least partially trained on random stackoverflow posts asking for help. The original code there is wrong!
Sadly, it's not going anywhere. But people who rely on it will find short-term success for long-term failure. And a society relying on it is doomed. AI relies on the creative works that already exist. If we don't make any new things, AI will stagnate and die. Where will we be then?
There are places AI/LLM/machine learning can be used successfully and helpfully, but they are niche. The AI bros need to be figuring out how to quickly meet a specific need instead of trying to meet all needs at the same time. Think early-2000s Folding@home, how to convince republicans to wear a fucking mask during covid, why we shouldn't just eat the billionaires*.
*Hermes-3 says cannibalism is "barbaric" in most cultures, but otherwise doesn't give convincing arguments.
Sex bots are taking work away from sex workers. Turning genuine sensuality into some kind of horrible mimicry of a genuine connection. How am I supposed to pay my bills
Marx would have been pro AI and anti capitalist
AI saved my pet's life. You won't convince me it's 100% all bad and there's no "right" way to use it.
The way it is trained isn't intellectual theft, imo.
It only becomes intellectual theft if it is used to generate something that then competes with and takes away profits from the original creators.
Thus the intellectual theft only kicks in at generation time, but the onus is still on the AI owners for not preventing it
However if I use AI to generate anything that doesn't "compete" with anyone, then "intellectual theft" doesn't matter.
For example, when I used it to assist with diagnosing a serious issue my pet was having 2 months ago, one that was stumping even our vet, it got the answer right, which surprised our vet when we asked them to check a very esoteric possibility (which they dubiously checked, and then they were shocked to find something there).
They asked us how on earth we managed to guess to check that place of all things, how could we have known. As a result, we caught the issue very early, when it was easy to treat, and saved our pet's life.
It was a gallbladder infection, and her symptoms had like 20 other more likely causes individually.
But when I punched all her symptoms into GPT, every time, it asserted it was likely the gallbladder. It had found some papers on other animals and mammals and how, in rare cases, gallbladder infections cause that specific combo of symptoms, and it encouraged us to check it out.
If you think "intellectual theft" still applies here, despite it being used to save an animal's life, then you are the asshole. No one "lost" profit or business to this, no one's intellectual property was infringed, and consuming the same amount of power it takes to cook one pizza in my oven to save my pet's life is a pretty damn good trade, in my opinion.
So, yes. I think I used AI ethically there. Fight me.
Regular search could have also surfaced that information
Not at a tremendously lower power cost, anyway. My laptop draws 35W.
5 minutes of GPT is genuinely less power consumption than several hours of my laptop being actively used to do the searching manually. Laptops burn non-trivial amounts of power when in use. Anyone who has held a laptop on their lap can attest to the fact that they aren't exactly running cold.
Hell, even a whole day of using your mobile phone is non-trivial in power consumption; they also draw 8-10W or so.
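The same style of estimate for this comparison (the 35W laptop figure is the commenter's own; the ~3 Wh per ChatGPT-style query is a commonly cited outside estimate, not a measurement):

```python
# Manual web searching vs. a short GPT session, rough figures only.
laptop_search_wh = 35 * 3  # 35 W laptop for ~3 hours of searching -> 105 Wh
gpt_session_wh = 3 * 5     # ~3 Wh per query (assumed), 5 queries  -> 15 Wh

print(laptop_search_wh, gpt_session_wh)  # 105 vs 15
```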
Using GPT for dumb shit is arguably unethical, but only in the sense that baking cookies in the oven is. You gonna go and start yelling at people for making cookies? Cooking up one batch of cookies burns WAAAY more energy than fucking around with GPT. And yet I don't see people going around bashing people for using their ovens to cook things as a hobby.
There's no good argument against what I did, by all metrics it genuinely was the ethical choice.
The singular of data is not anecdote.
For every claim people make of how "Gen AI saved my <insert whatever>," you can find a dozen stories of people being actively harmed by Gen AI. Stopped watch and all that jazz.
Lmfao, this is the most childish take I could possibly imagine.
You cannot avoid the problems with something by sticking your head in the sand and pretending like it doesn't exist and will go away.
Paint is poison. I don't see people making anti-print posts. Not to diss on your anti-AI zealotry (I am your asshole, you see; I love anti-AI). I just would like to see more anti-print posts, if the environment is your concern.
When it comes to intellectual property… are you parroting the corporations that profit from it? Is intellectual property the solution to keeping creative people alive in an exploitative economy?
I just would like to see more anti-print posts
Sounds like a niche just waiting for you to fill it, my friend.
:)
The environment is more my area of interest, so I'm going to focus on that part.
Before I looked into AI's environmental impacts, I too had thought the discourse might be overfocusing a bit on the wrong areas, but I didn't realize how much the orders of magnitude had changed. Before the large AI models we're seeing now, data centers weren't a major source of change in energy consumption. Overall power consumption in places like the US had been mostly level for the previous 10-20 years (up until 2020).

But AI is not like most past datacenter workloads: it is constant, high power usage. Especially for model training, it's using the equipment at full utilization almost the entire time, with higher-energy chips and far more chips overall. And beyond training, typical datacenter workloads before the AI boom weren't very energy-hungry per request, but that isn't true of AI either. The rapid increase in energy consumption from it is what's driving the issue.
It's causing us to delay the closing of fossil fuel plants. It's making previously declining datacenter energy use reverse direction, with datacenter energy usage projected to increase by 165% by 2030.
In Europe too, a data center-led surge in power demand is under way, after 15 years of decline in the power sector. Having surveyed utilities across the continent, Goldman Sachs Research found that the number of connection requests received by power distribution operators (a leading indicator of future demand) has risen exponentially over the past couple of years, mostly driven by data centers.
If we were talking about the water usage of AI and someone brought up agriculture's (especially animal agriculture's) more dominant use, that would be fair to mention and talk about. But that doesn't excuse AI's water usage; it just poses another area to also focus on.
First of all, intellectual property rights do not protect the author. I'm the author of a few papers and a book, and I do not hold the intellectual property rights to any of them; like most authors, I had to sign them over to the publishing house.
Secondly, your personal carbon footprint is bullshit.
Thirdly, everyone in the picture is an asshole.