Is It Just Me?
My boss had GPT make this informational poster thing for work. It's supposed to explain stuff to customers and is riddled with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.
Not just you. AI is making people dumber. I'm frequently correcting the mistakes of colleagues who use it.
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
At least knowingly. It seems some customer service stuff feeds it direct to AI before any human gets involved.
I once asked a "customer service rep" to write a Python script. It did.
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a lemmy server, by the way. Hello!
Reminds me of the way NFTs were pushed. I don’t think any regular person cared about them or used them, it was just astroturfed to fuck.
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes who have decided what the future will look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
Classic Torment Nexus moment over and over again really
It did help me make a basic script and add it to Task Scheduler so it runs and fixes my broken WiFi card without me having to do it manually. (Or rather, it helped me avoid asking arrogant people who get smug when I tell them I haven't opened a command prompt in ten years.)
I feel like I would have been able to do that easily 10 years ago, because search engines worked, and the 'web wasn't full of garbage. I reckon I'd have near zero chance now.
I actually ended up switching to Kagi for this exact reason. Google is basically AI at the top, usually spouting nonsense, then sponsored posts, then a bunch of SEO-optimized BS.
Thankfully, paying for search circumvents the ads, AI is off by default (it's available, but opt-in), and the results have generally been closer to 2010s Google.
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact checker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
Anyone in engineering knows the first 90% of your goal is the easy bit. You'll then spend 90% of your time on the remaining 10%. Same for AI and getting past the uncanny valley with art.
it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product
Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.
And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.
It’s a goal. Somehow. It’s utter insanity.
The goal is "[Replace you money-needing meatsacks with] AI" but the suits don't want to say it that clearly.
The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can't trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most, and even I've grown to hate this scenario. I am a subject matter expert on some things, and I've still had people trying to waste my time to prove their AI hallucinations wrong.
Do you also check if they listen to Joe Rogan? Fox News? Nobody can be trusted. AI isn't the problem; it's that it was trained on human data, and people are an unreliable source of information.
AI also just makes things up. Like how RFK Jr's "Make America Healthy Again" report cites studies that don't exist and never have, or literally a million other examples. You're not wrong about Fox News and how corporate and Russian-backed media distort the truth and push false narratives, and you're not wrong that AI isn't *the* problem, but it is certainly *a* problem, and a big one at that.
Joe Rogan doesn't tell them false domain knowledge 🤷
I feel the same way. I was talking with my mom about AI the other day and she was still on the "it's not good that AI is trained on stolen images, how it's making people lazy and taking jobs away from ppl" which is good, but I had to explain to her how much one AI prompt costs in energy and resources, how many people just mindlessly make hundreds of prompts a day for largely stupid shit they don't need and how AI hallucinates, is actively used by bad actors to spread mis- and disinformation and how it is literally being implemented into search engines everywhere so even if you want to avoid it as a normal person, you may still end up participating in AI prompting every single fucking time you search for anything on Google. She was horrified.
There definitely are some net positives to AI, but currently the negatives outweigh the positives and most people are not using AI responsibly at all. I have little to no respect for people who use AI to make memes or who use it for stupid everyday shit that they could have figured out themselves.
The most dystopian shit I have seen recently was when my boyfriend and I went to watch Weapons in the cinema and we got an ad for an AI assistant. The ad is basically this braindead bimbo at a laundromat deciding to use AI to tell her how to wash her clothes instead of looking at the fucking care labels on her clothes and putting two and two together. She literally takes a picture of the label, has the AI assistant tell her what to do, and then goes "thank you so much, I could have never done this without you".
I fucking laughed in the cinema. Laughed and turned to my boyfriend and said: this is so fucking dystopian, dude.
I feel insane for seeing so many people just mindlessly walking down this path of utter retardation. Even when you tell them how disastrous it is for the planet, it doesn't compute in their heads because it is not only convenient to have a machine think for you. It's also addictive.
You are not correct about the energy use of prompts. They are not very energy intensive at all. Training the AI, however, is breaking the power grid.
Sam Altman, or whatever the fuck his name is, asked users to stop saying please and thank you to ChatGPT because it was costing the company millions. "Please" and "thank you" are the least power-hungry prompts ChatGPT gets, and even those are costing the company millions. Probably tens of millions of dollars, if the CEO made a public comment about it.
You're right training is hella power hungry, but even using gen ai has heavy power costs
Maybe not an individual prompt, but with how many prompts are made for stupid stuff every day, it will stack up to quite a lot of CO2 in the long run.
Not denying that training AI demands way more energy, but that doesn't really matter, as manufacturing, training, and millions of people using AI all add up to the same bleak picture long term.
Considering how the discussion about environmental protection has only just started to be taken seriously and here they come and dump this newest bomb on humanity, it is absolutely devastating that AI has been allowed to run rampant everywhere.
According to this article, 500,000 AI prompts emit as much CO2 as a round-trip flight from London to New York.
I don't know how many times a day 500,000 AI prompts are reached, but I'm sure it is more than twice or even thrice. As time moves on it will be much more than that. It will probably outdo the number of actual flights between London and New York in a day. Every day. It will probably also catch up to whatever energy it took to train the AI in the first place and surpass it.
Because you know. People need their memes and fake movies and AI therapist chats and meal suggestions and history lessons and a couple of iterations on that book report they can't be fucked to write. One person can easily end up prompting hundreds of times in a day without even thinking about it. And if everybody starts using AI to think for them at work and at home, it'll end up being many, many, many flights back and forth between London and New York every day.
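To put the article's figure in perspective, here's the back-of-the-envelope arithmetic. The daily prompt volume is a made-up round number for illustration, not a measured figure:

```python
# Figure cited in the article above: 500,000 prompts ≈ one round-trip
# London-New York flight in CO2.
PROMPTS_PER_FLIGHT = 500_000

# Hypothetical assumption: one billion prompts per day worldwide.
daily_prompts = 1_000_000_000

flights_per_day = daily_prompts / PROMPTS_PER_FLIGHT
print(f"{flights_per_day:.0f} London-New York round trips per day")  # 2000
```

Scale the daily_prompts guess up or down as you like; the ratio is the point.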
One thing I don't get with people fearing AI is when something adds AI and suddenly it's a privacy nightmare. Yeah, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that process data and output something; they don't inherently give firms any additional data. Now, in some cases that means data that previously wasn't, or shouldn't be, sent to a server is now being sent. But I've seen people complain about privacy so often in cases where I don't understand why AI is the tipping point. If you don't trust the company not to store your data when using AI, why trust it in the first place?
It's more about them feeding it into an LLM which then decides to incorporate it in an answer to some random person.
I'm putting a presentation on at work about the downsides of AI next month, please feed me. Together, we can stop the madness and pop this goddamn bubble.
Gemini, feed them some downsides of AI 😁
Get thee hence to the fuck_ai community. You will be given sustenance.
Ask any AI which states have the letter R in them. Watch them get it wrong, and show to colleagues how dangerous it is to rely on their results as fact.
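For anyone running this demo: the ground truth takes a few lines of ordinary code, which makes a nice side-by-side with whatever the chatbot answers.

```python
# Deterministic answer to "which states have the letter R in them" --
# the kind of question an LLM often flubs but trivial code never does.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# Case-insensitive check so "Rhode Island" counts.
with_r = [s for s in STATES if "r" in s.lower()]
print(len(with_r), with_r)  # 21 states
```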
Unfortunately the masses will do as they're told. Our society has been trained to do this. Even those that resist are playing their part.
On the contrary: society has repeatedly rejected a lot of ideas that industries have come up with.
HD DVD, 3D TV, cryptocurrency, NFTs, LaserDiscs, 8-track tapes, UMDs. A decade ago everyone was hyping up how VR would be the future of gaming, yet it's still a niche novelty today.
The difference with AI is that I don't think I've ever seen a supply-side push this strong before. I'm not seeing a whole lot of demand for it from individual people. It's "oh this is a neat little feature I can use," not "this technology is going to change my life" the way the laundry machine, the personal motor vehicle, the telephone, or the internet did. I could be wrong, but I think that as long as we can survive the bubble bursting, we will come out on the other side with LLMs being a blip on the radar. And one consequence will be that if anyone makes a real AI, they will need to call it something else for marketing purposes, because "AI" will be ruined.
AI's biggest business is (if not already, it will be) surveillance systems sold to authoritarian governments worldwide. Israel is using it in Gaza. It's both used internally and exported as a product by China. Not just cameras on street corners doing facial recognition, but monitoring the websites you visit, the things you buy, the people you talk to. AI will be used on large datasets like these to label people as dissidents, to disempower them financially, and to isolate them socially. And if the AI hallucinates in this endeavor, that's fine. Better to imprison 10 innocent men than to let 1 rebel go free.
In the meantime, AI is being laundered to the individual consumer as a harmless if ineffective toy. "Make me a portrait, give me some advice, summarize a meeting," all things it can do if you accept some amount of errors. But given this domain of problems it solves, the average person would never expect that anyone would use it to identify the first people to pack into train cars.
VR was and is also still a very inaccessible tool for most people. It costs a lot of money and time to even get to the point where you're getting the intended VR experience and that is what it mostly boils down to: an experience. It isn't convenient or useful and people can't afford it. And even though there are many gamers out there, most people aren't gamers and don't care about mounting a VR headset on their cranium and getting seasick for a few minutes.
AI is not only accessible and convenient, it is also useful to the everyday person, if the AI doesn't hallucinate like hell, that is. It has the potential to optimize workloads in jobs with a lot of paperwork, calculations and so on.
I completely agree with you that AI is being pushed very aggressively in ways we haven't seen before and that is because the tech people and their investors poured a lot of money into developing these things. They need it to be a success so they can earn their money back and they will be successful eventually because everybody with money and power has a huge interest in this tool becoming a part of everyday life. It can be used to control the masses in ways we cannot even imagine yet and it can earn the creators and investors a lot of money.
They are already making AI computers. According to some, they will entirely replace the kinds of computers we are used to today. From what I can understand, they are meant to be preferable to the current cloud AI setups that are burning our planet to a crisp with the number of data centers needed to keep them active. Supposedly the AI will run locally on the laptop and therefore demand fewer resources, but I'm so fucking skeptical about all this shit that I'm waiting to see how much energy a computer with an AI operating system will actually swallow.

I'm too tech-ignorant to understand the ins and outs of what this and that means, but we are definitely going to have to accept that AI is here to stay, and the current setup, with cloud AIs and forced LLMs in every search engine, is a massive environmental nightmare. It probably won't stop or change a fucking lick, because people don't give a fuck as long as they are comfortable, and the companies are getting people to use their trash tech just like they wanted, so they won't stop it either.
HDDVDs weren’t rejected by the masses they were a casualty in Sony’s vendetta against the loss of Beta and DAT. Both of which were rejected by industry not consumers (though both were later embraced by industry and Betas even outlasted VHSs). They would have won out for the same reasons that Sony lost the previous format wars (insistence on licensing fees) except this time Sony bought out Columbia and had a whole library of video and a studio to make new movies to exclusively release on their format. Essentially the supply side pushing something until consumers accepted it, though to your point not quite as bad as AI is right now.
8-Tracks and laserdiscs were just replaced by better formats (Compact Cassette and Video CD/DVD respectively). Each of them were also replacements for previous formats like Reel to Reel and CEDs.
UMDs are only gone because flash media got better, and because Sony opted to use a cheaper scratch-resistant coating instead of a built-in case for later formats (like Blu-ray). Also, UMDs were a replacement for, or at least inspired by, an earlier format called MiniDisc.
Capitalism’s biggest feat has been convincing people that everything is the next big thing and nothing that has come before is similar when just about everything is just a rinse and repeat, even LLMs… remember when Watson beat Ken Jennings?
See also: Cars, appliances, consumer electronics, movies, food, architecture.
We are ruled by the market and the market is ruled by the lowest common denominator.
I think a healthier perspective would involve more shades of grey. There are real issues with power consumption and job displacement. There are real benefits with better access to information and getting more done with limited resources. But I expect bringing any nuance into the conversation will get me downvoted to hell.
There are real benefits with better access to information and getting more done with limited resources.
If there were, someone would have made that product and it would be profitable.
But they ain't and it isn't, because those benefits are minuscule. The only cases we know of where that was the actual story turned out to be outsourcing to India and calling it AI.
This is a great representation of why not to argue with someone who debates like this.
Arguments like these are like Hydras. Start tackling any one statement that may be taken out of context, or have more nuance, or is a complete misrepresentation, and two more pop up.
It sucks because true, good points get lost in the tangle.
For instance, there are soft science, social interaction areas where AI is doing wonders.
Specifically, in the field of law, now that lawyers have learned not to rely on AI for citations, they are instead offloading hundreds of thousands or millions of pages of documents that they were never actually going to read, and getting salient results from allowing an AI to scan through them to pull out interesting talking points.
Pulling out these interesting talking points and fact checking them and, you know, A/B testing the ways to interact and bring them in front of the jury with an AI has made it so that many law firms are getting thousands or millions of dollars more on a lawsuit than they anticipated.
And you may be against American law for all of its frivolous plaintiffs' lawsuits or something, but each of these outcomes are decided by human beings, and there are real damages that are lifelong that are being addressed by these lawsuits, or at least in some way compensated.
The more money these plaintiffs get for the injuries that they have to live with for the rest of their lives, the better for them, and AI made the difference.
Not that lawyers are fundamentally incapable or uncaring, but for every superstar lawyer on the planet, there are 999 who are working hard and just do not have the raw plot-armor Deus Ex Machina dropping everything directly into their lap that they would need to operate at that level.
And yes, if you want to be particular, a human being should have done the work. A human being can do the work. A human being is actually being paid to do the work. But when you can offload grunt work to a computer and get usable results from it that improves a human's life, that's the whole fucking reason why we invented computers in the first place.
I'd like to hear more about this because I'm fairly tech savvy and interested in legal nonsense (not American) and haven't heard of it. Obviously, I'll look it up but if you have a particularly good source I'd be grateful.
I have lawyer friends. I've seen snippets of their work lives. It continues to baffle me how much relies on people who don't have the waking hours or physical capabilities to consume and collate that much information somehow understanding it well enough to present a true, comprehensive argument on a deadline.
Yes, you're the weird one. Once you realize that 43% of the USA is FUNCTIONALLY ILLITERATE you start realizing why people are so enamored with AI. (since I know some twat is gonna say shit: I'm using the USA here as an example, I'm not being us-centric)
Our artificial intelligence is smarter than 50% of the population (don't get me started on 'hallucinations'... do you know how many hallucinations the average person has every day?!) and is stupider than the top 20% of the population.
The top 20% wonder if everyone has lost their fucking minds, because to them it looks like it is completely worthless.
It's more just that the top 20% are naive to the stupidity of the average person.
I have to say, I don't agree with some of your other points elsewhere here, but this makes a lot of sense.
Our artificial intelligence is smarter than 50% of the population
"Smartness" and illiteracy are certainly different things, though. You might be incapable of reading, yet be able to figure out a complex escape room via environmental cues that the most high quality author couldn't, as an example.
There are many places an AI might excel compared to these people, and many areas it will fall behind. Any sort of unilateral statement here disguises the fact that while a lot of Americans are illiterate, stupid, or even downright incapable of doing simple tasks, "AI" today is very similar, just that it will complete a task incorrectly, make up a fact instead of just "not knowing" it, or confidently state a summary of a text that is less accurate than first grader's interpretation.
Sometimes it will do better than many humans. Other times, it will do much worse, but with a confident tone.
AI isn't necessarily smarter in most cases, it's just more confident sounding in its incorrect answers.
Yeah, when I refer to intelligence here I don't mean actual intelligence. AI isn't "smart" (it's not intelligent in the classic sense, it doesn't even think), it's just good at regurgitating what it's been trained on.
But it turns out -- That's kind of what humans do too. It's worth having a philosophical discussion on what intelligence REALLY is.
It's also much less incorrect than your average person would be on a much larger library of content. I think the real litmus test for AI is to compare it to an average person. The average person messes up constantly; also likely covers it up or course-corrects after they've screwed up. I don't think it's fair to expect perfectly correct responses out of AI at all; because there is absolutely no human that could reach those heights at an equal level. Look at competitive knowledge games where AI competes - it stomps some of our most intelligent people, and quite often.
Billionaires: invests heavily in water.
Billionaires: "In the future there's going to be water wars. You need to invest NOW! Quick before it's too late. I swear I'm not just trying to pump the stock."
Billionaires: "Water isn't accruing value fast enough. Let's invent a product that uses a shit ton of it!"
Billionaires: "No one likes or is using the product. Force them to. Include it in literally all software and every website. Make it so they're using the product even when they don't know they're using it. Include it in every web search. I want that water gone by the end of this quarter!"
The way I look at it is that I haven't heard anything about NFTs in a while. The bubble will burst soon enough when investors realize that it's not possible to get much better without a significant jump forward in computing technology.
We're running out of atomic room to make things smaller only a little more slowly than we're running out of ways to even make smaller things, and for a computer to think like a person, as well as and as fast as one, we need processing power to continue increasing exponentially per unit of space. Silicon won't get us there.
This is a good take for a lot of reasons.
In part because NFTs are still used and have some interesting applications, but 90% of the marketing and use cases were companies trying to profit from the hype train.
OTOH you haven't heard of NFTs in a while because AI hype replaced it, so... what hell spawn is going to replace the AI hype?
I'm calling it now– it's quantum computing.
I have some friends who work in it, and I've watched and read damn near everything I can on it (including a few uni courses). It is neat, it has uses, but it will not instantly transform all computing or invalidate all security or anything like that. It's gonna be oversold as fuck.
3Blue1Brown has great videos on it. Grover's algorithm, the best application we can think to try, is √N faster than classical computing. That's a lot faster for intense stuff like protein folding, but it's only a square-root speedup, not an exponential one. Brute-forcing SHA-256 would still take an eternity, just a smaller eternity.
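A quick sketch of what that "smaller eternity" means, assuming (purely hypothetically) a quantum computer that could run a trillion Grover iterations per second:

```python
import math

# Grover's sqrt(N) speedup applied to brute-forcing a 256-bit preimage:
# classical search needs ~2^256 tries, Grover needs ~2^128 iterations.
classical = 2**256
grover = math.isqrt(classical)  # exactly 2^128

# Hypothetical rate: 10^12 Grover iterations per second.
seconds = grover / 1e12
years = seconds / (60 * 60 * 24 * 365)
print(f"about 1e{math.floor(math.log10(years))} years")  # about 1e19 years
```

Ten quintillion years is a big improvement over the classical figure and still vastly longer than the age of the universe.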
Meanwhile every company finds out the week after they lay off everyone that the billions they poured into their shitty "AI" to replace them might as well have been put in bags and set on fire
Me and the homies all hate AI. The only thing people around me seem to use AI for is essentially just Snapchat filters. Those people couldn't muster a single fuck about the harms AI has done, though.
The only thing people around me seem to use ai for is essentially code completion, test case development and email summaries. I don't know a single person who uses Snapchat. It's like the world is diverse and tools have uses.
"I hate tunnel boring machines, none of my buddies has a use for a tunnel boring machine, and they are expensive and consume a ton of energy"
I can see that you’re trying to mirror my comment, I just fail to see the point you’re trying to make. Cool, you know people who have a somewhat legitimate use for the unprofitable, unreliable technology that’s built on rampant theft and consumes obscene amounts of power and water. And?
I absolutely agree that AI is becoming a mental crutch that a disturbing number of people are snatching up and hobbling around on. It feels like the setup of Wall-E, where everyone is rooted in their floating rambler scooters.
I think the fixation on individual consumer use of AI is overstated. The bulk of the AI's energy/water use is in the modeling and endless polling. The random guy asking "@Grok is this true?" is having a negligible impact on energy usage, particularly in light of the number of automated processes that are hammering the various AI interfaces far faster than any collection of humans could.
I'm not going to use AI to write my next adventure or generate my next character. I'm not going to bemoan a player who shows up to game with a portrait with melted fingers, because they couldn't find "elf wizard in bearskin holding ice wand while standing on top of glacier" in DeviantArt.
For the vast majority of users, this is a novelty. What's more, it's a novelty that's become a stand-in for the OG AI of highly optimized search engines that used to fulfill the needs we're now plugging into the chatbot machine. I get why people think it sucks and abstain from using it. I get why people who use it too much can straight up drive themselves insane. I get that our Cyberpunk-style waste management strategy is going to get one of the next few generations into a nightmarish blight. But I'm not going to hang that on the head of someone who wants to sit down at a table with their friends, look them in the eye, and say "Check out this cool new idea I turned into a playable character".
Because if you're at the table and you're excited to play with other humans in a game about going out into the world on adventures, that's as good an antidote to AI as I could come up with.
And hey, as a DM? If you want to introduce the Mind Flayer "Idea Sucker" machine that lures people into its brain-eating maw by promising to give them genius powers? And maybe you want to name the Mind Flayer Lord behind the insidious plot Beff Jezos or Mealon Husk or something? Maybe that's a good way to express your frustration with the state of things.
As someone who's GM'ed tabletop games, I find it interesting that players who froth at the mouth at the existence of an AI token because "AI commits piracy and art theft" then turn around and insist that I grab images from an internet search. If you've ever browsed an art site, you'd know that doing so is actual piracy and art theft, especially with artists who have 40-page terms and conditions and an interesting number of "use in tabletops forbidden" clauses.
What's more, its a novelty that's become a stand-in for the OG AI of highly optimized search engines that used to fulfill the needs we're now plugging into the chatbot machine.
I don’t think it’s temporary. That was the whole goal: suck up everybody’s work, dark-magick it into a chatbot, and voilà, no more need for anyone’s webpage.
The fact that it's broken what was working is more than just a metaphor for gen-AI in any setting. It’s fundamentally changed it for the worse and we’ll never get the unfucked version back.
I may dress like an android, but I’m humanist as all hell. Down with AI slop, jail those responsible for wildlife destruction and theft from artists, and banish this slop to the history books!
Have you heard of these things called humans? I think this is more a reflection of them. Books ate trees and corrupted the youth, tv rotted your brain and made you go blind, the internet made people lazy. Wait until I tell you about gasp auto-correct or better yet leet speak! The horror. Clearly we are never recovering from either of those. In fact, I’m speaking to you now in emojis. And wait until you learn about clutches pearls Wikipedia— ah the horror!
Is tech and its advancements perfect? No. Can people do better? Yes. Are criticisms important? Sure are. But panic and fighting a rising tech? You’re probably not going to win.
Spend time educating people on how to be more ethical with their tech use and absolutely pressuring companies to do the same. Taking a club to a computer didn’t stop the rise of the word processor or the spread of Wikipedia madness. But we can control how we consume and relate to tech and what our demands of their creators are.
PS— do you even know how to read and write cursive? > punchable smug face goes here. <
I mean - propaganda has in fact gotten us to the shittiest administration possible. AI hype is off-the-scale for anything - more than The Space Race, more than, well, anything. And it isn’t even useful!
It’s far and away a different thang than a new medium about, by, and for humans.
I agree. I would say we’re at the cusp of a new technological revolution. Our world is changing fundamentally and rapidly.
Is there a way for me to take a picture of food and find its nutritional values without AI? I sometimes ask duck.ai because, when making a tortilla for example, I can read the values for the tortilla itself, but I don't have a way to check the same for the meat and the other stuff I put in it.
You're probably just gonna have to get better at guesstimating, (e.g. by comparing to similar pre-made options and their nutrition labels), or use an app for tracking nutrition that integrates with OpenFoodFacts and get a scale to weigh your ingredients. (or a similar database, though most use OpenFoodFacts even if they have their own, too)
I don't really know of any other good ways to just take photos and get a good nutritional read, and pretty much any implementation would use "AI" to some degree, though probably more a dedicated machine learning model over an LLM, which would use more power and water, but the method of just weighing out each part of a meal and putting it in an app works pretty well.
Like, for me, I can scan the barcode of the tortillas I buy to import the nutrition facts into the (admittedly kind of janky) app I use (Waistline), then plop my plate on my scale, put in some ground beef, scan the barcode from the beef packaging, and then I can put in how many grams I have. Very accurate, but a little time consuming.
Not sure if that's the kind of thing you're looking for, though.
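For what it's worth, the arithmetic behind that weigh-and-log step is trivial, since OpenFoodFacts reports values per 100 g. A minimal sketch of what the app does under the hood (the tortilla numbers here are made up for illustration, not pulled from the database):

```python
# Scale per-100g nutrition facts (the format OpenFoodFacts uses) to a weighed portion.

def portion_nutrition(per_100g: dict, grams: float) -> dict:
    """Scale nutrient values given per 100 g to an actual weighed portion."""
    factor = grams / 100.0
    return {nutrient: round(value * factor, 1) for nutrient, value in per_100g.items()}

# Illustrative per-100g values for a flour tortilla (not real database numbers)
tortilla_per_100g = {"kcal": 310, "protein_g": 8.0, "carbs_g": 52.0, "fat_g": 7.0}

print(portion_nutrition(tortilla_per_100g, 64))  # one 64 g tortilla
```

Scanning the barcode basically just saves you typing the per-100 g values in by hand.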
Wow, I am old. This has never in my life been an issue? I just used a calorie counter and people’s own recipes for estimates. I guess that would be the old fashioned way of doing this and probably what AI is doing most of the time. Pulling a recipe, looking at the ingredients and quantities and spitting back some values. Granted it can probably do it far faster than we can. But, I got by with that method for decades…
The orphan crushing machine needs its line go up as much as everyone else, don't be mean to it!
Why would you want to stop enhancing it? How else can we get those sweet stories about heroes saving orphans? Have you seen the news lately! We NEED this.
I can't take anyone seriously that says it's "trained on stolen images."
Stolen, you say? Well, I guess we're going to have to force those AI companies to put those images back! Otherwise, nobody will be able to see them!
...because that's what "stolen" means. And no, I'm not being pedantic. It's a really fucking important distinction.
The correct term is "copied," but that doesn't sound quite as severe. Also, if we want to get really specific, the images are presently on the Internet. Right now. Because that's what ImageNET (and similar) is: a database of URLs pointing to images that people are offering up for free to anyone on the Internet who wants them.
Did you ever upload an image anywhere publicly, for anyone to see? Chances are someone could've annotated it and included it in some AI training database. If it's on the Internet, it will be copied and used without your consent or knowledge. That's the lesson we learned back in the 90s and if you think that's not OK then go try to get hired by the MPAA/RIAA and you can try to bring the world back to the time where you had to pay $10 for a ringtone and pay again if you got a new phone (because—to the big media companies—copying is stealing!).
Now that's clear, let's talk about the ethics of training an AI on such data: There's none. It's an N/A situation! Why? Because until the AI models are actually used for any given purpose they're just data on a computer somewhere.
What about legally? Judges have already ruled in multiple countries that training AI in this way is considered fair use. There's no copyright violation going on... Because copyright only covers distribution of copyrighted works, not what you actually do with them (internally; like training an AI model).
So let's talk about the real problems with AI generators so people can take you seriously:
1. Fake/nonconsensual imagery of real people.
2. Prompting a model into reproducing a copyrighted image.
3. People publishing unchecked AI output.
The first one seems impossible to solve (to me). If someone generates a fake nude and never distributes it... Do we really care? It's like a tree falling in the forest with no one around. If they (or someone else) distribute it though, that's a form of abuse. The act of generating the image was a decision made by a human—not AI. The AI model is just doing what it was told to do.
The second is—again—something a human has to willingly do. If you try hard enough, you can make an AI image model get pretty close to a copyrighted image... But it's not something that is likely to occur by accident. Meaning, the human writing the prompt is the one actively seeking to violate someone's copyright. Then again, it's not really a copyright violation unless they distribute the image.
The third one seems likely to solve itself over time as more and more idiots are exposed for making very poor decisions to just "throw it at the AI" then publish that thing without checking/fixing it. Like Coca Cola's idiotic mistake last Christmas.
Goddamn, I'm stoked I'm not you.
There might be as many shit takes in this post as there are em dashes. I mean, wow.
Whenever someone bitches about em dashes I assume they haven't read books.
I do agree with them that stealing may not be the right word; isn't plagiarism more accurate? But plagiarism is generally considered theft, so it probably doesn't matter. I just found it really interesting that I personally haven't given much thought to the semantics of theft when no physical object is involved, even though it's been discussed for like centuries atp
"Stolen" images.. if its on the web its free to learn from.
Not necessarily. There are "do not crawl" directives (robots.txt rules and meta tags) that AI bots routinely bypass, burdening sites with greater server load.
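For reference, opting out looks something like this in a site's robots.txt. GPTBot (OpenAI) and CCBot (Common Crawl) are real published crawler tokens, but the file is a voluntary convention, not an enforcement mechanism, so scrapers can simply ignore it:

```text
# robots.txt at the site root; compliance is entirely voluntary
User-agent: GPTBot      # OpenAI's crawler
Disallow: /

User-agent: CCBot       # Common Crawl, widely used for AI training sets
Disallow: /
```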
Seriously, copyright doesn't just go away because it's online. The concept of "right of reproduction" is a vast and well defined area of law.
You can argue copyright law is garbage and archaic and needs to be overhauled sure, but right now "if it's on the Web it's free" only counts if you're Meta and can pay off a judge or something
That's not how intellectual property rights work.
In the US, an artistic work is automatically protected by copyright when it is created. Displaying the art publicly does not remove the artist's copyright. Only the artist/copyright owner can grant someone else rights to use the work. Again, public accessibility of the work does not degrade the copyright.
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.