The once-prophesied future where cheap, AI-generated trash content floods out the hard work of real humans is already here, and it's already taking over Facebook.
This is going to get soooo much more treacherous as this becomes ubiquitous and harder to detect. Apply the same pattern, but instead of wood carvings, it's an election, or a sexual misconduct trial, or a war.
Our ability to make sense of things that we don't witness personally is already in bad shape, and it's about to get significantly worse. We aren't even sure how bad it is right now.
It's already happening to some extent (I think still a small extent).
I'm reminded of this Ryan Long video making fun of people who follow wars on Twitter.
I can say the people who he's making fun of are definitely real: I've met some of them.
Their idea of figuring out a war or figuring out which side to support basically comes down to finding pictures of dead babies.
At 1:02 he specifically mentions people using AI for these images, which has definitely been cropping up here and there in Twitter discussions around Israel-Palestine.
Exactly-- They're two sides of the same coin. Being convinced by something that isn't real is one type of error, but refusing to be convinced by something that is real is just as much of an error.
Some people are going to fall for just about everything. Others are going to be so apprehensive about falling for something that they never believe anything. I'm genuinely not sure which is worse.
It's already happening. Adobe is selling AI-generated stock images, and even if they weren't, they're not hard to make.
I think the worst of it is going to be places like Facebook, where people already fall for terrible and obvious Photoshop images. They won't notice when there are mistakes, and AI keeps getting better, so there are fewer mistakes to notice (DALL-E used to be awful at hands, not so bad now). Eventually even smart folks will fall for these.
This doesn't surprise me, given how messy Facebook has become. What does disturb me is people not being able to recognize that the images are AI-generated. Now, this could be because the AI has become sophisticated enough to generate life-like images, or it could be due to people's unwillingness to question whether what they're viewing is true. Either way, this is very concerning, and if it can happen on Facebook, I'm sure it's happening on other social media sites as well.
Speaking of which, how can we stop something like this from happening on Lemmy and other federated sites?
This is just the next wave to me. I wish we had better automation around helping people find the sources of an image or detect where it could have come from. I've shown people reverse image searching a little at least; hopefully we can see it improve to handle new forms of copies like this going forward.
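The core idea behind that kind of automation is usually a perceptual hash: near-duplicate images hash to nearly identical values, so a small bit-distance suggests a laundered copy. Here's a toy "average hash" (aHash) sketch in Python. It assumes the image has already been decoded and downscaled to an 8x8 grayscale grid; real tools would use an image library (e.g. Pillow) and a more robust hash for that step.

```python
# Toy perceptual "average hash" (aHash): each pixel contributes one bit,
# 1 if it is brighter than the grid's average. Small edits flip few bits,
# so a small Hamming distance between hashes suggests the same source image.

def average_hash(pixels):
    """pixels: 8x8 list of lists of grayscale values (0-255) -> 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A fake 8x8 "image" (a gradient) and a lightly edited copy of it.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
edited = [row[:] for row in original]
edited[0][0] += 3  # a small tweak, like an AI touch-up of one detail

h1, h2 = average_hash(original), average_hash(edited)
print(hamming(h1, h2))  # tiny distance -> likely a copy of the same image
```

A site could precompute hashes of popular posts and flag new uploads whose hash lands within a few bits of a known one, which is roughly how reverse image search narrows down candidates before doing a closer comparison.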
It really depends on the image. Some AI-generated images of people really do seem indistinguishable from a photograph. Not all, but it's definitely going to become more common. At this point it seems like every month brings a new breakthrough in some aspect of AI consistency and realism. As people work to bring Stable Diffusion video to a more reliable state, the images themselves are only going to get better.
There are a number of models out there right now that don't have hand issues as much. There are other methods too, like using a pose to hide the hands entirely. We're getting to the point where we just need metadata viewers on every image, because the eye alone isn't reliable for this. How many artists have already been accused of making "AI-like" art when they were not?
Stopping it? I don't know if you can close Pandora's box. Personally, I'm of the opinion that flooding AI images is the only way to "stop" it, simply by making people not care about it. At a certain point, you can only make so many variations of Gollum as a playing card / ninja turtle / movie star before they get boring to the public. I also have doubts that the people using this are, overall, the type of people who would have paid artists for commissions, or that it would even stop a sale from happening in the first place (my partner makes mushroom forest scenes with Stable Diffusion, but she also buys about $300 of art from our friends and local artists through the year). Like, Stable Diffusion doesn't make oil paintings.
And who's using it? Little Jimmy in his room, or someone working a 60-hour week? They pose no harm making these images and it makes them happy, so why take that away from them? As for the nefarious side of it, that one is much harder, but it's partly a matter of societal shame as well. AI may make the process of creating explicit images easier, but people intent on something nefarious will find a way to do it regardless. Video sex scams and catfishing were around long before AI. I'm not certain these will inherently become more prevalent as the tech becomes more accessible. It's the people posting them. Which to me makes it more of a societal issue than a technological one.
However, in terms of trying to stop it, maybe there could be a hash database, similar to the CSAM hashes that get used here. Maybe the image uploader could look for the metadata markers that come with generated images and reject those? That's an easy workaround though, since metadata can be stripped.
I dunno. I think we've gotta EEE AI. Embrace AI images by flooding society with their presence, Extend AI images by getting the ability to use it into the hands of everyone, and then Extinguish the power that generated images hold over people because fake pictures don't matter in the slightest.
Wonder how many of the posts here are ai generated. Are we even talking to real people anymore or are we in our own ai generated bubbles engaging in simulated discourse?
Am I real or am I just an ai generated simulacrum?
Wonder.
I think the chances for the fediverse having this are generally lower. I saw a rough estimate that puts our bubble at 1.5 million. By comparison, reddit is 500 million and YouTube is 2.5bn.
Yes, we are a bunch of nerds in that 1.5m number but we also value human input and generally seem to only use practical bots - and some people don't even like those. We also have the bot account option, which should inspire more trust, though we have to trust that people actually use it.
Compare that to reddit or YouTube, which I've personally seen used as testing grounds for rolling out bot accounts. Whole subs dedicated to it. It's not that it doesn't, can't, or won't happen here; I know we have a number of repost bots from various instances. I'm just saying that I think being so small helps. When the dead Internet arrives, the fediverse will have been one of the last bastions of human interaction online.
Except for Mastodon. I see a lot of bots there compared to Lemmy/Kbin.
Especially egregious because the images are basically the same as the original photos; they just used ControlNet to alter the details. It shouldn't be that difficult to stop this kind of thing, assuming Facebook even wanted to. It looks like it's entirely possible to find the original popular post the AI posts are trying to copy, and people are already doing the volunteer work of tracking it all; there would just need to be a way to report this stuff and confirm it. I doubt Facebook wants to do this, though, since engagement is engagement.
the availability of the tools makes the potential scale of the problem pretty drastic. It may take teams of people to track this stuff down, and there's no guarantee that you'll catch all of it or that you won't have false positives (which would really piss people off)
in a culture that seems obsessed with 'free speech absolutism', I imagine the Facebook execs would need to have a solid rationale to ban 'AI generated content', especially given how hard it would be to enforce.
That said, Facebook does need to tamp this down, because engagement isn't engagement when it's taken over by AI, because AI isn't compelled to buy shit from advertising, and it can create enough noise to make ad targeting less useful for real people.
I personally think people will need to adapt to smaller, more familiar networks that they can trust, rather than trying to play whack-a-mole with AI content that continues to get better.
We're overdue for some degrowth, especially when it comes to social media.
there’s no guarantee that you’ll catch all of it or that you won’t have false positives
You wouldn't need to catch all of it; the more popular a post gets, the more likely at least one person notices it's an AI-laundered repost. As for false positives, the examples in the article are really obviously AI-adjusted copies of the original images, everything is the same except the small details, there's no mistaking that.
in a culture that seems obsessed with ‘free speech absolutism’, I imagine the Facebook execs would need to have a solid rationale to ban ‘AI generated content’, especially given how hard it would be to enforce.
Personally, I think people hate free speech now compared to how the internet used to be, and are unfortunately much more accepting of censorship. I don't think AI-generated content should be banned as a whole, just this sort of AI-powered hoax. Who would complain about that?
I mean take the average media literacy of the relatively younger folks on Reddit (atrocious) and then realize that those are incredibly tech savvy compared to FB's audience.
Several AI posts have made the rounds on Reddit, but now that Reddit is 90% bots reposting and upvoting content by other bots, I don't think anyone cares much.
I'm reasonably sure half the stories on subreddits like r/amitheasshole are written by ChatGPT at this point. Doesn't matter, of course; drama is drama, whether it comes from a stranger on the other side of the world or is generated by a machine.
This is less of an issue if you judge everything that isn't first hand from a known friend or family member as suspect or at least just a waste of time. Facebook used to be a place to talk to people you knew in the real world. You could ignore anything they reposted and still engage with the actual examples of their own experiences that they posted. But now it's so flooded with ads and listicles and clickbait and video clips that it's not even worth trying to keep up with the people you actually know.
It's really disgusting. It was a great platform for keeping in touch with long-distance friends and family. If you kept your friends list trimmed to people you know, it was actually a really fun platform. Now it's all the worst parts of corporate internet glommed together on a single site. I see maybe 1-2 of my actual friends' posts, and the rest is absolute crap. Hundreds of billions of dollars wasn't enough for Zuck? Nope! He just had to squeeze every last cent out of the site, even if it meant burning it to the ground. It's not even worth visiting anymore. I was still visiting to see my memories, but now he's slowly breaking that functionality too. Congratulations Facebook, you're awful.
I still need Facebook unfortunately to keep in touch with some friends using messenger. Also marketplace is usually the most active classifieds where I am.
AI generated content isn't stealing. That being said, Facebook is literally only reposts, there is practically zero original content. The AI generated stuff is amongst the few things that isn't technically stolen.
I dunno dude, taking an image-to-image generation with 90% strength to just change a few details to make it look like your work sure sounds like stealing to me
It's stealing. Training is theft. It is NOT like "a person looking at art in a museum and gaining inspiration". AI has no inspiration or creativity. It's an image autocomplete algorithm using millions of other people's images as bases to combine and smooth out. That's all it does. If I took a bunch of Monet paintings and created some brushes in Photoshop and used them to create a new work, those brushes would still be theft. At best, it'd be a collage piece I'd have to credit Monet for.
I know you're wanting to argue your point, so I'm only going to say one thing. Yes, it is sometimes stealing. Not all of the time, but some of the time. If you use a live artist as a prompt for selling and that artist isn't getting paid (like musicians now do with sampling), then yes it's stealing. You're not only stealing their work, but you're also stealing their business.
AI art is stealing though. Artists are afraid to post their art online for fear of having their work used in a machine-learning model by some tech guys who have never produced anything artistic in their lives.
Why is it that the people who decided to devote their life to filling the world with art are the most angry about custom art being abundant and free?
There are already open models that understand almost any prompt. Whether someone uploads an amateur piece to their DeviantArt isn't going to change anything.
I have a few relatives who seem incapable of understanding that miraculously high-definition photos from the 1800s containing never-before-seen imagery of lumberjacks posing with 12ft. tall sasquatches could possibly be inauthentic.
I'm doing a fun art project now, and after the conversations I've been having in this thread about AI, I feel totally okay with selling these logos as is. I put in inspiration prompts and "in the style of", so I'm sure I'm good.