Not as bad as the AI-generated articles showing up in search results. Some websites I get driven to make absolutely no sense, despite a lot of words being written about all kinds of topics.
I'm looking forward to the day when "certified human content" is a thing, and that's all search engines allow you to see.
I mean, they would have started appearing in there from the first moment that someone created one and hosted it somewhere, no? So it's already been a thing for a couple years now, I believe.
Why would they not? There’s no way for such a system to know it’s AI generated unless there’s some metadata that makes it obvious. And even if it was, who’s to say the user wouldn’t want to see them in the results?
This is a nothing issue. It's not like this is being generated in response to a search; it's something that already existed being returned as a result because there is presumably something that links it to the search.
It's time to start talking about "memetic effluent." In the same way corporations polluted our physical world, they're polluting our memetic world. AI spewing garbage data is just the most obvious example, but corporations have been toxifying our memetic space for generations.
This memetic effluent will make sorting through data harder and harder over the years. The oil and tobacco industries undermined science and democracy for decades with their own memetic effluent in order to protect their businesses. Advertising is its own effluent that distorts and destroys language. Jerry Rubin said it in 1970:
"How can I tell you 'I love you' after hearing 'cars love shell?'"
While physical effluent destroys our physical environment, making living in the world harder, memetic effluent destroys meaning and makes thinking about and comprehending the world harder. Both are the garbage side effects of the perpetuation of capitalism.
This example of poisoning the data well is just too obvious to ignore, but there are so many others.
I wonder what would happen in the future as AIs get trained on AI-generated images scraped from the internet. Would the generated images start to degrade, or would some kind of distinct style pop out?
Just wanted to point out that the Pinterest examples are conflating two distinct issues: low-quality results polluting our searches (in that they are visibly AI-generated), and images that are not "true" but very convincing.
The first one (search results quality) should theoretically be Google's main job, except that they've never been great at it with images. Better quality results should get closer to the top as the algorithm and some manual editing do their job; crappy images (including bad AI ones) should move towards the bottom.
The latter issue ("reality" of the result) is the one I find more concerning. As AI-generated results get better and harder to tell from reality, how would we know that the search results for anything aren't a convincing spoof just coughed up by an AI? But I'm not sure this is a search-engine or even an internet-specific issue. The internet is clearly more efficient at spreading information quickly, but any video seen on TV or image quoted in a scientific article has to be viewed much more skeptically now.
Why bother looking anything up when you can just make it up fresh right now? I figured this might start happening once I heard of people using ChatGPT as a search replacement (tell me lies, pretty pretty lies).