A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos.
I'm not brave enough at the moment to say it isn't some kind of crime, but creating such images (as opposed to spamming them everywhere, using them for blackmail, or whatever) doesn't seem to be a crime that involves any victims.
I am sort of curious, bc I don't know: of all the types of sexual abuse that happen to children, i.e. being molested by family or acquaintances, being kidnapped by the creep in the van, being trafficked for prostitution, abuse in church, and so on, how many cases deal exclusively with producing imagery?
Next thing I'm curious about: if the internet becomes flooded with AI-generated CP images, could that potentially reduce the demand for real-life imagery? Wouldn't the demand side be met? Or is the concern normalization and induced demand? Do we know of any significant correlation between more people looking and more people actually abusing kids?
Which leads to the next part: I've played violent video games and listened to violent, aggressive music for many years now, and I enjoy them a lot, yet I've never done violence to anybody, nor would I want to. Is persecuting someone for imagining or mentally roleplaying something cruel actually a form of social abuse in itself?
Props to anybody who asks hard questions, btw, because there's guaranteed to be a lot of bullying on this topic. I'm not saying "I'm right and they're wrong," but there's a lot of nuance here, and people seem pretty quick to hand govt and police incredible powers for... I dunno... how much gain, really? You'll never get back rights that you throw away. Never. They don't make 'em anymore these days.
Isn't it better that they are AI generated than real? Pedophiles exist and won't go away, and no one can control that. So it's better they watch AI images than real ones, or worse.
Now that CSAM is generated by bigcos with deep pockets, politicians don't want to scan their servers or take any other action. These are the same demagogues who wanted to kill end-to-end encryption and scan ordinary people's devices in the name of fighting CSAM. Greedy and hypocritical vermin.
Normally I err on the side of 'art' being separated from actual pictures/recordings of abuse. It falls under the "I don't like what you have to say, but I will defend your right to say it" idea.
Photorealistic images of CP? I think that crosses the line and needs to be treated as if it were actual CP, as it essentially enables real CP to proliferate.
🤖 I'm a bot that provides automatic summaries for articles:
NEW YORK (AP) — The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.
In a written report, the U.K.-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.
In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.
What IWF analysts found were abusers sharing tips and marveling about how easy it was to turn their home computers into factories for generating sexually explicit images of children of all ages.
While the IWF’s report is meant to flag a growing problem more than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse.
Users can still access unfiltered older versions of Stable Diffusion, however, which are “overwhelmingly the software of choice ... for people creating explicit content involving children,” said David Thiel, chief technologist of the Stanford Internet Observatory, another watchdog group studying the problem.