Is there a similar tool that will "poison" my personal tracked data? Like, I know I'm going to be tracked and have a profile built on me by nearly everywhere online. Is there a tool that I can use to muddy that profile so it doesn't know if I'm a trans Brazilian pet store owner, a Nigerian bowling alley systems engineer, or a Beverly Hills sanitation worker who moonlights as a practice subject for budding proctologists?
These "AI models" (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual "image maker" (U-Net).
A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed version of the image (the latent representation), which means it needs a less powerful computer (and uses less energy). That compression is what makes it possible to run Stable Diffusion at home.
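For the curious, here's roughly what that compression looks like in code. This is a minimal sketch using the diffusers library and one of the publicly released Stable Diffusion VAEs; the model name and the cat.png filename are just placeholders for whatever you'd actually use:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

# One of the openly published Stable Diffusion VAEs (any SD 1.x VAE works the same way).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

# Load a 512x512 RGB image and scale pixel values to [-1, 1], shape (1, 3, 512, 512).
img = Image.open("cat.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0
x = x.permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():
    latent = vae.encode(x).latent_dist.mean  # (1, 4, 64, 64): ~48x less data
    recon = vae.decode(latent).sample        # decoded back to (1, 3, 512, 512)
```

The U-Net only ever sees that 4x64x64 latent, never the full-resolution image.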
This attack targets the VAE. The image is altered so that its latent representation is that of a very different image, while still looking roughly the same to humans. Say you take an image of a cat and an image of a dog. You put both through the VAE to get their latent representations. Now you alter the image of the cat until its latent representation is similar to the dog's. You alter it only in small ways, and use methods to check that it still looks similar to humans. So what the actual image-maker AI "sees" is very different from the image the human sees.
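To make that concrete, here's the general shape of such an attack in PyTorch. To be clear, Nightshade's actual method isn't public; this is just a naive sketch of the idea described above, and the step count, learning rate, and perturbation budget are made-up values:

```python
import torch
import torch.nn.functional as F

def poison(vae, cat, dog, steps=200, lr=0.01, budget=0.03):
    """Nudge `cat` so the VAE encodes it like `dog` (both (1, 3, H, W) tensors in [-1, 1])."""
    vae.requires_grad_(False)                           # we only optimize the perturbation
    with torch.no_grad():
        target = vae.encode(dog).latent_dist.mean       # the latent we want to imitate
    delta = torch.zeros_like(cat, requires_grad=True)   # the small perturbation we learn
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        latent = vae.encode(cat + delta).latent_dist.mean
        loss = F.mse_loss(latent, target)               # "look like the dog" to the VAE
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)               # "still look like the cat" to humans
    return (cat + delta).detach()
```

Real tools replace that crude clamp with proper perceptual similarity checks, but the core loop (push the latent toward the target, keep the pixels close to the original) is the same idea.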
Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.
I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g. the method to check that images still look similar to humans).
It doesn't seem to be a very effective attack, but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give one away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists who threaten digital vandalism, you may be deterred. Well, my two cents.
Apparently people who specialize in AI/ML have a very hard time trying to replicate the desired results when training models with 'poisoned' data. Is that true?
It's not FOSS, and I don't see a way to verify whether what they claim is actually true.
It may actually be a way to help differentiate legitimate human-made work from machine-generated work, thus helping AI training models.
Can't demonstrate that either, because its license expressly forbids adapting the software to other uses:
"Edit, alter, modify, adapt, translate or otherwise change the whole or any part of the Software nor permit the whole or any part of the Software to be combined with or become incorporated in any other software, nor decompile, disassemble or reverse engineer the Software or attempt to do any such things"
Is anyone else excited to see poisoned AI artwork? This might be the element that makes it weird enough.
Also, re: the guy lol'ing that someone says this is illegal: it might be. Is it wrong? Absolutely not. Does the woefully broad Computer Fraud and Abuse Act contain language that this might violate? It depends. The CFAA has two requirements for something to be in violation of it:
1. The act in question affects a government computer, a financial institution's computer, OR a computer "which is used in or affecting interstate or foreign commerce or communication" (that last one is the biggie, because it means that almost 100% of internet activity falls under its auspices).
2. The act "knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer" (with "protected computer" being defined in 1).
The poisoned artwork is information created with the intent of having it transmitted to computers across state or international borders and damaging those computers. Using this technique to protect what's yours might therefore be a felony in the US, and because it would be considered intentionally damaging a protected computer by the knowing transmission of information designed to cause damage, you could face up to 10 years in prison for it. Which is fun, because the people stealing from you face absolutely no retribution at all for their theft; they don't even have to give you some of the money they use your art to make. But if you try to stop them, you go to prison for a decade.
The CFAA is the same law Reddit co-founder Aaron Swartz was prosecuted under. His crime was downloading things from JSTOR that he had a right to download as an account holder, just more quickly than they felt he should have. He had been charged with 13 felonies and faced 50 years and over a million dollars in fines, alongside a lifetime ban from ever using an internet-connected computer again, when he died by suicide. The charges were then dropped.
They claim a credit for using AI to make the thumbnail... The same people who did nothing more than ask ChatGPT to make a picture to represent an article about a tool that poisons AI models, a tool meant to protect people who make pictures for a living from having ChatGPT use their work to make, say, a picture to represent an article about a tool that poisons AI models...
Won't this thing actually help the AI models in the long run? The biggest issue I've heard is the possibility of AI generated images getting into the training dataset, but "poisoned" artworks are basically guaranteed to be of human origin.
As an artist, nightshade is not something I will ever use. All my art is public domain, including AI. Let people generate as many pigeon pictures as they want I say!
I like the idea, but Nightshade and Glaze have some pretty high-end graphics requirements. Sadly, I have an Nvidia GTX 1660, which apparently has issues with PyTorch. 😢
Sorry if this is a stupid question, but can this be used for profile pictures on social media too? That way, if your profile picture is scraped by some bot, it will just poison the dataset instead?