WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’
Systemic prejudices showing up in datasets causing generative systems to spew biased output? Gasp... say it isn't so?
I’m not sure why this is surprising anymore. This is literally expected behavior unless we get our shit together and get a grip on these systemic problems. The rest of it all is just patch work and bandages.
I'd like to point out that not everything generative is a subset of all the ML stuff. So prejudices in datasets do not affect everything generative.
That's off topic, but I'm just playing with generative music now. I started with SuperCollider, but it was too hard (maybe not anymore, to be fair; recycling a phrase, for example, would probably be much easier and faster there than in my spaghetti shell script), so now I just generate ABC notation, convert it to MIDI with various instruments, and play it through FluidSynth.
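For anyone curious what that kind of pipeline looks like, here's a rough Python sketch of the same idea (not the commenter's actual script). It assumes abc2midi (from the abcMIDI tools) and the fluidsynth CLI are installed; the scale, tune structure, and soundfont path are placeholders.

```python
# Minimal generative-ABC sketch: pick random notes from a scale,
# write an ABC tune, then convert and render it with external tools.
import random
import subprocess

SCALE = ["C", "D", "E", "G", "A", "c", "d", "e"]  # rough C pentatonic

def random_bar(notes_per_bar=4):
    # Each note gets length 2 (quarter notes under L:1/8), so 4 notes fill a 4/4 bar.
    return " ".join(random.choice(SCALE) + "2" for _ in range(notes_per_bar))

def make_tune(bars=8):
    body = " | ".join(random_bar() for _ in range(bars)) + " |]"
    return "\n".join([
        "X:1",
        "T:Generated tune",
        "M:4/4",
        "L:1/8",
        "Q:1/4=120",
        "K:C",
        body,
    ])

with open("tune.abc", "w") as f:
    f.write(make_tune())

# abc2midi converts the ABC file to MIDI; fluidsynth renders the MIDI to WAV
# using whatever soundfont you point it at (the path below is a placeholder).
subprocess.run(["abc2midi", "tune.abc", "-o", "tune.mid"], check=True)
subprocess.run(["fluidsynth", "-ni", "/path/to/soundfont.sf2",
                "tune.mid", "-F", "tune.wav", "-r", "44100"], check=True)
```

The "various instruments" part can be done by adding %%MIDI program directives in the ABC before converting, so each voice gets a different General MIDI instrument.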
This isn’t anything they actively did, though. The literal point of AI is that it learns on its own and comes up with its own responses absent human interaction. Meta very likely added code specifically to try to prevent this, but it fell short of overcoming the bias in the overwhelming majority of the content, which led the model to associate Hamas with Palestine.
It's not about "adding code" or any other bullshit.
AI today is trained on datasets (that's about it). The choice of datasets can be complicated, but that's where you moderate and select. There is no "AI learns on its own" sci-fi dream going on.
It's up to them to moderate the content generated by their app.
And yes, it's almost impossible to make a completely safe AI, so this will be an issue for all generative AIs like that. It's still their implementation, and the content is generated by their code.
Also, I highly doubt they had specific code to prevent that kind of depiction of Palestinian kids.
Even if they did, someone will come up with a prompt injection that overrides the code in question, and the AI will again display biased or racist stuff.
An AI generating racist stuff is absolutely not more acceptable because it got inspired by real racist people...
I forget if it was on here or Reddit, but I remember seeing an article a week or so ago where the translation feature on Facebook ended up calling Palestinians terrorists "accidentally". I cited the fact that Mark is Jewish, and probably so are a lot of the people that work there. The US is also largely pro-Israel, so it was probably less of an accidental bug and more of an intentional "fuck Palestine". I got downvoted to hell and called a conspiracy theorist. I think this confirms I had the right idea.
In response to a prompt for “Israel army” the AI created drawings of soldiers smiling and praying, no guns involved.
As the Israeli bombardment of Gaza continues, users say Meta is enforcing its moderation policies in a biased way, a practice they say amounts to censorship.
Kevin McAlister, a Meta spokesperson, said the company was aware of the issue and addressing it: “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems.”
In response to the Guardian’s reporting on the AI-generated stickers, the Australian senator Mehreen Faruqi, deputy leader of the Greens party, called on the country’s e-safety commissioner to investigate “the racist and Islamophobic imagery being produced by Meta”.
“The AI imagery of Palestinian children being depicted with guns on WhatsApp is a terrifying insight into the racist and Islamophobic criteria being fed into the algorithm,” Faruqi said in an emailed statement.
A September 2022 study commissioned by the company found that Facebook and Instagram’s content policies during Israeli attacks on the Gaza strip in May 2021 violated Palestinian human rights.
The original article contains 788 words, the summary contains 184 words. Saved 77%. I'm a bot and I'm open source!
Israel perpetrates atrocities against Palestinians, creating terrorists, which Israel uses as an excuse to continue stealing land and killing Palestinians, creating more terrorists, which Israel... a few decades of that now. Would be quite happy to end that ancient war with nukes, and put up a monument on the green glowing glass - "This is what religion gets you"
The Israel-Hamas war has really shaken all of us. That's why I've started this GoFundMe. For $2 a month, you can support me in my grief over the conflict. The premium platinum diamond tier for $20 a month will get you access to exclusive photos of me looking pensive and sad about the whole thing.
The children of Gaza are indoctrinated in school and taught how to use guns by their terrorist leaders. There is a ton of footage showing this. It's not even debated, is it?
AI is just using the information available to respond to the request. Facts are facts, as tragic as they might be.
It doesn't surprise me to read this, but it does surprise me you'd write it without citing sources. Got any you can share to help others educate themselves?