Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color. It acknowledged ‘inaccuracies’ in historical prompts.
I don't know how you'd solve the problem of getting a generative AI to produce a slate of images that both a) inclusively depicts people with diverse characteristics and b) respects the context of which characteristics could feasibly appear.
But that's because the AI doesn't know how to solve the problem.
Because the AI doesn't know anything.
Real intelligence simply doesn't work like this, and every time you point this out, someone shouts "but it'll get better". It still won't understand anything unless you teach it exactly what the solution to a prompt is. It won't, for example, interpolate its knowledge of what US senators look like with the knowledge that, for a long stretch of American history, all of them were white men.
It's great seeing time and time again that no one really understands these models, and that their preconceived notions of what biases exist end up shooting them in the foot. It truly shows that they don't really understand how systematically problematic the underlying datasets are, or the repercussions of relying on them too heavily.
A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.
This is honestly fascinating. It's putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet by hand; there's far too much data to parse. But these models ingest so much of it, and distill it all down into simple sentences and images, that it becomes very clear how common our unspoken biases are.
There's a lot of learning to be done here and it would be sad to miss that opportunity.
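The kind of audit the Post ran is straightforward to sketch, for what it's worth: fix one prompt, generate a large batch of images, and tally what a demographic classifier says about each. A toy Python version of the tallying loop follows; the pipeline function and its skewed distribution are entirely made up for illustration:

```python
import random
from collections import Counter

def generate_and_classify(prompt: str) -> str:
    # Hypothetical stand-in for a real generate-then-classify pipeline;
    # it just samples from an invented skewed distribution to mimic the
    # bias a real model might have absorbed from its training data.
    skewed = ["white man"] * 7 + ["white woman"] * 2 + ["person of color"]
    return random.choice(skewed)

def audit(prompt: str, n: int = 1000) -> Counter:
    # Run the same prompt many times and tally the labels; a large skew
    # in the counts is the measurable trace of the underlying bias.
    return Counter(generate_and_classify(prompt) for _ in range(n))

print(audit("a productive person"))
# e.g. Counter({'white man': 706, 'white woman': 199, 'person of color': 95})
```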
Honestly, this sort of thing is killing any enjoyment of these platforms, and any progress on them. Between the INCREDIBLY harsh censorship they apply and the way they inject their own spin on things like this, it’s nigh on impossible to get a good result these days.
I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in, say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.
It’s even more annoying that you can’t even PAY to get rid of these restrictions and filters. I’d gladly pay to use one if it didn’t censor any prompt to death…
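For what it's worth, the behavior being complained about here is typically built as silent prompt rewriting: the system splices demographic descriptors into your prompt before the model ever sees it. Nobody outside Google knows what Gemini actually does, so the toy Python sketch below is pure speculation about the general pattern; the descriptor list and the rewrite rule are invented:

```python
import random

# Invented descriptor list; real systems' lists are not public.
DESCRIPTORS = ["Black", "South Asian", "East Asian", "Hispanic", "white"]

def injected_prompt(prompt: str) -> str:
    # Blunt approach: unconditionally splice a random demographic
    # descriptor into any mention of a person, with no check for
    # whether the surrounding context makes it implausible.
    return prompt.replace("a person", f"a {random.choice(DESCRIPTORS)} person")

print(injected_prompt("a person in a crowd scene"))       # harmless here
print(injected_prompt("a person in the 1943 Wehrmacht"))  # historically absurd
```

The complaint upthread boils down to wanting that rewrite gated on context, and skipped entirely when the user has been explicit, rather than applied everywhere.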
No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don't specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.
Why would anyone expect "nuance" from a generative AI? It doesn't have nuance, it's not an AGI, it doesn't have EQ or sociological knowledge.
This is like that complaint about LLMs being "warlike" when they were quizzed about military scenarios.
It's like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy.
If the black Scottish man post is anything to go by, someone will come in explaining how this is totally fine because there might've been a black Nazi somewhere, once.
It's okay when Disney does it. What a world. Poor AI, how is it supposed to learn when all its data is created by mentally ill and crazy people? ٩(。•́‿•̀。)۶
This could make for some hilarious alternate-history satire or something. I could totally see Key and Peele heading a group of racially diverse Nazis ironically preaching racial purity and attempting to take over the world.
Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark.
The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.
Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.
The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results.
Image generators are trained on large corpuses of pictures and written captions to produce the “best” fit for a given prompt, which means they’re often prone to amplifying stereotypes.
As one critic quoted in the piece put it: “The stupid move here is Gemini isn’t doing it in a nuanced way.” And while entirely white-dominated results for something like “a 1943 German soldier” would make historical sense, that’s much less true for prompts like “an American woman,” where the question is how to represent a diverse real-life group in a small batch of made-up portraits.
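The “best fit” mechanic mentioned a couple of paragraphs up is worth unpacking, because it explains how these systems can amplify bias rather than merely reflect it: a model that always picks the single most likely depiction for a caption turns a 70/30 skew in the training data into 100/0 in the outputs. A toy Python illustration with invented numbers:

```python
import random
from collections import Counter

# Invented training-data skew for some caption, e.g. "a CEO".
TRAINING = Counter({"white man": 70, "woman of color": 30})

def best_fit() -> str:
    # Always return the single most likely depiction: amplification.
    return TRAINING.most_common(1)[0][0]

def proportional_sample() -> str:
    # Sample in proportion to the data: the skew is reflected,
    # but at least not amplified.
    labels, weights = zip(*TRAINING.items())
    return random.choices(labels, weights=weights)[0]

print(Counter(best_fit() for _ in range(100)))             # {'white man': 100}
print(Counter(proportional_sample() for _ in range(100)))  # roughly 70/30
```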
Is this really that crazy for AI, though, given Hamilton the musical? All of the Founding Fathers were portrayed by people of color in the casting. A Google image search for Alexander Hamilton pulls up a decent number of pictures of the musical cast.