Google issued an apology and will pause the image generation feature of its artificial intelligence model Gemini after it refused to show images of White people.
So what? It means they overtrained, deployed, and had to choose between reverting to a model with known issues or training a new one. They probably tried a temporary fix with a LoRA and it failed, so now they have to wait for the next big version to finish training, and those runs can take weeks even on massive data-center-class hardware.
People here don't seem to have any fundamental understanding of AI. It is all static tensor math. There is no persistence or learning inside the model; any illusion of persistence comes from the loader code that turns your text into tokens the math can operate on. That is just standard code.
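As a rough sketch of that split (using GPT-2 and the Hugging Face libraries as a stand-in, since Gemini's actual stack isn't public): the tokenizer is ordinary code, and the model is a frozen pile of tensors that never changes at inference time.

```python
# The "loader" is plain code mapping text to token IDs; the model itself is
# a fixed set of tensors that is never updated while you use it.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # text -> integer token IDs
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()                                             # inference only, no learning

ids = tokenizer("a photo of a chessboard", return_tensors="pt")

with torch.no_grad():                                    # gradients off: weights stay frozen
    out = model(**ids)

print(out.logits.shape)                                  # just static tensor math over the tokens
```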
There is no fundamental difference between an offline AI and a proprietary one like Gemini; the main difference is that one's loader code data-mines you while the other's does not. Training has a sweet spot: if too much John Oliver is added, everything will generate as John Oliver, like absolutely everything.
No, the problem is that they filter prompts and inject new parameters into prompts specifically to avoid creating white subjects. It's so bad that, when asked to generate a chessboard, Gemini would only make one with black pieces.
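Nobody outside Google knows exactly what that layer looks like, but the accusation amounts to a pre-processing step along these lines (the trigger words and injected terms below are entirely made up for illustration):

```python
# Hypothetical prompt-rewriting layer sitting in front of an image model.
# None of these rules come from Gemini; they just show the mechanism being
# described: the user's text is edited before the model ever sees it.
import re

INJECTED_TERMS = "diverse, varied ethnicities"            # invented injection string

def rewrite_prompt(user_prompt: str) -> str:
    prompt = user_prompt
    # crude keyword trigger: any mention of people gets extra terms appended
    if re.search(r"\b(person|people|man|woman|family)\b", prompt, re.IGNORECASE):
        prompt = f"{prompt}, {INJECTED_TERMS}"
    return prompt

print(rewrite_prompt("a white family having a picnic"))
# -> "a white family having a picnic, diverse, varied ethnicities"
```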
That would not have caused them to go offline. Modifying a hash table takes zero downtime, and likewise bolting on a LoRA layer takes no downtime. The only reason to go completely offline is that they need to filter the base dataset and retrain from scratch. It means the error is so intertwined across so many neural layers that a simple extra filter layer cannot address it.
The neural network is like a giant multidimensional cloud, like a cloud in 3D but with far more than three dimensions, and everything in that cloud is a vector relationship. If there is some easily traversed path that the neural connections gravitate toward, a simple modification, like a slice across that cloud, can alter that path ever so slightly to make it less easily traversed. That is roughly what a LoRA is: a small correction tacked onto the model's math.
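For reference, a LoRA adapter really is just a small low-rank correction added on top of a frozen weight matrix. A bare-bones PyTorch version of the general recipe (not Google's code) looks like this:

```python
# Bare-bones LoRA: the base layer stays frozen; a small low-rank correction
# B @ A is added on top, and only A and B would be trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                       # original weights untouched
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path plus the low-rank "slice" across the weight space
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
print(layer(torch.randn(1, 512)).shape)                   # torch.Size([1, 512])
```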
However, if the undesirable behavior is due to something like all roads leading to the center of a giant metropolis, no slice across that cloud can subtly alter all of the neural paths without impacting adjacent data. It is all approximated floating-point math where every concept and generation parameter is interrelated. Things like bunny rabbit and Playboy playmate are stored in the same tables; if you try to make all bunny rabbits black, you are also altering all playmates, simply because there is a minor relationship between these concepts and they therefore share a vector space inside the same tensor tables. There is a very big difference between how the initial table values are created across all layers and how a modified layer works. When things go really bad, the only option is to retrain the whole thing from scratch.
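A toy numerical version of that entanglement (the vectors below are invented, nothing from a real model): a rank-1 edit aimed at one concept also moves every other concept that shares the same direction in the space.

```python
# Toy illustration of entangled concepts: damping the direction one concept
# lives in also shifts any other concept that overlaps with that direction.
import torch

shared   = torch.tensor([1.0, 1.0, 0.0])
rabbit   = shared + torch.tensor([0.1, 0.0, 0.2])
playmate = shared + torch.tensor([0.0, 0.1, -0.2])

W = torch.eye(3)                                    # stand-in weight matrix
u = shared / shared.norm()
W_edited = W - 0.5 * torch.outer(u, u)              # "edit" aimed at the rabbit direction

print((W_edited @ rabbit - W @ rabbit).norm())      # the change you wanted
print((W_edited @ playmate - W @ playmate).norm())  # collateral change, also nonzero
```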
I think the interesting thing about this is that these LLMs are essentially like children: they don't have the benefit of years and years of social training to learn our complex set of unspoken rules and exceptions.
Race consciousness is such an ever-present element of our social interactions, and many of us have been habituated not to really notice it. So it's totally understandable to me that LLMs reproduce our highly contradictory set of rules imperfectly.
To be honest, I think that if we can set aside our tendency to understandably avoid these discussions because they're usually instigated by racist trolls, there's some weird and often unexamined social tendencies we can interrogate.
I think it's helpful to remind ourselves frequently that race is real like gender, but not like sex. Race exists because when people encountered new cultures, they invented a pseudoscience to create the concept of whiteness.
Whiteness makes no sense. Who is white is highly subjective, and it has always been associated with the dominant mainstream culture, of which whiteness claims ownership. This means you either buy into the racist falsehood that white culture is interchangeable with the default culture, or accept that it has no culture at all. Whiteness really exists only in opposition to perceived racial inferiority. Fundamentally, that's all "white" means. It's a weird, anachronistic euphemism for "not racially inferior".
There are plenty of issues with our racial constructions of Blackness and of being Asian, East Asian, Desi, Indigenous, or Latin, but none are quite as fucked up, imo, as the fact that we as a culture keep trying to use the concept of "Whiteness" as a non-racist construction. In my thinking, it can be a useful tool for studying the past and for studying an unhealthy set of attitudes we're still learning to unlearn. But it's not possible to reform the concept, because it's fundamentally constructed upon beliefs we're trying to discard. If you replace every use of "white" with "not one of the lesser races", then I think you get a better understanding of why it's never going to stop causing problems as long as we try to use it in a non-racist way.
Today, people who were told growing up to view themselves as "white" now feel a frankly understandable sense of grievance and cultural alienation, because we've begun acting more consistently and recognizing that there's really no benign version of white pride, but we never bothered to teach people to stop thinking of anyone as "white", or taught the people who identify as white to find pride in an actual culture. Midwestern is a culture. Irish is a culture. New Englander is a culture. White has never been a culture. But if we don't ever acknowledge that the entire concept's only value is as a tool to understand racism, it's inevitable that a computer repeating our own attitudes back to us is going to look dumb, inconsistent, and racially biased either for or against white people.
I think the interesting thing about this is that these LLMs are essentially like children
Naw, dog. LLMs are nothing like children. A child has an inaccurate model of the world in their head. I can explain things to them and they'll update their beliefs and understandings.
I think this presentation -- which at 10 months old is already quite dated! -- does a good job examining these questions in a credible and credulous manner:
Sparks of AGI: Early Experiments with GPT-4 (presentation) (text)
I fully recognize that there is a great deal of pseudomystical chicanery that a lot of people are applying to LLMs' ability to perform cognition. But I think there is also a great deal of pseudomystical chicanery underlying the mainstream attitudes towards human cognition.
People point to these and say, 'They're not thinking! They're just making up words, and they're good enough at relating words to symbolic concepts that they credibly imitate understanding concepts! It's just a trick.' And I wonder: why are they so sure that we're not just doing the same trick?
Here's an idea: what if the intent of the prompt had nothing to do with race, and it was just prompting a simple artistic choice, no different than prompting hair, shirt, or sky colour?
Whiteness makes no sense. Who is white is highly subjective.
Skin tone can be measured pretty objectively. We have colour standards for describing and reproducing colours with a degree of accuracy that is sufficient for practical purposes. The label "white" itself is quite non-specific. But the entire point of the AI is to fill in the blanks anyway, to generate content from non-specific prompts. I don't agree that trainers can't generate some consensus about the typical colour values for "white" skin tone. "I know it when I see it."
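As a sketch of what "measured pretty objectively" can mean in practice: converting an sRGB pixel into CIELAB, the standard perceptual colour space, is plain deterministic math, and distances in that space are how colour standards compare tones. The sample value below is invented for illustration, not a standard for any skin tone.

```python
# sRGB -> CIELAB (D65), the usual route for comparing colours "objectively".
# The sample pixel below is made up for illustration only.
def srgb_to_lab(r, g, b):
    def lin(c):                                   # undo sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> CIE XYZ (D65 white point)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

print(srgb_to_lab(233, 205, 183))                 # one light skin-ish tone as L*, a*, b*
```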
Society has an absurd and unhealthy obsession with race and all that baggage.
When someone asks to see a "white family", they are not asking for a family with skin of a certain shade. They're asking for an image in which our pattern recognition identifies, in their clothes, posture, hair style, and facial features, that they look like people who could appear in a soap ad in the 1950s. That they look like people who feel totally welcome in their society. They live a certain lifestyle. Simply changing skin color is exactly what misses the point. Koreans look pretty white in skin color, but they have other facial features that communicate that their parents or ancestors farther back left the land of their birth and traveled to the US, likely after 1900. Additionally, based on their dress, some people might look at an image of a family with a Korean dad and say, 'Great, that's a white family', while others would say, 'Why did the model generate this? I asked for a white family.'
There's a world of context that our current racial terminology can't capture because it's not suited to our modern understanding of culture.
The guy who leads this group is extremely vocal (almost weirdly so) about white privilege and systemic racism. He is also white. It's true that many AI models have white-bias. The reasons for this are multi-faceted. Our datasets are grossly imbalanced against racial minorities. I also think I understand that for some darker-skinned races, it is more difficult for the model to extract relevant features from the shitty Flickr photos they scrape for these models.
That said, injecting words into the user's prompt to force the model to generate minorities more often is an extremely naive approach. Kind of like if Google added "reddit" to all searches just because it worked for some specific test cases, ignoring that you'd now never get results from any site except reddit. Probably the solution here looks like paying a lot of money for high-quality datasets, as well as investing in user education and more explainability for these tools.
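On the data side, the usual first step (not necessarily what's meant here, just a common technique) is to measure how skewed the training set actually is and rebalance sampling, rather than rewriting user prompts after the fact. The labels and counts below are invented for illustration.

```python
# Measure class imbalance in a (made-up) dataset and derive inverse-frequency
# sampling weights, a standard alternative to patching prompts at inference time.
from collections import Counter

labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20   # skewed toy dataset
counts = Counter(labels)

total = len(labels)
weights = {k: total / (len(counts) * v) for k, v in counts.items()}

print(counts)    # Counter({'group_a': 900, 'group_b': 80, 'group_c': 20})
print(weights)   # rarer groups get sampled proportionally more often
```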