Google issued an apology, or something close to it, following another embarrassing AI misstep this week. The blunder involved the image-generating model behind Gemini, which injected diversity into pictures without regard for historical context, producing laughable results.
Gemini, Google’s flagship conversational AI platform, uses the Imagen 2 model to generate images on request. Users recently discovered, however, that asking for images of certain historical scenarios or figures produced nonsensical results. The Founding Fathers, for example, historically a group of white slave owners, were rendered as a multicultural group that included people of color.
This oversight quickly became fodder for online commentators and was dragged into the ongoing debate over diversity, equity, and inclusion in the tech sector. Critics held it up as evidence that Google had succumbed to a “woke mind virus” and become an ideological echo chamber.
Google attributed the issue to a workaround for systemic bias in its training data. Left to its own devices, the model defaults to whatever representations are most common in that data, which, given the images available, tends to mean an over-representation of white people.
At the same time, Google argued that diversity in generated images matters for its global user base: when a request doesn’t specify characteristics such as ethnicity, showing a range of people is usually the right default.
The problem was that the instructions nudging the model toward diverse outputs carried no exception for cases where historical accuracy mattered. The model had no way to distinguish a generic request, where diversity is appropriate, from one tied to a specific time, place, or group of people.
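To make that failure mode concrete, here is a purely illustrative sketch, not Google’s actual pipeline: the function name, the keyword list, and the instruction text are all invented. It shows how a blanket rule that appends a diversity instruction to every image prompt produces ahistorical results, and how even a crude context check would leave historically specific prompts alone.

```python
# Illustrative only -- NOT Google's code. All names and strings are hypothetical.

HISTORICAL_MARKERS = {"founding fathers", "1800s", "medieval", "roman legion"}

def augment_prompt(user_prompt: str, *, context_aware: bool) -> str:
    """Append a diversity instruction to an image prompt.

    A blanket rule (context_aware=False) rewrites every prompt, including
    historically specific ones; a context-aware rule skips prompts that
    clearly refer to a particular time, place, or group of people.
    """
    diversify = "Depict a diverse range of ethnicities and genders."
    if context_aware and any(m in user_prompt.lower() for m in HISTORICAL_MARKERS):
        # Historical accuracy takes priority: leave the prompt untouched.
        return user_prompt
    return f"{user_prompt} {diversify}"

# The blanket rule is what yields ahistorical images:
print(augment_prompt("a portrait of the Founding Fathers", context_aware=False))
# -> "a portrait of the Founding Fathers Depict a diverse range of ethnicities and genders."
print(augment_prompt("a portrait of the Founding Fathers", context_aware=True))
# -> "a portrait of the Founding Fathers"
```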
Google SVP Prabhakar Raghavan admitted that the model had become overcautious and was incorrectly refusing certain prompts, which led to the embarrassing and inaccurate results.
While Google stopped short of a full apology, Raghavan’s framing raises questions about accountability. Attributing the errors to the model is convenient, but the model did not build or train itself; responsibility ultimately lies with the developers who created it.
Mistakes in AI models are inevitable, but holding the people who build them responsible is what keeps AI development transparent and accountable.