Opinion Piece: Google’s Gemini AI Fiasco

Cade Alcock, ELC Member

 

Artificial intelligence is sweeping the investing community, academia, and the consumer space alike with the promise of a fundamentally paradigm-shifting technology useful in a mind-boggling array of cases. Microsoft, Meta, and other mega-cap tech companies are finding ways to innovate and trailblaze with the novel invention faster than ever before. Google, however, has been far more conservative in its approach to AI despite owning access to perhaps the largest assembly of training data in history. In the rollout of its now thrice-rebranded AI tool Gemini, a very curious pattern emerged: no matter how explicit the prompt, Gemini refused to generate images of white people. George Washington was displayed as every race, ethnicity, color, and creed imaginable, except for, well, what he actually was. Many in the media and within the company argued that this pattern was merely a bug, and that the failure to produce accurate representations of historical figures was the kind of anomaly characteristic of any new technology. This could not be further from the truth. Gemini’s failure to produce anything other than overtly wrong and not-so-subtly “diverse” images was not a flaw, but a feature.

 

Google and other tech companies often govern their technologies and companies with dogmatic rules straight out of a college anthropology course. Observe several of Google’s AI governing principles. The first is that AI should be “socially beneficial” (whatever that means). How exactly is an image generator that refuses to produce images of white people, and more importantly refuses to produce accurate images, socially beneficial? Does producing inaccurate renderings of historical figures, even when specifically prompted for them, aim to benefit society, or does it aim to benefit an agenda? Another principle states that AI should not “reinforce bias or stereotypes.” Does showing an accurate depiction of history reinforce stereotypes? If someone asked it for a historically evil or unsavory individual, would it pull someone from a minority community? What then, DEI bureaucrats?

 

Does Google deserve some governmental punishment? Absolutely not. This is a company owned by private citizens that can and should be able to operate as it wishes, accountable solely to its shareholders and consumers. However, it seems Google and many other companies are focused not on making a product, but on making a point. This should be of great concern to all seeking to champion truth in an increasingly censorious society.

 

AI is not revolutionary, but evolutionary. The censorship of conservative speakers on campuses, social media companies conspiring with the government to suppress information politically damaging to certain candidates, and yes, even Gemini refusing to make a white George Washington are all different shades of the same color. These companies, radically overrun by anti-free-speech and virtue-signaling identity politics (and which also happen to be the main utilities for gathering information), could just as easily train their chatbots to refuse to articulate conservative principles when prompted, on the grounds that promulgating that information to anyone who asks would not be “socially beneficial” or would “reinforce bias.” The self-imposed AI guidelines Google has established, as lofty and morally upright as they seem, are merely recommendations from a group not actually involved in the creation of productive enterprise, one akin to the bloated and overpowered HR departments and administrative bureaucracies so prevalent across corporate America and academia. As powerful as we think AI is, it does, and essentially always will, simply do whatever Silicon Valley tells it to do.

 

It is an unavoidable reality that, no matter how powerful or seemingly revolutionary the tool, anything having to do with recalling, filtering, or processing information requires rules. These AI tools are based on probabilities: any prompt-generated image is an assembly of probabilistic guesses, drawn from the data the model has been fed and trained on, that eventually produces an image in accordance with those guesses. For an AI system to know whether it is doing a good job, it must be, well, told it is doing a good job when it produces a result in line with its prompt. A human is responsible for making the AI aware of whether it is doing a good or bad job. If the AI is constantly told it has failed whenever it produces renderings of white people, it will stop producing them. And as the tool evolves from an online toy and proof of concept into a full-blown replacement for the search engine, it becomes all the more dangerous for truth-seekers, because the agenda of big-tech bureaucrats will be instilled within the technology itself.
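To make that feedback loop concrete, consider a deliberately simplified, hypothetical sketch in Python. It is not Google’s actual training code, and the attribute labels "A" through "D" are placeholders; it only illustrates how a human-defined reward signal can steer what a probabilistic generator is willing to produce.

    import random

    # Hypothetical attribute labels a toy "generator" can sample from.
    ATTRIBUTES = ["A", "B", "C", "D"]

    def human_feedback(output: str) -> float:
        """Stand-in for a human rater: outputs labeled "A" are always scored as failures."""
        return -1.0 if output == "A" else 1.0

    # Start with equal preference weights for every attribute.
    weights = {a: 1.0 for a in ATTRIBUTES}

    for _ in range(1000):
        # Sample an output in proportion to the current weights
        # (a crude stand-in for probabilistic generation).
        choice = random.choices(ATTRIBUTES, weights=[weights[a] for a in ATTRIBUTES])[0]
        # Reinforce or suppress the sampled attribute according to the feedback signal.
        weights[choice] = max(0.01, weights[choice] + 0.1 * human_feedback(choice))

    print(weights)  # The weight on "A" collapses toward the floor; the generator "stops doing that."

After enough iterations, the weight on the penalized attribute collapses toward zero. That is precisely the dynamic described above: the system does not discover a truth, it learns whatever its raters reward.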

 

Perhaps a filter should be applied to these chatbots that first asks consumers whether they would like the truth or whether they would like the answer to be “socially beneficial.” I think we know what consumers will choose.

 

 

Cade Alcock is a member of The Steamboat Institute’s Emerging Leaders Council. 

March 29, 2024