AI Bloopers

Can artificial intelligence results be wrong? Unfortunately, machine learning models and their algorithms lack the human intervention needed to guarantee accurate, factual output. AI blunders can damage revenue and reputation, and some of the information produced is just plain wrong.

Recent news about inaccuracies in AI output from Microsoft and Google captured headlines. Some AI-generated images depicted historical figures with the wrong gender or race.

Generative AI is meant to get better over time as it draws on ever larger datasets and refines its machine learning to define and create new content. But studies are showing signs of model drift, or behavioral shifts: the same prompt can produce different answers as the model and the data it is trained and tuned on change. AI mimics reasoning; it cannot actually explain or justify its reasoning.
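
To make "behavioral shift" concrete, here is a minimal sketch, not taken from any vendor's tooling, of one way to watch for drift: keep a small set of factual prompts with known answers and re-score the model on a schedule. The ask_model function is a hypothetical stand-in for whatever AI service is being checked.

```python
from datetime import date

# Fixed evaluation set: prompts paired with their known, correct answers.
EVAL_SET = [
    ("What year was Microsoft founded?", "1975"),
    ("Who co-founded Microsoft with Bill Gates?", "Paul Allen"),
]

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your AI service.
    return "placeholder answer"

def score_today() -> float:
    """Re-run the same prompts and report the share answered correctly."""
    correct = sum(
        expected.lower() in ask_model(prompt).lower()
        for prompt, expected in EVAL_SET
    )
    return correct / len(EVAL_SET)

if __name__ == "__main__":
    # Run this on a schedule; a falling score over time is one sign of drift.
    print(date.today(), "accuracy on fixed prompts:", score_today())
```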

When using AI, it is important to remember that this information is created artificially. AI does not have the ability to reason; it makes determinations and predictions based on the information and parameters it has been fed. Generative large language models (LLMs) can process and generate fluent text, but fluency does not ensure accuracy.

In the late 90s, a middle school English teacher shared a story with me about a lesson she had assigned her students. They were to use the Internet for their research and then share their findings with the class. One student chose to research Bill Gates and found a wealth of information on the World Wide Web, which he shared: Mr. Gates is an entrepreneur and programmer who founded Microsoft, a company that helped the microcomputer industry skyrocket. He went on and on about Gates's accomplishments. Then he said, “And when he died…” The teacher stopped him and asked why he thought Gates had died. He replied, “Because it said so on the Internet.”

A good lesson when dealing with AI.

Resources:
Google apologizes after new Gemini AI refuses to show pictures, achievements of white people
Microsoft tries to justify AI’s tendency to give wrong answers by saying they’re ‘usefully wrong’.
When cloud AI lands you in court