AI Hallucinations



AI hallucinations sound bad, but what are they?

When I asked Gemini, “What are AI hallucinations?”, here is the response: AI hallucinations refer to instances where the AI generates outputs that are:

  • False or misleading: The AI presents information as factual, even though it’s incorrect or fabricated.
  • Nonsensical or illogical: The output might be incoherent or lack a clear connection to reality.
  • Fabricated: The AI might invent information, such as nonexistent facts, sources, or events.

When I asked Gemini what the causes of these hallucinations are, I got the following response:

  • Training data issues: Biased, incomplete, or flawed training data can lead AI models to learn incorrect patterns.
  • Lack of grounding: AI models may struggle to connect their knowledge to real-world facts and contexts.
  • Overfitting: When a model memorizes training data too closely, it can perform poorly on new, unseen data, leading to fabricated outputs.
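
To make the overfitting point concrete, here is a minimal Python sketch (my own illustration using scikit-learn on synthetic data, not anything from Gemini’s answer). A high-degree polynomial fits noisy training points almost perfectly but scores far worse on held-out data, which is the same memorize-then-fail pattern described above.

```python
# Minimal overfitting sketch: a degree-15 polynomial memorizes noisy
# training data, then generalizes poorly to unseen data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 1))                         # 40 sample points
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 40)  # noisy sine signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree:2d}: "
          f"train R^2 = {model.score(X_train, y_train):.2f}, "
          f"test R^2 = {model.score(X_test, y_test):.2f}")

# Typically the degree-15 model scores near 1.0 on the training split but
# much lower (often negative) on the test split: it memorized the noise.
```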

AI does have its drawbacks. It will confabulate. It will rarely, if ever, return “I do not know.” Instead it will lie, confabulate, or hallucinate. AI will even give you references to try to convince you of the truth of what it’s saying, but those references are fabricated. This is a huge concern.
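
One partial mitigation is to ask for uncertainty up front. The sketch below is illustrative only: it assumes the OpenAI Python SDK with an API key already configured, and the model name is a placeholder. Instructions like this reduce, but do not eliminate, confabulation.

```python
# Sketch: prompting a chat model to admit uncertainty instead of
# confabulating. Assumes the OpenAI Python SDK (pip install openai) and
# an OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "If you are not confident an answer is correct, reply "
                "exactly 'I do not know.' Do not invent citations; only "
                "cite sources you are certain exist."
            ),
        },
        # Trick question: the Nobel Prizes were first awarded in 1901,
        # so there is no correct answer for the model to find.
        {"role": "user", "content": "Who won the 1897 Nobel Prize in Physics?"},
    ],
)
print(response.choices[0].message.content)
```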

A deepfake is computer-generated content that appears so real it convinces you it is genuine, but it is not real at all. For example, a deepfake could be a video of a famous person doing something that never happened. Chatbot conversations can seem equally real, but they are not truly real.

Concerned people have created the Responsible Artificial Intelligence Institute. It’s on LinkedIn.
