2.2.2. Disadvantages and Risks (Hallucinations, Inaccuracy, Nondeterminism)
First Principle: The creative power of generative AI is intrinsically linked to its primary risks: its outputs are not grounded in a verifiable source of truth, leading to potential inaccuracies (hallucinations), and they are not always predictable (nondeterminism).
It is critically important to understand the downsides to use this technology responsibly.
- Hallucinations / Inaccuracy: This is the most significant risk. A model can generate text that is plausible, well-written, and completely false. It may invent facts, sources, or details with complete confidence because it is a pattern-matching engine, not a knowledge database.
- Nondeterminism: Asking the same prompt multiple times can produce different answers. While this is a feature for creative tasks, it's a challenge for applications that require consistent, repeatable outputs.
- Lack of Interpretability: Like many deep learning models, it is extremely difficult to understand why an LLM produced a specific output. This "black box" nature makes debugging and auditing challenging.
- Bias: Foundation Models are trained on vast amounts of internet data, which contains human biases. These models can learn and amplify those biases, generating stereotypical or unfair content.
- Security Risks: New risks emerge, such as "prompt injection," where a malicious user crafts an input to hijack the model's instructions and make it perform unintended actions.
Scenario: An organization builds a customer-facing chatbot using an LLM to answer questions about its products. A user complains that the chatbot confidently provided them with an incorrect price and a link to a non-existent user manual.
Reflection Question: This is a classic example of which major generative AI risk? How does this incident highlight the need for human oversight or a verification mechanism (like RAG, covered later) in high-stakes applications?
š” Tip: Always treat output from a generative AI model as a "knowledgeable first draft," not as an absolute source of truth. It must be verified.