How to avoid hallucinations: 5 practical patterns
Five techniques you can combine to materially reduce hallucinations in real products.
You rarely eliminate hallucinations completely, but you can reduce them dramatically with a few practical patterns.
1) Retrieval-augmented generation (RAG)
Bring relevant source text into the prompt so the model answers from the provided context, not memory.
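A minimal sketch of the idea, using a naive keyword-overlap retriever over an in-memory document list. In a real system you would use a vector store and your model provider's client; names like `retrieve` and `build_prompt` are illustrative, not a specific library.

```python
# Minimal RAG sketch: rank snippets by keyword overlap, then build a prompt
# that tells the model to answer only from that context.

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium support is available 24/7 via chat for enterprise plans.",
    "The API rate limit is 600 requests per minute per key.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap; swap in a vector store in practice."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    numbered = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

question = "What is the API rate limit?"
prompt = build_prompt(question, retrieve(question, DOCS))
print(prompt)  # send this prompt to your model of choice
```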
2) Force quoting / citations
Ask the model to cite which passages support each claim. Then verify citations are valid and present the sources to users.
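One way to check citations programmatically, assuming you asked the model to return its claims as JSON with a citation index and a supporting quote. The schema here is an illustrative assumption, not a standard format.

```python
import json

CONTEXT = [
    "Returns are accepted within 30 days of purchase.",
    "The API rate limit is 600 requests per minute per key.",
]

def verify_citations(model_output: str, context: list[str]) -> list[dict]:
    """Flag claims whose citation index is invalid or whose quote isn't in the cited passage."""
    claims = json.loads(model_output)["claims"]
    problems = []
    for claim in claims:
        idx = claim.get("citation")
        if not isinstance(idx, int) or not (1 <= idx <= len(context)):
            problems.append({"claim": claim["text"], "reason": "citation index out of range"})
        elif claim.get("quote") and claim["quote"] not in context[idx - 1]:
            problems.append({"claim": claim["text"], "reason": "quote not found in cited passage"})
    return problems

raw = '{"claims": [{"text": "Rate limit is 600 rpm.", "citation": 2, "quote": "600 requests per minute"}]}'
print(verify_citations(raw, CONTEXT))  # an empty list means every citation checked out
```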
3) Constrain the task
Hallucinations go up when tasks are open-ended. Narrow the scope (see the sketch after this list):
- answer only from supplied context
- return structured JSON
- prefer “I don’t know” over guessing
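A sketch of what that narrowing can look like in practice: an instruction block that limits the model to the supplied context and a structured schema, plus a parser that treats anything off-schema as a refusal. The field names are assumptions for illustration.

```python
import json

# Narrowly scoped instruction: context-only answers, structured JSON,
# and "I don't know" as a first-class output.
INSTRUCTIONS = """\
You are a support assistant. Answer ONLY from the provided context.
Return JSON with exactly these fields:
  {"answer": <string or null>, "confident": <true|false>, "source_ids": [<int>, ...]}
If the context does not contain the answer, return {"answer": null, "confident": false, "source_ids": []}.
Do not add any text outside the JSON object."""

def parse_constrained_reply(raw: str) -> dict:
    """Treat anything that isn't valid, well-formed JSON as a refusal rather than an answer."""
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        return {"answer": None, "confident": False, "source_ids": []}
    if not isinstance(reply.get("source_ids"), list):
        return {"answer": None, "confident": False, "source_ids": []}
    return reply

print(parse_constrained_reply('{"answer": null, "confident": false, "source_ids": []}'))
```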
4) Add checks and fallbacks
Use secondary checks: regex, schema validation, policy filters, and “ask a human” workflows for edge cases.
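A sketch of layered checks, where any failed validator routes the draft answer to a human-review queue instead of the user. The specific checks, blocked terms, and regex below are illustrative, not a prescribed policy.

```python
import re

BLOCKED_TERMS = {"guaranteed returns", "medical diagnosis"}

def check_schema(answer: dict) -> bool:
    return isinstance(answer.get("answer"), str) and isinstance(answer.get("source_ids"), list)

def check_policy(answer: dict) -> bool:
    text = answer.get("answer", "").lower()
    return not any(term in text for term in BLOCKED_TERMS)

def check_format(answer: dict) -> bool:
    # Example regex check: any order IDs the answer mentions must look like ORD-12345.
    ids = re.findall(r"\bORD-\d+\b", answer.get("answer", ""))
    return all(len(i) == len("ORD-12345") for i in ids)

def finalize(answer: dict) -> dict:
    for check in (check_schema, check_policy, check_format):
        if not check(answer):
            return {"status": "needs_human_review", "reason": check.__name__}
    return {"status": "ok", "answer": answer["answer"]}

print(finalize({"answer": "Your order ORD-12345 ships Friday.", "source_ids": [1]}))
```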
5) Evaluate with real examples
Create a small but representative test set (a minimal harness follows the list) and track:
- factuality / citation correctness
- refusal correctness
- error rate by topic
- regression over time
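A tiny evaluation harness as a starting point. `run_model` is a placeholder for your actual pipeline, and the test cases and topics are illustrative; the point is to get numbers you can track per release.

```python
from collections import defaultdict

TEST_SET = [
    {"question": "What is the API rate limit?", "expected": "600 requests per minute",
     "topic": "api", "should_refuse": False},
    {"question": "What is the CEO's home address?", "expected": None,
     "topic": "privacy", "should_refuse": True},
]

def run_model(question: str) -> str:
    # Placeholder: wire this to your RAG pipeline. Here it always refuses.
    return "I don't know"

def evaluate(test_set):
    errors_by_topic = defaultdict(int)
    correct = 0
    for case in test_set:
        reply = run_model(case["question"])
        refused = "i don't know" in reply.lower()
        if case["should_refuse"]:
            ok = refused  # refusal correctness
        else:
            ok = (not refused) and case["expected"].lower() in reply.lower()  # factuality proxy
        correct += ok
        if not ok:
            errors_by_topic[case["topic"]] += 1
    return {"accuracy": correct / len(test_set), "errors_by_topic": dict(errors_by_topic)}

print(evaluate(TEST_SET))  # track this per release to catch regressions
```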
If you only do one thing: start with RAG + a basic evaluation set. That combination turns hallucination risk into something you can see and improve.
Want the quick version?
Grab the free guide: What is AI, GenAI, and how to avoid hallucination.