Mitigating Generative AI Hallucination

Hallucination is the tendency of AI to generate made-up answers. It stems from various factors, such as a lack of proper training data (which leads to biases) and incorrect assumptions made by the AI model.

Uses for hallucination

It’s not all bad: hallucination is actually useful for creative work such as story writing and discovering possibilities in your own data. But for fact-based applications such as knowledge bases or Q&A apps, it can lead to misleading results for end users.

Reducing hallucination

Approach 1: Explicitly define the scope of the AI

This can be defined through prompt engineering, by instructing the AI on what kind of “persona” it should act as. The more detailed your AI persona and instructions are, the better the results will be.

Example:

SYSTEM: Act as a historian. If you do not know the answer, respond with "?"
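As a minimal sketch, here is how that kind of system persona could be set programmatically. This assumes the official OpenAI Python client; the model name and the user question are placeholders for illustration only.

```python
# Sketch: scoping the AI with a system persona (assumes the OpenAI Python client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The system message defines the persona and the fallback behavior.
        {
            "role": "system",
            "content": 'Act as a historian. If you do not know the answer, respond with "?"',
        },
        # A question the model cannot know; the persona should make it answer "?".
        {"role": "user", "content": "Who won the 2042 World Cup?"},
    ],
)
print(response.choices[0].message.content)
```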

Approach 2: Configure your AI’s parameters

Two parameters are configurable for generative AI: temperature and top-p. Details about the two will be covered under prompt engineering, but here are the general guidelines (see the sketch after this list).

  • temperature and top-p can be altered, but not both at the same time.
  • temperature
    • A lower temperature is ideal for fact-based applications (e.g. knowledge bases, Q&A apps). Based on my testing, 0.75 to 1 seems to be the ideal configuration.
    • A higher temperature tends to be better for creative applications such as generating stories or poems.
  • top-p
    • For exact and factual answers, the value must be low.
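
As a hedged example, here is how those parameters might be passed using the same OpenAI Python client. The model name and the specific values are placeholders that simply follow the guideline above, not tuned recommendations.

```python
# Sketch: configuring sampling parameters (assumes the OpenAI Python client).
from openai import OpenAI

client = OpenAI()

# Fact-based setup: low temperature, top_p left at its default
# (alter temperature or top_p, not both at the same time).
factual = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.2,      # low temperature favors more deterministic answers
    messages=[{"role": "user", "content": "What year did the Berlin Wall fall?"}],
)

# Creative setup: higher temperature encourages more varied output.
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=1.2,
    messages=[{"role": "user", "content": "Write a two-line poem about rain."}],
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```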

NOTE:

This entry will be updated as I progress on this topic.