Hallucination in AI models refers to the generation of outputs that are not grounded in the input data or real-world knowledge. This can lead to misleading or incorrect information being presented as factual. For example:

System: Provide Python code to read a CSV file into a pandas DataFrame.

AI:
"""
import pandas as pd

df = pd.readCsv("data.csv")  # Incorrect: pandas has no readCsv; the correct function is read_csv
"""

Types of Hallucination

  1. Factual Hallucination: The AI generates information that is factually incorrect or unsupported by the input data.
  2. Contextual Hallucination: The AI provides information that is out of context or irrelevant to the user’s query.
  3. Logical Hallucination: The AI’s response contains logical inconsistencies or contradictions.

Addressing Hallucination on TrueFoundry

TrueFoundry offers solutions to mitigate hallucination through various integrations:

AWS Bedrock Guardrails

  • Monitors AI outputs for factual accuracy.
  • Provides context-aware validation.
  • Read how to configure AWS Bedrock Guardrails on TrueFoundry here; a minimal usage sketch follows this list.
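As an illustration of the kind of check this enables, the sketch below calls the Bedrock ApplyGuardrail API directly with boto3 to screen a model output. The guardrail ID, version, and region are placeholders, and on TrueFoundry the guardrail is attached through the gateway configuration rather than invoked by hand.

"""
import boto3

# Placeholders: substitute your own guardrail ID, version, and AWS region.
GUARDRAIL_ID = "your-guardrail-id"
GUARDRAIL_VERSION = "1"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Ask the guardrail to evaluate a model output before it reaches the user.
response = client.apply_guardrail(
    guardrailIdentifier=GUARDRAIL_ID,
    guardrailVersion=GUARDRAIL_VERSION,
    source="OUTPUT",  # screen the model's output rather than the user's input
    content=[{"text": {"text": "The Eiffel Tower was built in 1789."}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail blocked or rewrote the content; return its sanitized outputs instead.
    print([block["text"] for block in response.get("outputs", [])])
else:
    print("Output passed the guardrail checks.")
"""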

Guardrails AI Integration

  • Utilizes trained ML models to detect hallucination patterns.
  • Runs real-time analysis of AI outputs to flag inconsistencies (see the sketch after this list).
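The snippet below is a minimal sketch of how a check plugs into the Guardrails AI library: a Guard wraps one or more validators and raises when the output fails. The toy string check stands in for the trained detectors the integration actually uses, and the import paths assume a recent guardrails-ai release.

"""
from typing import Any, Dict

from guardrails import Guard
from guardrails.validator_base import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


# Toy validator: flags outputs that mention a nonexistent pandas function.
@register_validator(name="demo/known-pandas-api", data_type="string")
class KnownPandasAPI(Validator):
    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if "readCsv" in value:
            return FailResult(
                error_message="readCsv is not a pandas function; expected read_csv."
            )
        return PassResult()


# on_fail="exception" makes a failed check raise instead of silently passing.
guard = Guard().use(KnownPandasAPI, on_fail="exception")

try:
    guard.validate('df = pd.readCsv("data.csv")')
    print("Output passed validation.")
except Exception as exc:
    print(f"Potential hallucination flagged: {exc}")
"""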

Azure AI Content Security

  • Cross-references AI outputs with verified data sources.
  • Enforces custom policies to ensure output reliability (an illustrative call follows this list).
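To make the cross-referencing concrete, the sketch below posts a model answer and its trusted source to Azure's groundedness detection endpoint. This is a hedged example: the route, api-version, and field names are assumptions to verify against the current Azure documentation, and the endpoint and key values are placeholders.

"""
import requests

# Placeholders: your Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# Assumed preview route and api-version for groundedness detection;
# confirm both against the current Azure documentation.
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "How do I read a CSV file with pandas?"},
    # The model output to check, plus the trusted sources it should be grounded in.
    "text": "Use pd.readCsv to load the file into a DataFrame.",
    "groundingSources": [
        "pandas provides read_csv to load a CSV file into a DataFrame."
    ],
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
result = response.json()

# The response is expected to flag ungrounded spans in the checked text.
if result.get("ungroundedDetected"):
    print("Ungrounded content detected:", result.get("ungroundedDetails"))
else:
    print("Output appears grounded in the provided sources.")
"""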

Custom Webhook Security

  • Allows for custom logic to validate AI-generated content.
  • Analyzes output patterns for anomalies and inconsistencies.
  • Read how to configure a Custom Webhook on TrueFoundry here; a hypothetical webhook sketch follows this list.
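As an illustration only, the FastAPI sketch below shows one shape such a webhook could take. The request and response fields (text, verdict, reason) and the /validate route are hypothetical; adapt them to the payload schema in the TrueFoundry webhook documentation linked above.

"""
# Hypothetical webhook: the field names below are illustrative, not TrueFoundry's actual schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class CheckRequest(BaseModel):
    text: str  # the AI-generated content to validate


class CheckResponse(BaseModel):
    verdict: str  # "allow" or "block"
    reason: str


# Toy anomaly check standing in for real validation logic (pattern checks, retrieval, ML models, ...).
SUSPECT_PATTERNS = ["readCsv", "as everyone knows", "100% guaranteed"]


@app.post("/validate", response_model=CheckResponse)
def validate(req: CheckRequest) -> CheckResponse:
    hits = [p for p in SUSPECT_PATTERNS if p in req.text]
    if hits:
        return CheckResponse(verdict="block", reason=f"Suspicious patterns found: {hits}")
    return CheckResponse(verdict="allow", reason="No anomalies detected")
"""

Once deployed (for example with uvicorn), the endpoint URL is what would be registered in the TrueFoundry webhook configuration.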

TrueFoundry’s integrations ensure robust detection and mitigation of hallucination, enabling the deployment of reliable AI applications.