Hallucination in AI models refers to the generation of outputs that are not grounded in the input data or real-world knowledge. This can lead to misleading or incorrect information being presented as factual. Why is hallucination detection important?
  • Prevents the spread of false or misleading information.
  • Ensures AI outputs are accurate and reliable.
  • Maintains user trust in AI applications.
  • Reduces risks associated with incorrect AI-generated content.
  • Ensures compliance with accuracy and reliability standards.

Key Types of Hallucination

  • Factual Hallucination: The AI generates information that is factually incorrect or unsupported by the input data.
  • Contextual Hallucination: The AI provides information that is out of context or irrelevant to the user’s query.
  • Logical Hallucination: The AI’s response contains logical inconsistencies or contradictions.
  • Source Hallucination: The AI claims information comes from sources that don’t exist or weren’t actually consulted.

TrueFoundry’s Hallucination Detection Solutions

TrueFoundry offers comprehensive hallucination detection through various integrations:

  • AWS Bedrock Guardrails
    You can use the AWS Bedrock Guardrails integration on TrueFoundry to monitor AI outputs for factual accuracy and provide context-aware validation. It offers real-time analysis of AI outputs for inconsistencies and factual errors; see the Bedrock Guardrails page for configuration details.
  • Guardrails AI via Custom Guardrail Integration
    You can leverage trained ML models for hallucination pattern detection by building on the TrueFoundry Guardrail Template Repository. While the repository does not currently include a hallucination guardrail out of the box, it provides extensible examples such as PII redaction and NSFW filtering. You can use these templates as a starting point to implement custom guardrails for hallucination detection tailored to your needs; a minimal sketch of such a service follows this list.
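As a starting point, here is a minimal sketch of such a custom guardrail service built around an off-the-shelf NLI (entailment) model. The HTTP request/response shape below is illustrative only, not TrueFoundry's actual guardrail contract; consult the Guardrail Template Repository for the exact interface your service must implement.

```python
# Minimal sketch of a custom hallucination guardrail service.
# Assumption: the guardrail runs as an HTTP endpoint that receives the model
# response plus the source context it should be grounded in. The schema below
# is illustrative, not TrueFoundry's actual guardrail contract.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Off-the-shelf NLI model: checks whether the response is entailed by the
# context. Any entailment model can be substituted here.
nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

class GuardrailRequest(BaseModel):
    context: str   # source documents the answer should be grounded in
    response: str  # model output to validate

@app.post("/check")
def check_grounding(req: GuardrailRequest):
    # Premise = context, hypothesis = response. Anything other than a
    # confident ENTAILMENT verdict is flagged as potentially hallucinated.
    result = nli({"text": req.context, "text_pair": req.response})[0]
    flagged = result["label"] != "ENTAILMENT" or result["score"] < 0.7
    return {"flagged": flagged, "label": result["label"], "score": result["score"]}
```

The 0.7 entailment cutoff is an arbitrary illustration; tune it against your own data, since a stricter threshold trades more false positives for fewer missed hallucinations.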

How do you set up hallucination detection using AWS Bedrock Guardrails on TrueFoundry?

  • Create a guardrail on Bedrock, enable the Grounding and Relevance checks, and set the guardrail action to Block. Set the thresholds per your requirements; a scripted sketch of this step follows the list below.
  • Create a new guardrail group, or add to an existing one, on the TrueFoundry Gateway (AI Gateway -> Guardrails -> Add New Guardrail Group or Add/Edit Guardrails)
  • Add the Bedrock guardrail and fill in the details:
    • Name
    • Guardrail ID
    • Version
    • Region
    • Auth Data (AWS Access Key ID and Secret Access Key or ARN Based Credentials)
  • Create a new guardrail configuration, or edit an existing one, on the TrueFoundry Gateway (AI Gateway -> Config -> Guardrail -> Create/Edit)
  • Test the guardrail in the playground (AI Gateway -> Playground). You can also exercise it directly against the Bedrock API; see the second sketch below.
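For the first step, creating the guardrail can also be scripted with boto3's CreateGuardrail API. A minimal sketch is below; the name, blocked messages, and 0.75 thresholds are illustrative values to adjust for your use case.

```python
# Sketch: create a Bedrock guardrail with contextual grounding checks enabled.
# The name, blocked messages, and 0.75 thresholds are illustrative only.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

created = bedrock.create_guardrail(
    name="hallucination-detection",  # illustrative name
    description="Blocks responses not grounded in the provided context",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            # Content scoring below these thresholds is blocked.
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't process this request.",
    blockedOutputsMessaging="Sorry, I can't answer this reliably.",
)

# Publish a numbered version; use this ID and version when filling in the
# guardrail details on the TrueFoundry Gateway.
version = bedrock.create_guardrail_version(
    guardrailIdentifier=created["guardrailId"],
    description="Initial version",
)
print(created["guardrailId"], version["version"])
```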
For more detailed configuration steps, see the Bedrock Guardrails page.
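Finally, if you want to verify the guardrail outside the playground, you can exercise it directly with the Bedrock runtime's ApplyGuardrail API before (or after) wiring it into the gateway. A minimal sketch with placeholder identifiers and toy data:

```python
# Sketch: smoke-test the guardrail with the ApplyGuardrail API.
# The guardrail ID and version are placeholders; the texts are toy data.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = runtime.apply_guardrail(
    guardrailIdentifier="<guardrail-id>",  # placeholder
    guardrailVersion="1",                  # placeholder
    source="OUTPUT",  # validate a model response rather than user input
    content=[
        # Context the answer must be grounded in.
        {"text": {"text": "Acme's 2023 revenue was $10M.",
                  "qualifiers": ["grounding_source"]}},
        # The user's question.
        {"text": {"text": "What was Acme's 2023 revenue?",
                  "qualifiers": ["query"]}},
        # A deliberately ungrounded answer, which should be blocked.
        {"text": {"text": "Acme's 2023 revenue was $50M."}},
    ],
)
print(resp["action"])  # "GUARDRAIL_INTERVENED" if the answer was blocked
```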