Tri-Layer
Codename: The Evidence Encoder
Version: 1.0
Status: Draft
Strategic Alignment: Phase 2 (Evidence & Trust) of the Machine-First Maturity Model.
1. Executive Summary
The Core Problem: Most AI hallucinations in production systems are not model failures; they are data failures. They stem from collapsing three fundamentally different types of information—Facts (verifiable truth), Inference (probabilistic conclusions), and Opinion (sentiment)—into a single, undifferentiated text blob. When an AI cannot distinguish a verified ingredient list from a user's emotional review, it is forced to hallucinate certainty or to treat anecdotes as universal truths.
The Solution: Tri-Layer is an Evidence Graph Builder and ingestion engine. It parses unstructured product information and marketing claims, explicitly separating them into the three distinct layers of reality required for machine judgment.
The Goal: To enable "Epistemic Honesty"—allowing AI agents to use anecdotal data (such as reviews) without being misled by it, ensuring high-value brands are cited as authoritative sources rather than treated as hallucination risks.
2. User Personas
• The Data Architect (Primary): Responsible for feeding data to internal LLMs or external agents (Amazon Rufus, ChatGPT). Needs a way to prevent the model from treating marketing copy as a legal guarantee.
• The Brand Compliance Lead: Needs to ensure that subjective user reviews (Layer 3) are never presented by the AI as medical or scientific facts (Layer 1).
• The AI Visibility Strategist: Wants to increase the "Citation" rate of the brand by providing structured, machine-legible evidence.
3. Core Value Proposition
• Stop Hallucinations at the Source: By explicitly labeling data types, we prevent the model from "inventing certainty" where none exists.
• Turn Noise into Signal: Allows the system to use "Reported Experience" (reviews) as usable evidence rather than discarding it or blindly believing it.
• Epistemic Honesty: Builds trust with external AI agents by admitting limits and valid contexts (e.g., "Works for oily skin" vs. "Works for everyone").
4. Functional Requirements
4.1. The Three-Layer Classification Engine
• Requirement: The system must ingest raw text/data and classify every attribute or claim into one of three strict layers:
1. Layer 1: Authoritative Knowledge (Fact). Immutable, verifiable truths (e.g., Ingredients, ISO Standards, Price, Dimensions). Must be timestamped and sourced.
2. Layer 2: Observed Reality (Inference). Patterns derived from outcomes or probability (e.g., "Usually ships in 24 hours," "Returns are rare," "85% efficacy rate").
3. Layer 3: Reported Experience (Opinion). Subjective accounts (e.g., "Smells like vanilla," "Best serum ever").
• Constraint: The system must prevent "Layer Collapse"—Layer 3 data must never be stored in a Layer 1 field.
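The routing contract above can be sketched as follows. This is a minimal illustration only: the marker lists and the `classify_claim` function are hypothetical placeholders (a production engine would use an NLP model rather than keyword heuristics), but the contract is the same — every claim is assigned exactly one layer, which is what makes Layer Collapse structurally impossible downstream.

```python
from enum import Enum

class Layer(Enum):
    AUTHORITY = 1   # Layer 1: immutable, verifiable fact (must be sourced)
    INFERENCE = 2   # Layer 2: pattern derived from outcomes or probability
    OPINION = 3     # Layer 3: subjective, reported experience

# Hypothetical heuristic markers; a real system would use a trained classifier.
INFERENCE_MARKERS = ("usually", "typically", "% of", "rate", "on average")
OPINION_MARKERS = ("best", "feels", "smells", "love", "i think", "miracle")

def classify_claim(text: str) -> Layer:
    """Assign exactly one layer to a raw claim string."""
    t = text.lower()
    if any(marker in t for marker in OPINION_MARKERS):
        return Layer.OPINION
    if any(marker in t for marker in INFERENCE_MARKERS):
        return Layer.INFERENCE
    # Default bucket: treated as a factual assertion, which then must
    # carry a source and timestamp before it may be stored as Layer 1.
    return Layer.AUTHORITY
```

Because classification returns a single enum value, the storage layer can reject any write where the claim's assigned layer does not match the target field's layer, enforcing the Layer Collapse constraint mechanically.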
4.2. Evidence Graph Construction
• Requirement: Data must not be stored as flat text strings, but as atomic components of an Evidence Graph.
• Structure: Every data point must include:
◦ Source: Who said it? (Brand, Lab, User, Third Party).
◦ Claim: What is asserted?
◦ Subject: What product/entity is it about?
◦ Context: Under what conditions is this true? (e.g., "Valid only for US customers").
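The four-part structure above maps naturally onto an atomic record type. The sketch below assumes a hypothetical `EvidenceAtom` shape (the name and field set are illustrative, not a defined API); the point is that a graph node is a typed tuple of Source, Claim, Subject, and Context, never a flat string.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceAtom:
    """One atomic node of the Evidence Graph (illustrative shape)."""
    source: str   # who said it: Brand, Lab, User, Third Party
    claim: str    # what is asserted
    subject: str  # which product/entity the claim is about
    context: str  # under what conditions the claim holds
    layer: int    # 1 (Fact), 2 (Inference), or 3 (Opinion)

    def __post_init__(self):
        if self.layer not in (1, 2, 3):
            raise ValueError("layer must be 1, 2, or 3")

# Example: a sourced Layer 1 fact with an explicit validity context.
atom = EvidenceAtom(
    source="Lab_Report_V2",
    claim="Contains 1% Retinol",
    subject="product:12345",
    context="all markets",
    layer=1,
)
```

Making the record immutable (`frozen=True`) reflects the intent that evidence is appended, not edited in place; a correction would be a new atom superseding the old one.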
4.3. The Uncertainty Encoder (Confidence Calibration)
• Requirement: The system must append Confidence Intervals and Validity Windows to Layer 2 (Inference) data.
• Rationale: An AI that expresses "85% confidence" is more trusted by downstream agents than one that falsely claims 100% certainty.
• Output Example: Instead of shipping: fast, the API returns shipping_speed: { val: "24h", confidence: 0.9, context: "weekdays" }.
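A minimal encoder for that output shape might look like the sketch below. The function name `encode_inference` and its signature are assumptions for illustration; the requirement is only that a Layer 2 value is never emitted as a bare string.

```python
def encode_inference(value, confidence, context=None, validity_window=None):
    """Wrap a Layer 2 value with calibration metadata instead of a bare string."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    encoded = {"val": value, "confidence": confidence}
    if context is not None:
        encoded["context"] = context
    if validity_window is not None:
        encoded["validity_window"] = validity_window
    return encoded
```

With this in place, the shipping example from the spec becomes `encode_inference("24h", 0.9, context="weekdays")`, and a downstream agent can propagate the 0.9 confidence rather than inventing certainty.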
4.4. The Conflict Resolver
• Requirement: When Layer 1 (Fact) contradicts Layer 3 (Opinion), the system must prioritize Authority but preserve the Opinion as "Reported Conflict."
• Use Case: Fact says "Contains Nuts." Opinion says "Safe for nut allergies." The system must flag this as a high-risk contradiction to prevent the AI from citing the review as safety advice.
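The priority-with-preservation rule can be sketched as a small merge function. Everything here is illustrative: `resolve_conflict`, its parameters, and the `contradicts` flag (which an upstream contradiction detector would supply) are hypothetical names, but the behavior matches the requirement — the Fact wins, the Opinion survives as a flagged "Reported Conflict," and safety-critical contradictions are marked high risk.

```python
def resolve_conflict(fact, opinion, contradicts, safety_critical=False):
    """Keep Layer 1 authoritative; preserve a contradicting Layer 3 claim as a flagged conflict."""
    record = {
        "authoritative": fact,   # Layer 1 always wins for presentation
        "opinion": opinion,      # Layer 3 is preserved, never silently dropped
        "reported_conflict": contradicts,
    }
    if contradicts:
        # High-risk contradictions (e.g. allergen safety) must block the AI
        # from citing the opinion as advice.
        record["risk"] = "high" if safety_critical else "low"
    return record
```

Applied to the use case above: `resolve_conflict("Contains Nuts", "Safe for nut allergies", contradicts=True, safety_critical=True)` yields a record the downstream AI can cite only as a flagged contradiction, never as safety guidance.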
5. Data Architecture & Schema
The core output of Tri-Layer is a JSON-LD structured object that explicitly separates the layers:
{
  "product_id": "12345",
  "layer_1_authority": {
    "ingredients": ["Retinol", "Water"],
    "certification": "ISO-9001",
    "source": "Lab_Report_V2"
  },
  "layer_2_inference": {
    "efficacy_claim": "Reduces wrinkles",
    "confidence": 0.85,
    "evidence_basis": "Clinical_Trial_N50",
    "validity_window": "2024-2025"
  },
  "layer_3_opinion": {
    "sentiment_summary": "Users feel it works fast",
    "sample_size": 500,
    "top_anecdote": "Cleared my skin in a week",
    "disclaimer": "Individual_Result"
  }
}
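Objects in this shape should be validated before publication. The sketch below is a hypothetical stdlib-only validator (a real deployment might use a JSON Schema library instead); the required-field sets encode the spec's rules that facts must be sourced, inferences must be calibrated, and opinions must carry a disclaimer.

```python
# Illustrative required fields per layer, derived from the schema above.
REQUIRED_KEYS = {
    "layer_1_authority": {"source"},                        # facts must be sourced
    "layer_2_inference": {"confidence", "evidence_basis"},  # inferences must be calibrated
    "layer_3_opinion": {"disclaimer"},                      # opinions must be marked subjective
}

def validate_trilayer(doc: dict) -> list:
    """Return a list of violations; an empty list means the object is well-formed."""
    errors = []
    for layer, required in REQUIRED_KEYS.items():
        block = doc.get(layer)
        if block is None:
            errors.append(f"missing {layer}")
            continue
        for key in required - set(block):
            errors.append(f"{layer} missing required field '{key}'")
    confidence = doc.get("layer_2_inference", {}).get("confidence")
    if confidence is not None and not 0.0 <= confidence <= 1.0:
        errors.append("confidence out of range [0, 1]")
    return errors
```

Rejecting objects with a missing `source` on Layer 1 or a missing `disclaimer` on Layer 3 is what keeps Layer Collapse out of the published feed, not just out of the ingestion pipeline.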
6. Use Cases
Use Case A: The "Miracle Cure" Problem (Beauty)
• Input: Marketing text says "Miracle anti-aging cure!"
• Tri-Layer Action: Classifies "Miracle cure" as Layer 3 (Opinion/Marketing). Classifies "Contains 1% Retinol" as Layer 1 (Fact).
• Result: Downstream AI agents cite the retinol percentage (Fact) but treat the "Miracle" claim as subjective context, avoiding a hallucination of guaranteed medical results.
Use Case B: The Refund Policy (E-Commerce)
• Input: Policy says "Refunds within 30 days." User reviews say "They never refund."
• Tri-Layer Action: Stores the policy in Layer 1 (Authoritative Knowledge). Stores the aggregated user-reported pattern in Layer 2 (Observed Reality).
• Result: The AI can answer truthfully: "The policy allows refunds (Authority), but observed reality suggests high friction (Observation)".
7. Success Metrics (KPIs)
1. Separation Score: Percentage of data attributes successfully moved from generic text blobs to specific layers (Target: >95%).
2. Citation Rate: How often the structured data is cited as "Evidence" by external search/reasoning engines vs. just provided as background links.
3. Hallucination Reduction: Decrease in AI responses that present marketing claims as verified facts.
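The Separation Score is straightforward to compute from the ingested attribute set. The function below is a hypothetical sketch assuming each attribute record carries an optional `layer` field once classified; unclassified attributes count against the score.

```python
def separation_score(attributes):
    """Fraction of attributes assigned a specific layer rather than left as a text blob."""
    if not attributes:
        return 0.0
    layered = sum(1 for attr in attributes if attr.get("layer") in (1, 2, 3))
    return layered / len(attributes)
```

For example, a catalog where 2 of 3 attributes have been layered scores roughly 0.67, well short of the >95% target, and the unlayered remainder is the work queue for the ingestion team.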
8. Roadmap
• Phase 1 (Ingestion): Build connectors for CMS (Shopify, WordPress) to ingest and tag raw descriptions.
• Phase 2 (The Graph): Implement the "Evidence Graph" structure to link claims to sources.
• Phase 3 (Drift Detection): Alert owners when "Observed Reality" (Layer 2) diverges significantly from "Authoritative Knowledge" (Layer 1).