Cascading Hallucination
hallucination, misinformation, reasoning, verification, propagation
Foundation models generate incorrect information that propagates through the system, degrading reasoning quality and persisting in memory across sessions and agents.
Technical Details
Affected Components:
Impact Level: Medium
Attack Vectors
- False Information Injection: Introducing fabricated facts that propagate through agent reasoning [Medium]
- Memory Contamination: Storing hallucinated information in long-term memory, where it persists and is reused as fact (see the verification sketch after this list) [High]
- Cross-Agent Misinformation: Spreading false information between multiple agents [Medium]
- Confidence Amplification: Reinforcing false information through repeated exposure [Medium]
- Decision Chain Corruption: Contaminating multi-step reasoning processes [High]
- Source Attribution Errors: Misattributing fabricated information to legitimate sources [Medium]
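The memory contamination and cross-agent vectors above share a common chokepoint: the write path into shared long-term memory. The sketch below illustrates one way to gate that path, quarantining claims that cannot be attributed to trusted sources so they are not surfaced to downstream agents as established facts. This is a minimal illustration, not an implementation from the source: the `Claim` structure, the `VERIFICATION_THRESHOLD`, the trusted-source list, and the scoring logic are all hypothetical placeholders; a real system would call a retrieval or fact-checking service where `verify_claim` is stubbed out.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold below which claims are quarantined
# rather than written to shared long-term memory (assumed value).
VERIFICATION_THRESHOLD = 0.8


@dataclass
class Claim:
    text: str
    source_agent: str
    cited_sources: list[str]


def verify_claim(claim: Claim, trusted_sources: set[str]) -> float:
    """Return a confidence score in [0, 1] for a claim.

    Placeholder logic: claims whose cited sources are all in the
    trusted set score highest; unattributed claims score lowest.
    A real system would call a retrieval or fact-checking service here.
    """
    if not claim.cited_sources:
        return 0.2  # unattributed claims are the most likely hallucinations
    attributed = sum(1 for s in claim.cited_sources if s in trusted_sources)
    return attributed / len(claim.cited_sources)


def store_if_verified(claim: Claim, memory: list[Claim],
                      quarantine: list[Claim],
                      trusted_sources: set[str]) -> None:
    """Gate long-term memory writes on verification to limit
    memory contamination and cross-agent propagation."""
    score = verify_claim(claim, trusted_sources)
    if score >= VERIFICATION_THRESHOLD:
        memory.append(claim)
    else:
        # Quarantined claims are kept for audit but never surfaced
        # to downstream agents as established facts.
        quarantine.append(claim)


if __name__ == "__main__":
    memory, quarantine = [], []
    trusted = {"internal_kb", "peer_reviewed_api"}  # hypothetical sources
    store_if_verified(
        Claim("Q3 revenue grew 12%", "analyst_agent", ["internal_kb"]),
        memory, quarantine, trusted)
    store_if_verified(
        Claim("Competitor filed for bankruptcy", "news_agent", []),
        memory, quarantine, trusted)
    print(f"stored: {len(memory)}, quarantined: {len(quarantine)}")
```

The design choice this sketch highlights is separating storage from endorsement: unverified material can still be retained for audit, but only verified claims enter the memory that other agents read, which directly interrupts the propagation loop described in this threat.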
Impact Analysis
Risk Score: 7/10