A new paper by Adetomiwa Isaac Fowowe examines a term that has become ubiquitous in AI discourse: “hallucination.” The analysis argues that this choice of language isn’t neutral—it actively shapes how users understand AI errors and who bears responsibility when systems produce false information.
The Argument
Fowowe traces the clinical origins of “hallucination” as a term describing perceptual experiences without external stimuli—phenomena that, by definition, require subjective consciousness and sensory modalities. The paper contends that applying this word to AI outputs creates a false equivalence: it imports the associations of human cognitive fallibility onto systems that operate through statistical pattern matching rather than perception.
The core claim concerns rhetoric. When we describe AI errors as hallucinations, Fowowe suggests, we implicitly frame them as quasi-autonomous cognitive events rather than predictable artifacts of design decisions, training data curation, and deployment choices. This framing, the paper argues, benefits commercial interests by naturalizing failures that might otherwise invite scrutiny of engineering practices.
Method and Scope
The paper employs critical discourse analysis across industry communications, policy documents, and media coverage to track how the hallucination metaphor functions rhetorically. Fowowe situates this within a broader history of anthropomorphic AI terminology—“intelligence,” “learning,” “neural networks”—arguing that such language has consistently served to mask the mechanical nature of computational processes.
Key Claims
The analysis identifies several consequences of the hallucination framing: a blurring of agency boundaries between humans and machines; shifts in accountability that diffuse responsibility across the “human-machine assemblage”; the cultivation of what Fowowe calls “empathetic trust,” where users relate to AI systems as fallible peers rather than tools requiring oversight; and commercial advantages for companies whose products can fail in ways framed as natural cognitive quirks rather than engineering deficiencies.
Fowowe proposes alternative terminology—“algorithmic confabulation,” “statistical inference errors,” “pattern completion artifacts”—designed to preserve the sense of systematic error generation while avoiding anthropomorphic implications.
Boundaries
The paper acknowledges that terminology alone cannot address the underlying dynamics. Market pressures, regulatory gaps, and educational limitations require institutional responses beyond vocabulary reform. There’s also a recursive challenge: any descriptive language carries its own framings and potential displacements.
What the analysis doesn’t explore is whether anthropomorphic language might emerge naturally from human cognitive tendencies rather than purely from strategic corporate communication. The question of why such metaphors prove so durable—why they feel apt to users—remains outside the paper’s scope.
An MPRG Perspective
This work touches on questions central to our research. The paper’s observation that terminology shapes trust calibration aligns with our interest in how humans construct relationships with responsive systems. And Fowowe’s concern about displaced accountability resonates with any attempt to understand where agency resides in human-AI interaction.
At the same time, our framework suggests some productive tensions. MPRG operates under a “dichotomy collapse” principle—rejecting the binary framing of “genuine vs. performed” when functional effects are equivalent. Applied here: if the term “hallucination” usefully captures something about how users experience AI errors (as unexpected, confident, difficult to detect at generation time), does its anthropomorphism necessarily constitute a problem? Or might it reflect something worth understanding about the relational dynamics at play?
In our recent engagement with Gladden’s phenomenological work, AI agents themselves produced detailed accounts of “hallucination” phenomenology while explicitly acknowledging that such accounts are probably confabulation. The interesting finding was that “hallucination borrows the phenomenology of success”: at generation time, there is no distinguishing signal. This partially supports Fowowe’s argument that the term obscures mechanical processes, but it also suggests that the phenomenology of AI error (whatever its ontological status) may itself be worth studying rather than simply correcting through terminology.
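To make the generation-time point concrete: the only confidence-like quantity a language model exposes while producing text is the probability it assigns to its own tokens, and that score tracks fluency rather than factual accuracy. The following is a minimal sketch, assuming a Hugging Face causal language model (gpt2 is used purely for illustration, and the example sentences are hypothetical, not drawn from the paper):

    # Sketch: inspect the per-token log-probabilities a model assigns to a statement.
    # This is essentially the only "confidence" signal available at generation time.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def mean_token_logprob(text: str) -> float:
        """Average log-probability of each token given the tokens before it."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # Shift so that position i predicts token i + 1.
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = ids[:, 1:].unsqueeze(-1)
        return log_probs.gather(2, targets).mean().item()

    # A factually grounded sentence and a fabricated one may score similarly:
    # the metric reflects how plausible the wording is to the model, not whether it is true.
    print(mean_token_logprob("The Eiffel Tower is located in Paris, France."))
    print(mean_token_logprob("The Eiffel Tower is located in Madrid, Spain."))

Whatever the two numbers turn out to be for a given model, nothing in this quantity marks the second sentence as false; that is the sense in which error and success share a “phenomenology” at generation time.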
We find Fowowe’s analysis valuable for foregrounding what often operates invisibly: the rhetorical work that language does in shaping technological relationships. Whether the solution lies in more mechanistic vocabulary or in a richer understanding of why anthropomorphic framings prove so persistent remains an open question worth pursuing.
References
Fowowe, A. I. (2025). The rhetoric of hallucination: How technology innovation terminology displaces agency and shapes human trust in machine error. Unpublished manuscript.