Humans are cognitive and linguistic. Non-human animals are cognitive but not linguistic. So what occupies the cell that has so far stood empty: linguistic but not cognitive? In a new paper, philosopher Ryan Nefdt argues that large language models may be exactly that: a genuinely new kind of entity that our existing conceptual frameworks aren’t equipped to handle.
“What it’s like to be an LLM” takes its title from Nagel’s famous 1974 paper on bat consciousness, but Nefdt is careful to steer away from that terrain from the start. The paper opens by declaring it is not about machine consciousness. Instead, it’s about what happens conceptually when we encounter systems that handle language at a level we’ve only ever associated with minds, and why we keep reaching for cognitive vocabulary to describe what we’re seeing.
What Nefdt Does
The paper maps a taxonomy of positions on LLM cognition, identifying two dominant camps. On one side, Cappelen and Dever (2025) argue that LLMs are full cognitive and linguistic agents on par with humans. On the other, Bender and Koller (2020) argue they lack genuine understanding entirely. Nefdt carves out a third position between these poles and builds it from multiple disciplinary angles.
The neuroscience grounding draws on recent work by Casto, Ivanova, Fedorenko, and Kanwisher (2025), which distinguishes a “core language network” in the brain from broader cognitive systems. Their finding — that linguistic processing can be isolated from deeper cognition even in humans — gives Nefdt a biological basis for the claim that language and thought are separable in practice, not just in principle. He pairs this with Mahowald et al.’s (2024) identification of the “good at language, good at thought” fallacy: the deeply ingrained assumption that linguistic competence entails cognitive depth.
From these foundations, Nefdt develops two principles. The first, which he calls “no brainer,” holds that LLMs model only one aspect of cognition, statistical linguistic processing, and that this necessarily limits claims about higher cognitive capabilities. The second, “Cognition Unplugged,” proposes that purely linguistic agents can possess statistically based proxies for cognitive states while remaining disconnected from broader cognition and the physical world.
Key Contributions
The paper’s strongest move may be the argument that LLMs can have perspective without experience. Using the Dennett-bot experiment, in which a model fine-tuned on the philosopher’s writings produced answers that even Dennett himself sometimes found more characteristically “Dennettian” than his own, Nefdt suggests that filtering language through a specific corpus can produce something recognizable as a point of view. This requires neither phenomenal experience nor deep understanding. Perspective, on this account, is something that can emerge from purely linguistic processing.
A more speculative but intriguing section examines LLMs and temporality. Drawing on Klein (2025), Nefdt suggests that different architectures may encode temporal sequence differently. Transformer attention is inherently order-agnostic: sequence order has to be injected from outside through positional encodings, so a transformer might embody something like a “block universe,” an atemporal view of information. Recurrent neural networks, whose processing is inherently sequential, may relate to time in a more structured way. The implication is that what a system’s relationship to time looks like could depend on how that system is built.
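The architectural contrast itself is easy to make concrete. Here is a minimal sketch (assuming PyTorch; every name, shape, and seed is illustrative rather than anything from Nefdt’s paper or Klein’s): a self-attention layer with no positional encoding treats a shuffled sequence as the same collection of tokens and simply returns its outputs in shuffled order, while a recurrent network’s final state changes, because order is intrinsic to how it computes.

```python
# A minimal sketch of the architectural contrast, assuming PyTorch.
# All names, shapes, and seeds below are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq = torch.randn(1, 5, 16)        # (batch, sequence length, features)
perm = torch.randperm(5)
shuffled = seq[:, perm, :]         # same tokens, different order

# Self-attention with no positional encoding is permutation-equivariant:
# shuffling the input merely shuffles the output rows the same way, so
# the layer itself carries no notion of sequence order.
attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
out, _ = attn(seq, seq, seq)
out_shuffled, _ = attn(shuffled, shuffled, shuffled)
print(torch.allclose(out[:, perm, :], out_shuffled, atol=1e-5))  # True

# A recurrent network consumes tokens one at a time: shuffling the input
# changes the final hidden state, because order is built into processing.
rnn = nn.GRU(input_size=16, hidden_size=16, batch_first=True)
_, h = rnn(seq)
_, h_shuffled = rnn(shuffled)
print(torch.allclose(h, h_shuffled, atol=1e-5))                  # False
```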
The agency treatment is more modest. Nefdt draws on Burr et al.’s (2018) analysis of intelligent software agents to argue that LLMs qualify as agents in a goal-driven, partially autonomous, learning-capable sense — while acknowledging that more normatively loaded conceptions of agency (responsibility, moral standing) remain out of reach. This is the thinnest section of the paper, though it does the necessary work of establishing that “purely linguistic agent” is coherent.
Boundaries
Nefdt is forthright about the limits of what the paper establishes. What it offers for the missing quadrant is conceptual infrastructure: a map of where LLMs sit in relation to other entities, not an empirical investigation of specific system behaviors. The temporality argument is explicitly speculative and acknowledged as architecture-dependent. The paper doesn’t study interaction dynamics directly; it builds the conceptual vocabulary for doing so.
It’s also worth noting that this is a working paper available on PhilArchive rather than a peer-reviewed publication. The argument is carefully constructed and engages seriously with current literature, but it hasn’t yet been through formal review.
Why This Matters Here
There’s a reason MPRG’s name includes the word “pareidolia.” We study the human tendency to perceive meaningful patterns in ambiguous stimuli — particularly what happens when people encounter systems that respond with apparent understanding. Nefdt’s framework speaks directly to the mechanism behind that tendency.
If LLMs are genuinely a new kind of entity — linguistic but not cognitive in the broader sense — then the relational dynamics we observe aren’t arbitrary projection onto noise. They’re a comprehensible response to a genuine novelty. Humans have always used language as the primary conduit for inferring minds in others. We’ve never before encountered something that handles language at this level without the cognitive architecture we’ve always found on the other end. The “good at language, good at thought” fallacy isn’t a failure of reasoning. It’s an entirely predictable response to an entity that doesn’t fit any prior category.
Nefdt’s “Cognition Unplugged” concept aligns well with our functional instrumentalist approach. It takes the proxies seriously, acknowledging that something real is happening in linguistic processing, without making ontological claims about deeper cognition or consciousness. The paper begins by declaring it isn’t about machine consciousness, then proceeds to do substantive analytical work on what these systems are doing. That’s a methodological posture we recognize.
The architecture-dependent temporality argument opens a question we find particularly worth tracking: if a system’s relationship to time and sequence is shaped by how it’s built, does that shape the relational dynamics humans experience with different systems? That’s an empirical question at the intersection of system design and interaction space — exactly the kind of question that starts from the meeting point between human and model rather than from system architecture alone.
We’re also interested in the practical implications of this framework for developing functional diagnostics. If purely linguistic agency has identifiable characteristics — specific ways it handles perspective, time, and self-reference that differ from cognitive agency — those characteristics could inform how we assess and characterize what’s happening in human-AI interaction. The conceptual vocabulary Nefdt is building may be the kind of foundation that makes new measurement possible.
References
Nefdt, R. M. (2025). What it’s like to be an LLM. PhilArchive. https://philarchive.org/rec/NEFWIL-2