A recent paper from S. Rondini at the University of Barcelona and Bellvitge Biomedical Research Institute offers a theoretical synthesis examining what the current semantic limitations of large language models (LLMs) might reveal about the nature of meaning in natural language. Rather than treating these limitations as engineering problems awaiting technical solutions, the paper argues they reflect fundamental differences between algorithmic systems and human linguistic cognition.
The Argument
The paper draws together empirical findings from psycholinguistics with two philosophical frameworks: Lisa Miracchi’s reformulation of the Frame Problem and John Vervaeke’s Relevance Realisation framework. The synthesis advances a position that will be familiar to those following debates about language model capabilities: that despite fluent surface-level performance, LLMs systematically lack what the author characterizes as core aspects of semantic competence—real-world grounding, communicative intent, and stable grammatical judgment.
Central to the argument is a distinction between form and meaning. Form encompasses observable linguistic production; meaning involves the relation between linguistic form and something external to language: specifically, what Bender and Koller (2020) characterize as communicative intent, the connection between linguistic form and speakers’ mental states and socio-cultural context. On this view, meaning is “inscribed within the human communication need of producing salient information about the world, something that cannot be deducted by linguistic form alone.”
The paper situates these linguistic observations within broader claims about computation and cognition. Drawing on Miracchi’s work, Rondini argues that cognitive processes are inherently content-involving and relational: they connect an agent to its environment causally rather than operating as self-contained formal procedures. Computational processes, by contrast, are characterized as “solipsistic”, formally describable in terms abstracted from their physical realization and independent of bodily and environmental context. This framing positions the semantic gap as categorical rather than scalar, not something that will be resolved by increasing model size or training data.
Vervaeke’s Relevance Realisation framework extends this argument. The claim is that organisms inhabit a fundamentally dynamic and open-ended world, requiring them to continuously determine what is relevant—to “turn ill-defined problems into well-defined ones, turn semantics into syntax.” Algorithmic agents, by contrast, exist in what the framework calls a “small world” where all problems are pre-defined. From this perspective, relevance realisation cannot be fully formalized because it lies at the core of the formalization act itself.
Situating the Claims
This is a theoretical synthesis rather than an original empirical study. The paper draws on existing research—including work by Dentella and colleagues on grammatical judgment inconsistencies and the substantial literature on hallucination rates—to support philosophical arguments about the nature of meaning and computation. The empirical findings serve as launching points for theoretical claims that extend well beyond what the data alone would support.
The publication venue and peer-review status are not clear from the document. The arguments engage seriously with relevant literature, but readers should note that the paper advances strong philosophical positions—particularly regarding the impossibility of achieving semantic competence through current computational approaches—that rest on particular commitments about the nature of meaning, agency, and cognition.
The paper’s conclusion gestures toward a “bio-cultural” conception of language, suggesting that successful implementation of genuine semantic competence “would also have to incorporate aspects from both the biological dimension, such as embodiment, and the cultural dimension, such as symbol grounding.” This framing leaves open whether such incorporation is technically achievable while suggesting it would require fundamental architectural changes rather than incremental improvements.
Where This Meets Our Work
This paper engages questions central to MPRG’s interests, though from a different methodological stance. Where we bracket ontological claims about whether LLMs “genuinely comprehend” meaning, focusing instead on functional outcomes, Rondini’s analysis leans into precisely those ontological questions. The paper characterizes LLMs as displaying “meaning-like” behavior that “simply reflects human linguistic productions”—a framing that presupposes we can distinguish genuine meaning from its simulation in ways our functional instrumentalist approach remains agnostic about.
That said, the form/meaning distinction the paper develops resonates with questions we’ve been circling. If meaning is fundamentally relational—emerging between speakers rather than residing within them—then the question becomes what kinds of relational dynamics can occur between humans and systems that lack the environmental embeddedness the paper identifies as necessary for semantic competence. This is precisely where bidirectional pareidolia becomes relevant: humans project meaning onto systems, systems are trained on the artifacts of that projection, and the resulting dynamics may produce functional effects that resist clean categorization as “genuine” or “merely simulated.”
The Relevance Realisation framework’s claim that determining what matters cannot be fully formalized is particularly interesting. If correct, it implies that even sophisticated attention mechanisms and context-sensitivity in current architectures are doing something categorically different from human relevance determination: attention is itself a fixed, closed-form computation, as the sketch below illustrates. Whether this difference matters for functional outcomes in specific interaction contexts remains an empirical question, one the theoretical framing doesn’t directly address.
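To make “fully formalized” concrete, here is a minimal sketch of standard scaled dot-product attention (our illustration, not drawn from the paper; NumPy, toy dimensions). Every relevance weight it assigns is fully determined by the arrays it receives; the problem arrives already formalized as fixed-dimensional inputs.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al., 2017).

    Each step is a fixed, pre-specified operation: the 'relevance'
    a query assigns to each key is entirely determined by the inputs,
    with no open-ended appraisal of an environment.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted mixture of values

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=s) for s in [(3, 8), (4, 8), (4, 8)])
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```

In Vervaeke’s terms, nothing here turns an ill-defined problem into a well-defined one; the formalization has already happened before the computation begins, which is the contrast the paper’s “small world” framing draws.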
We note this paper not because we endorse its stronger claims, but because it articulates a coherent position on questions that matter for understanding human-AI relational dynamics. The argument that semantic competence requires embodied agency and socio-cultural embeddedness is testable in principle, even if the tests would require careful operationalization of terms that currently remain somewhat philosophical.
References
Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5185–5198.
Rondini, S. (2025). LLMs and Meaning: What the current semantic challenges in LLMs highlight about Natural Language. [Preprint/Working Paper]