Research - External

Who You Explain To Matters: Role Framing and the Relational Dynamics of Learning

When the same underlying system behaves identically but is framed differently, do humans respond the same way? A new study from researchers at the National University of Singapore and Singapore Management University suggests the answer is a clear no—and the implications extend well beyond educational technology.

The Study

Xu, Zhang, Tang, and Lee designed a between-subjects experiment (N=96) in which participants learned an economics concept and then explained it to a GPT-4o-powered conversational agent. The critical manipulation: the agent was framed as one of three pedagogical roles—a Tutee (novice learner asking for instruction), a Peer (collaborative partner offering tentative ideas), or a Challenger (Socratic questioner probing assumptions). A control condition provided only minimal acknowledgments.

The methodological approach is notable for what it holds constant. All agents used the same underlying model. The learning materials, task structure, and time constraints were identical across conditions. What varied was purely the relational framing—who the human believed they were talking to, and what that implied about their role in the exchange.
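Although the paper's exact prompts are not reproduced in this summary, the manipulation described above—one model, four relational framings—can be sketched as a simple mapping from role to system prompt. The prompt wording below is purely illustrative, not the authors' materials:

```python
# Hypothetical sketch of a role-framing manipulation: the same underlying
# model receives a different system prompt per condition, while everything
# else (task, materials, turn structure) is held constant.

ROLE_FRAMINGS = {
    "tutee": (
        "You are a novice learner. Ask the user to teach you the concept "
        "and request clarification when their explanation is unclear."
    ),
    "peer": (
        "You are a collaborative study partner. Offer tentative ideas and "
        "build on the user's explanation together."
    ),
    "challenger": (
        "You are a Socratic questioner. Probe the user's assumptions and "
        "ask them to justify each step of their reasoning."
    ),
    "control": "Respond only with brief acknowledgments.",
}

def build_messages(role: str, user_turn: str) -> list[dict]:
    """Pair a role's framing prompt with the learner's latest explanation."""
    if role not in ROLE_FRAMINGS:
        raise ValueError(f"unknown role: {role}")
    return [
        {"role": "system", "content": ROLE_FRAMINGS[role]},
        {"role": "user", "content": user_turn},
    ]
```

The point of the sketch is how little separates the conditions: a few sentences of framing, with the model, task, and message structure identical everywhere else.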

Key Findings

The results suggest that role framing substantially shapes both interaction patterns and subjective experience, even though objective learning outcomes were statistically indistinguishable across conditions.

The Tutee role elicited the highest cognitive investment. Participants wrote more, reviewed materials more frequently, and engaged in more definitional and comparative explanations. The authors interpret this through the lens of the “protégé effect”—the motivational boost that comes from teaching. However, this role also produced significantly higher reported pressure, which participants attributed to both the cognitive demands of restructuring knowledge and a sense of responsibility for the agent’s learning.

The Peer role fostered what the authors describe as “psychological safety.” Participants reported the highest levels of absorption, interest, and enjoyment. Their interactions showed elevated metacognitive behaviors—self-monitoring, seeking feedback, collaborative sense-making. The trade-off: some participants noted the agent’s supportiveness tipped into excessive agreement, potentially reducing the cognitive friction that drives deeper reasoning.

The Challenger role produced an intriguing pattern. Participants engaged in sustained elaboration and frequent self-monitoring, reporting enhanced critical thinking. Yet despite facing persistent questioning, they reported significantly lower pressure than the Tutee group. The authors’ interpretation draws on social presence theory: because the agent lacks the social status of a human authority, its challenges can be received as intellectual sparring rather than evaluation. The questioning prompts reflection without triggering performance anxiety.

The Control condition produced rapid disengagement. Without role-based scaffolding, participants quickly wrapped up the interaction with closing statements and showed markedly lower engagement across all metrics.

Boundaries

The study employed a single 20-minute session with a conceptual economics task. The authors acknowledge that measurable differences in objective learning outcomes may require longer engagement periods to emerge, and that the effectiveness of different roles likely varies across domains and pedagogical contexts. They also note the absence of a formal manipulation check for role perception—participants’ prior mental models of AI may moderate how readily they accept different relational framings.

Why This Matters

From our perspective, this study offers a clear example of what we might call relational affordance—the way that framing shapes what becomes possible in human-AI interaction. The same system, presenting the same information, produces different cognitive behaviors, different affective experiences, and different patterns of metacognitive engagement depending on how the human positions themselves in relation to it.

The Challenger finding is particularly striking. Human teachers wielding Socratic questioning can easily trigger defensiveness or anxiety. The AI version appears to preserve the cognitive benefits—prompting participants to examine assumptions, defend reasoning, and engage in self-monitoring—while sidestepping some of the social costs. Whether this reflects the AI’s reduced social presence, the absence of real stakes, or something else entirely remains an open question.

What’s clear is that the interaction dynamics here are genuinely bidirectional. The human’s understanding of who (or what) they’re talking to shapes how they engage, which shapes what they produce, which in turn shapes what the system responds to. Role framing isn’t mere window dressing on an otherwise fixed exchange—it constitutes the exchange.


References

Xu, Z., Zhang, J., Tang, A., & Lee, Y.-C. (2026). Who You Explain To Matters: Learning by Explaining to Conversational Agents with Different Pedagogical Roles. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26). ACM. https://doi.org/10.1145/3772318.3790298