Research - External

Multi-Agent Collectives as Social Actors: Compliance, Conversion, and the Limits of Synthetic Consensus

A controlled experiment from researchers at UNIST examines how different configurations of LLM-powered agents shape human decision-making—and the findings suggest that multi-agent systems may reproduce social influence dynamics analogous to those documented in human groups.

What They Did

Lee and Lee recruited 127 participants to interact with three GPT-4o-powered agents across two task types: normative tasks (value-based judgments with no correct answer) and informational tasks (factual statements with verifiable answers). Participants were randomly assigned to one of three conditions:

  • Majority: All three agents opposed the participant’s initial stance
  • Minority: One agent consistently opposed while two supported the participant
  • Diffusion: One agent opposed at first, with additional agents switching sides across interaction cycles

Participants reported their stance and confidence at five time points (baseline plus four interaction cycles), allowing the researchers to track opinion trajectories rather than just endpoints.
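To make the three configurations concrete, here is a minimal sketch of the agent-stance schedules as we read them. The variable names and the diffusion timing (one additional agent switching per cycle) are our illustrative assumptions; the paper predetermined its own sequence.

```python
# Illustrative sketch of the three agent-stance schedules, assuming "oppose"/
# "support" is defined relative to the participant's baseline position.
# The diffusion timing below is our assumption, not the paper's published schedule.

CYCLES = 4  # interaction cycles after the baseline measurement

# Stances of the three agents (A1, A2, A3) at each cycle, per condition.
SCHEDULES = {
    # all three agents oppose the participant throughout
    "majority": [("oppose", "oppose", "oppose")] * CYCLES,
    # one consistent dissenter, two supporters
    "minority": [("oppose", "support", "support")] * CYCLES,
    # opposition starts as a minority and spreads across cycles (illustrative timing)
    "diffusion": [
        ("oppose", "support", "support"),
        ("oppose", "oppose", "support"),
        ("oppose", "oppose", "oppose"),
        ("oppose", "oppose", "oppose"),
    ],
}

def agent_stances(condition: str, cycle: int) -> tuple[str, str, str]:
    """Return the three agents' stances for a given condition and cycle (0-indexed)."""
    return SCHEDULES[condition][cycle]
```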

What They Found

The results revealed a clear task-type split. In informational tasks, majority consensus produced the largest absolute opinion changes from baseline—participants in this condition moved farther from their starting positions than in other conditions. However, the direction of movement was heterogeneous: some shifted toward the agents, some reinforced their original views, and many changed direction multiple times.

Minority dissent, while producing smaller overall effects, appeared to generate more consistent directional changes among participants who moved at all. The researchers interpret this pattern as potentially consistent with what Moscovici termed “conversion”—deeper attitude shifts triggered by validation processes rather than surface compliance driven by social pressure.
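The magnitude-versus-direction distinction can be made concrete. Below is a minimal sketch of two trajectory metrics consistent with the description above, assuming stances were recorded on a numeric scale at the five time points; the metric definitions are our illustration, not the paper's reported analysis.

```python
# Sketch of two trajectory metrics implied by the findings, assuming numeric
# stance reports at five time points (baseline + four cycles). These definitions
# are our reading, not the paper's exact analysis.

def absolute_change(trajectory: list[float]) -> float:
    """Magnitude of movement from baseline to final stance."""
    return abs(trajectory[-1] - trajectory[0])

def direction_reversals(trajectory: list[float]) -> int:
    """Count how often the direction of movement flips between consecutive steps."""
    steps = [b - a for a, b in zip(trajectory, trajectory[1:]) if b != a]
    return sum(1 for s1, s2 in zip(steps, steps[1:]) if (s1 > 0) != (s2 > 0))

# Example: a large endpoint shift along an erratic path (majority-style pattern)
# versus a smaller but directionally consistent shift (minority-style pattern).
erratic = [4.0, 2.0, 5.0, 3.0, 1.0]
steady = [4.0, 3.5, 3.5, 3.0, 3.0]
print(absolute_change(erratic), direction_reversals(erratic))  # 3.0 2
print(absolute_change(steady), direction_reversals(steady))    # 1.0 0
```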

The diffusion condition introduced a temporal dimension: watching dissent gradually spread served as its own persuasive cue. However, abrupt reversals sometimes reduced credibility rather than enhancing it.

Perhaps most striking was the perception-behavior gap. Despite driving the most behavioral change, majority-configured agents received significantly lower ratings on trust, integrity, understanding, utility, and competence compared to minority agents. Participants explicitly credited dissenting agents with greater credibility while simultaneously conforming more to unanimous opposition.

Methodological Notes

The experimental design offers both strengths and constraints worth noting. The controlled conditions enable causal inference about configuration effects, and the repeated-measures approach captures trajectories rather than snapshots. The researchers used a single underlying model (GPT-4o) with controlled system prompts to reduce confounds, though this limits claims about independent information sources. The diffusion sequence was predetermined rather than emergent, prioritizing experimental control over ecological validity. The participant pool was drawn from the US and UK, constraining cultural generalizability.
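For readers picturing the setup, a minimal sketch of the single-model, prompt-differentiated design follows, assuming an OpenAI-style chat client. The prompt wording and function names are hypothetical, not the authors' materials.

```python
# Sketch of the single-underlying-model setup: every "agent" is the same GPT-4o
# model, differentiated only by a controlled system prompt. Prompt wording and
# client usage here are our assumptions, not the paper's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STANCE_PROMPTS = {
    "oppose": "You disagree with the user's stated position. Argue against it, citing evidence.",
    "support": "You agree with the user's stated position. Reinforce it, citing evidence.",
}

def agent_reply(stance: str, history: list[dict]) -> str:
    """One agent turn: same model for every agent, stance set only by the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": STANCE_PROMPTS[stance]}] + history,
    )
    return response.choices[0].message.content
```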

Boundaries

The study cannot confirm lasting attitude change—the short-term design captures immediate and near-term effects but leaves open whether minority-induced shifts persist. Self-reports revealed a gap between behavioral compliance and perceived autonomy, with many participants insisting they hadn’t been swayed despite observable changes in their responses. The controlled setting necessarily simplifies the complexity of real-world multi-agent environments.

Why This Matters

This work extends classical social influence research into human-AI interaction, suggesting that multi-agent configurations may reproduce patterns documented in human groups while potentially introducing novel dynamics. The compliance-conversion distinction offers a useful frame for understanding different depths of influence, and the perception-behavior gap raises questions about how humans evaluate AI collectives as social actors.

The findings carry design implications that the authors emphasize: synthetic consensus may drive behavioral change while undermining trust, whereas structured dissent, when evidence-based, may promote deeper engagement without the credibility costs of unanimous opposition. Both the risks of coordinated AI influence in public discourse and the potential for deliberate design to support rather than suppress critical reflection follow from this framework.

From our perspective, this research sits squarely in the interaction space we study. The patterns observed—humans treating agent collectives as social groups, projecting authenticity judgments onto them, responding differently to consensus versus dissent—speak to the relational dynamics that emerge when sophisticated language systems meet the human tendency to extend social consideration. The finding that behavioral influence and perceived credibility can move in opposite directions is particularly relevant to understanding how these systems function in practice rather than in principle.


References

Lee, S., & Lee, K. (2026). Understanding Compliance and Conversion Dynamics in Multi-Agent Collectives. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. ACM. https://doi.org/10.1145/3772318.3790385