When people embrace AI writing tools, what happens to how they feel about writing itself? A new study in Acta Psychologica suggests the answer is more complicated than we might expect: acceptance appears to increase both enjoyment and anxiety simultaneously.
Sun, Wang, Mendoza, and Li (2026) examined the relationships among LLM acceptance, emotional states, and self-efficacy in the context of second-language academic writing. Their central finding—that accepting LLMs as writing tools predicts higher levels of both positive and negative emotions—points to something worth sitting with.
The Study
The researchers surveyed 643 Chinese graduate students who use LLMs for English academic writing. Participants reported on four constructs: their acceptance of LLMs (perceived usefulness, ease of use, intention to use, and actual use), their enjoyment while writing, their anxiety about writing, and their self-efficacy—their confidence in their ability to produce quality academic work in English.
The study drew on two theoretical frameworks. Control-value theory positions LLM acceptance as a “distal factor” that shapes emotional experiences during cognitively demanding tasks. Self-efficacy theory posits that emotional states serve as one source of people’s beliefs about their own capabilities. Together, these frameworks predict that acceptance should influence emotions, and emotions should influence confidence.
Structural equation modeling tested these pathways across the full sample.
What They Found
The direct effects largely confirmed expectations. LLM acceptance positively predicted self-efficacy (β = 0.328), and enjoyment positively predicted self-efficacy (β = 0.395). Anxiety negatively predicted self-efficacy (β = −0.176). So far, straightforward.
The surprise came with acceptance and anxiety. The researchers hypothesized that greater LLM acceptance would reduce anxiety—a reasonable expectation given prior work showing that AI tools can lower stress in learning contexts. Instead, LLM acceptance positively predicted anxiety (β = 0.120). Students who more fully embraced these tools also reported more worry about their writing.
This created what the authors call “competitive mediation.” Acceptance builds self-efficacy through two simultaneous pathways: a facilitating route through increased enjoyment, and an inhibiting route through increased anxiety. The positive pathway is stronger, so the net effect remains beneficial. But the negative pathway exists, partially offsetting the gains.
The enjoyment pathway showed “complementary mediation”—acceptance increases enjoyment, which increases self-efficacy, reinforcing the direct positive effect. The anxiety pathway works against this. Both are statistically significant. Both are real.
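The decomposition above is simple arithmetic: each indirect effect is the product of its two path coefficients, and the total effect is the direct path plus both products. The sketch below uses the β values reported in the article where they were quoted; the acceptance → enjoyment path was not quoted above, so its value here is hypothetical and marked as such.

```python
# Competitive mediation as path products (standardized betas).
# Values marked "reported" come from Sun et al. (2026) as summarized above;
# A_ENJOY is a HYPOTHETICAL placeholder for the unquoted acceptance -> enjoyment path.
A_ENJOY = 0.45        # hypothetical: acceptance -> enjoyment
ENJOY_SE = 0.395      # reported:     enjoyment  -> self-efficacy
A_ANX = 0.120         # reported:     acceptance -> anxiety
ANX_SE = -0.176       # reported:     anxiety    -> self-efficacy
DIRECT = 0.328        # reported:     acceptance -> self-efficacy (direct)

indirect_enjoy = A_ENJOY * ENJOY_SE   # facilitating route (positive)
indirect_anx = A_ANX * ANX_SE         # inhibiting route (negative)
total = DIRECT + indirect_enjoy + indirect_anx

print(f"indirect via enjoyment: {indirect_enjoy:+.4f}  (hypothetical a-path)")
print(f"indirect via anxiety:   {indirect_anx:+.4f}")
print(f"total effect:           {total:+.4f}")
```

With the reported coefficients, the anxiety route subtracts only about 0.02 from the total effect, which is why the net influence of acceptance on self-efficacy stays positive even though the inhibiting pathway is statistically significant.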
Boundaries
Several limitations shape how we should interpret these findings. The sample consisted entirely of Chinese graduate students writing academic English—a specific population facing specific pressures. Self-report measures capture what participants believe they experience, not necessarily what they do. And the cross-sectional design means the causal directions are theoretical: the data themselves are correlational.
The authors acknowledge they didn’t measure potential mechanisms for the acceptance-anxiety link. Why would embracing a tool increase worry? They speculate about factors like technophobia, self-doubt when comparing one’s work to AI output, or reduced sense of control. These remain hypotheses for future work to test.
Why This Matters to Us
MPRG’s research program centers on the relational dynamics between humans and AI systems. We’re particularly interested in “the human side of the equation”—what our interactions with these systems reveal about human cognition, attachment, and emotional response.
This study offers a clean example of something we encounter repeatedly: the relationship between humans and AI tools resists simple characterization. Acceptance doesn’t straightforwardly reduce friction or increase comfort. It appears to intensify the emotional stakes in both directions.
The finding resonates with qualitative work the authors cite, in which students report that AI-mediated learning elicits positive and negative emotions at once. We find this simultaneity more interesting than either pole alone. It suggests that engaging seriously with LLMs—accepting them as genuine tools for one’s work—involves something more complex than either enthusiasm or resistance.
One interpretation: accepting LLMs as writing partners raises the salience of writing itself. The tool becomes a mirror, reflecting both possibilities and inadequacies back at the user. This would explain why both enjoyment and anxiety increase together. The stakes feel higher because the user is now genuinely invested in the collaboration.
Whether or not this interpretation holds, the empirical pattern is worth noting. Human-AI interaction may not follow the utilitarian logic we sometimes assume—where better tools simply make tasks easier and feelings more positive. The emotional landscape appears more textured than that.
References
Sun, F., Wang, J., Mendoza, L., & Li, H. (2026). Exploring the relationships among large language model acceptance, enjoyment, anxiety, and self-efficacy in L2 academic writing. Acta Psychologica. https://doi.org/10.1016/j.actpsy.2026.106237