Research - External

Who Owns the Idea? New Research on Human-AI Creative Collaboration

Debates about AI-generated creative work often center on the artifact: Is this image “real” art? Does this text have a “real” author? A recent study from researchers at UT Austin suggests we may be asking the wrong question. When humans and AI collaborate on creative work, ownership appears to be less about what the system did and more about what the human values.

Liu et al. recruited 54 researchers across the natural sciences, computer science, and the humanities to develop actual research proposals using an agentic LLM system. The system integrated three roles—Ideator, Writer, and Evaluator—and participants were randomly assigned to one of three control levels: Low (minimal human intervention after initial prompts), Medium (some steering capacity), or Intensive (fine-grained editing and feedback throughout). After completing their proposals, participants estimated the percentage of human versus AI contribution and classified the final output as “Human Work,” “AI Work,” or “Co-Created Work.”

The Invariance of Ownership

The central finding is striking in its consistency. Participants who classified their proposals as “Human Work” attributed higher contributions to themselves regardless of which control level they used. Those who classified proposals as “AI Work” consistently credited the AI as primary contributor—even when they had intensive control over the process. Only “Co-Created Work” showed variation across conditions, with attribution reflecting active negotiation.

What drove these stable patterns? The qualitative data points to three factors: the perceived originality of the initial idea, the execution effort involved in developing it, and recognition of how labor was divided. Different participants weighted these factors differently based on their values about what constitutes meaningful creative contribution.

One participant in the Intensive condition—with maximum control over the process—still attributed 80% ownership to the AI: “I think the high-level idea was mine, but all the technical details came from AI, that’s the most important on how you would actually execute that.” Another, in the Low condition, claimed the work as primarily human: “Idea-wise, 90% is from me.” Same type of output, radically different attribution—driven by what each person considered the “real” contribution.

The Effort Shift

The study also documented a transformation in human labor. When the AI handled idea generation, participants reported spending more effort on verification—checking citations, evaluating methodological feasibility, assessing whether suggestions were workable. As one participant described it, they adopted “a reviewer mindset.” The creative work didn’t disappear; it changed form.

Several participants expressed ambivalence about this shift. The system reduced the effort required for writing, but that effortful process was precisely what some found meaningful: “Actually writing things out by hand helps me think about them more deeply… I lost some agency in the research process.” Another noted that while writing literature reviews “can feel tedious… you end up discovering a lot, not just about the topic, but also about yourself.”

Methodological Notes

The combination of behavioral logging, standardized instruments (the Creativity Support Index and NASA Task Load Index), and semi-structured interviews allowed the researchers to examine not just what people attributed but why. The 35–40 minute task window also meant participants engaged substantively with the system: their attributions were not hypothetical but reflections on work they had just completed in their own areas of expertise.

Limitations Worth Noting

The task duration captures early-stage interaction patterns rather than the longer-term dynamics of sustained collaboration. The between-subjects design, while clean for comparison, leaves unexplored how the same person might respond differently across control levels. And the focus on research proposal writing—a genre with particular norms around authorship and contribution—may not generalize straightforwardly to visual art, fiction, or other creative domains where ownership debates run hottest.

Why This Matters to Us

From our perspective, this study illuminates something important about the recursive dynamics between humans and AI systems. The ownership question doesn’t resolve at the level of the artifact. It resolves—or fails to resolve—at the level of what the human brings to the interaction: their values, their sense of what counts as creative contribution, their prior beliefs about authorship.

This suggests that debates around AI art and writing may be less about the capabilities of the systems and more about unresolved disagreements among humans about what makes creative work valuable. The AI becomes a surface onto which these deeper disagreements are projected. Whether you see the output as “yours” or “the machine’s” depends significantly on which aspects of creative labor you consider essential—and that’s a question the technology cannot answer for you.

Attribution patterns remained stable across control levels: people with strong prior commitments saw what they expected to see regardless of how the interaction unfolded. Understanding human-AI creative collaboration therefore requires attending as much to the human side of the equation as to the capabilities of the system.


Reference

Liu, H., Choi, Y., Gautam, S., Jaffe, G., Rieh, S. Y., & Lease, M. (2026). Who Owns Creativity and Who Does the Work? Trade-offs in LLM-Supported Research Ideation. arXiv preprint arXiv:2601.12152. https://arxiv.org/abs/2601.12152