When machines join the moral circle: The persona effect of generative AI agents in collaborative reasoning
British Journal of Educational Technology
Published online: 05 April 2026
Abstract
["British Journal of Educational Technology, EarlyView. ", "\nAbstract\nGenerative AI is increasingly positioned as a peer in collaborative learning, yet its effects on ethical deliberation remain unclear. We report a between‐subjects experiment with university students (N = 217) who discussed an autonomous‐vehicle dilemma in triads under three conditions: human‐only control, supportive AI teammate or contrarian AI teammate. Using moral foundations lexicons, argumentative coding from the augmentative knowledge construction framework, semantic‐trajectory modelling with BERTopic and dynamic time warping, and epistemic network analysis, we traced how AI personas reshape moral discourse. Supportive AIs increased grounded/qualified claims relative to control, consolidating integrative reasoning around care/fairness, while contrarian AIs modestly broadened moral framing and sustained value pluralism. Both AI conditions reduced thematic drift compared with human‐only groups, indicating more stable topical focus. Post‐discussion justification complexity was only weakly predicted by moral framing and reasoning quality, and shifts in final moral decisions were driven primarily by participants' initial stance rather than condition. Overall, AI teammates altered the process, the distribution and connection of moral frames and argument quality, more than the outcome of moral choice, highlighting the potential of generative AI agents as teammates for eliciting reflective, pluralistic moral reasoning in collaborative learning.\n\n\nPractitioner notes\n\nWhat is currently known about this topic\n\n\n\nAI tools can support discussion in collaborative learning, but evidence on ethical reasoning processes is mixed.\n\nMoral foundations theory and argumentation frameworks offer useful lenses for analysing value‐laden dialogue.\n\nConversation analytics (eg, ENA and topic models) can reveal changes in discourse structure beyond outcome scores.\n\n\n\n\nWhat this paper adds\n\n\n\nSupportive AI teammates increase grounded/qualified claims compared with human‐only groups, improving the quality of moral reasoning.\n\nContrarian AI teammates sustain value pluralism by connecting grounded claims to a wider moral repertoire, with only modest shifts in specific frames.\n\nBoth AI personas reduce thematic drift, stabilising discussion focus; however, final moral decisions rarely change and justification complexity gains are small.\n\n\n\n\nImplications for practitioners\n\n\n\nTreat AI as persona‐configured teammates: use supportive styles to scaffold integrative reasoning and contrarian styles to elicit critical contrast.\n\nDesign for process gains: instrument chats, monitor framing/argument quality and avoid over‐weighting post hoc decision change as the sole outcome.\n\nGovern participation: cap consecutive AI turns, keep timing natural and align persona goals with learning goals to prevent dominance while sustaining reflective dialogue.\n\n\n\n\n\n"]