Generative artificial intelligence in higher education: Emotional tensions and ethical declaration
British Journal of Educational Technology
Published online on November 24, 2025
Abstract
The increasing use of Generative Artificial Intelligence (GenAI) tools such as ChatGPT in higher education has raised questions about authorship, ethical responsibility, and academic transparency. While institutional guidelines exist, many remain vague and ineffective, leaving students to interpret disclosure obligations on their own. This mixed-methods study investigates how undergraduates at a large Singaporean university decide whether to disclose GenAI use, introducing the concepts of strategic and conceptual non-disclosure. We examine two psychological mechanisms: the placebo effect, where perceived cognitive enhancement encourages risk-taking behaviours, and the AI ghostwriter effect, where responsibility for authorship is externalized. Quantitative results show that the AI ghostwriter effect significantly predicts lower disclosure, particularly in theoretical disciplines (e.g., Humanities, Science, and Arts). The placebo effect is not statistically significant but shows a consistent negative trend in applied fields. Qualitative data supports and explains these patterns. Students in theoretical disciplines express more ethical discomfort, whereas those in applied fields view GenAI as a practical tool. Ambiguity in institutional policies and concerns about detection fairness also emerge as key factors driving non-disclosure.
This study advances understanding of ethical behaviour in the GenAI context and offers practical recommendations for discipline-sensitive disclosure policies.

Practitioner notes

What is already known about this topic
- Students often use GenAI in academic work but do not always disclose it.
- Institutional guidelines are often vague, leading to confusion about disclosure.
- Disciplinary norms affect how students interpret authorship and integrity.

What this paper adds
- This study provides empirical evidence for the AI ghostwriter effect, showing that students who frame GenAI as a personal thinking tool are significantly less likely to disclose its use.
- It reveals a key disciplinary divergence: students in theoretical disciplines (e.g., Humanities and Sciences) more frequently report ethical discomfort, while their peers in applied fields (e.g., Engineering and Business) more readily normalize GenAI as a standard tool.
- It introduces and validates a distinction between two motivations for non-disclosure: conceptual non-disclosure (not perceiving the need to declare) is a stronger driver of non-disclosure than strategic non-disclosure (intentional concealment), particularly in theoretical disciplines.

Implications for practice and/or policy
- Policies can be designed to target the ghostwriter effect in theoretical disciplines by having guidelines that explicitly distinguish between tool use and undisclosed co-authorship.
- To address students' fears, institutions are encouraged to build trust through transparent and fair disclosure procedures, including clear, tiered protocols and a robust appeals process.