Scenario‐Based AI Literacy Scale (SAILS): Evidence for distinct instrumental and critical‐reflective AI skills and their difference from traditional digital skills
British Journal of Educational Technology
Published online on April 10, 2026
Abstract
["British Journal of Educational Technology, EarlyView. ", "\nAbstract\n\nGiven the increasing integration of Artificial Intelligence (AI) into everyday life and professional contexts, it is essential to investigate learners' existing capabilities regarding AI tools to inform possible interventions to equip them with necessary AI skills, but also advance the theoretical frameworks on digital skills measurement and development. In this vein, this study aims to validate a Scenario‐Based AI Literacy Scale (SAILS) tailored to vocational learners. In this study, we differentiate between instrumental (e.g., using AI to prepare a presentation) and critical‐reflective AI skills (e.g., recognizing AI‐generated deepfake content). The scale is operationalized through a scenario‐based self‐assessment approach, ensuring a context‐driven evaluation of AI skills. We validated the AI skills scale consisting of 12 scenarios (5 instrumental and 7 critical‐reflective) with a sample of police officers in training (N = 420), investigating reliability and validity of the scale. Additionally, we compared SAILS with a traditional digital skills scale to investigate convergent and discriminant validity. The analysis resulted in excellent reliability of the subscales (Cronbach's α = 0.88 for critical‐reflective skills and 0.89 for instrumental skills) and the entire scale as a whole (Cronbach's α = 0.92). A 3‐factor‐model (including critical‐reflective and instrumental AI skills subscales as well as digital skills) shows an acceptable model fit (RMSEA = 0.06, TLI = 0.93) with the standardized factor loadings ranging from 0.56 to 0.81 indicating an acceptable construct validity. AI skills were not only found to be related to more general digital skills, but also exhibiting some unique features, emphasizing that AI use requires skills which were not yet covered by digital literacy. 
These results support SAILS as an easy-to-use template for further research in contexts where objective performance measurement is not feasible, and with a broader array of learners.

Practitioner notes

What is already known about this topic

- Basic AI skills are important in an increasingly AI-driven world.
- Currently available measurements of AI literacy include self-assessments and objective performance-based tests.
- AI literacy is closely linked with digital skills.

What this paper adds

- AI literacy can be seen as related to, but separate from, traditional digital skills.
- AI literacy can be measured through a scenario-based approach that combines the strengths of self-assessment and objective measures.
- AI literacy can be empirically differentiated into instrumental and critical-reflective skills (e.g., recognizing AI-generated content).

Implications for practice and/or policy

- The SAILS instrument provides educators with a reliable tool to assess learners' AI literacy when objective measurement is problematic.
- AI literacy should be regarded, and measured, as more than one dimension, differentiating instrumental from critical-reflective skills to identify learners' strengths and needs.
- SAILS can be adapted to a wide variety of thematic contexts and is thus applicable across many educational settings.