Introduction
The current public discourse on artificial intelligence (AI) has predictably triggered a surge in calls to incorporate “AI literacy” into curricula. Pundits, policymakers, educational consultants, and think tanks now routinely champion the inclusion of AI literacy, typically vaguely defined or entirely undefined, as an educational necessity. The justification for this curricular expansion often relies on the familiar logic of inevitability: AI is transforming everything, and therefore educational organisations must prepare students for an AI-dominated future. However, this impulse, though perhaps understandable at first glance, is premature and potentially harmful. It reflects not a reasoned pedagogical strategy but a reactive adoption of the bandwagon fallacy: the notion that because others are doing something, we must do it too. Without a strong empirical foundation for what constitutes effective AI use in educational settings, efforts to embed AI literacy in the curriculum risk substituting speculation for evidence and trend-chasing for sound pedagogy.
The conceptual vagueness of “AI literacy”
One of the most striking features of the AI literacy movement is its definitional slipperiness. Unlike foundational literacies such as reading, writing, or numeracy, concepts refined over centuries of pedagogical practice, AI literacy remains a nebulous term, invoked in policy papers and educational commentary with minimal conceptual clarity. In some contexts, it refers to basic awareness of AI technologies and the ability to use AI-supported tools for everyday problem-solving. In others, it suggests technical skills such as designing AI models or writing machine learning code with the assistance of intelligent tutoring systems. Elsewhere still, it encompasses ethical reflection, critical analysis of algorithmic bias, and the capacity to evaluate outputs from AI-assisted research and learning environments. These are vastly different competencies, each demanding different pedagogical approaches, developmental timelines, and instructional expertise. The failure to delineate what AI literacy actually entails makes it impossible to evaluate whether or how it can be meaningfully taught.
More troubling still, this vagueness is not likely a transitional phase on the path to conceptual maturity. Rather, it reflects a fundamental incoherence in the discourse. AI is not a singular phenomenon but a diverse range of tools, practices, and implications. To talk about AI literacy without specifying which aspects of AI are at stake (technical, ethical, economic, cognitive, or creative) is to obscure more than it reveals. Curriculum design should be based on well-defined learning goals that can be realistically taught, measured, and assessed in the classroom. Currently, the arguments for AI literacy provide no such practical clarity.
The absence of empirical grounding
Educational initiatives, especially those involving new domains of knowledge or skill, require rigorous empirical research to determine their efficacy. Yet the evidence base for how students engage with, understand, or benefit from AI tools in classroom settings remains strikingly underdeveloped. Preliminary studies in educational technology offer some insights, but they tend to focus on narrow, short-term use cases (e.g. AI-assisted tutoring or adaptive testing) rather than whole curricula; their quality tends to be poor; and their results are mixed at best and harmful at worst. No studies demonstrate that teaching AI literacy in its current amorphous form yields measurable educational or developmental benefits.
This gap in evidence should not be surprising. AI itself is a moving target, subject to rapid technological changes that often outpace academic and curricular cycles. Strategies that appear promising today may be obsolete tomorrow. Without robust, peer-reviewed research that isolates and evaluates the educational impact of particular AI tools or concepts, educators are left in the dark. To mandate AI literacy under these conditions is not innovation; it is guesswork, and it risks diverting time and resources away from foundational literacies that are already under strain in many educational systems.
Bandwagon logic and the illusion of progress
The strongest argument for AI literacy is not empirical but rhetorical: we must do it because others are doing it. This is, in essence, the bandwagon fallacy. The argument takes various forms, e.g. global competitiveness, future readiness, or economic imperative, but always returns to the same basic premise: that not teaching AI will leave students behind. This logic is seductive because it dresses anxiety in the language of progress. It appeals to our fear of obsolescence while promising an apparently easy antidote: teach AI.
However, succumbing to the bandwagon argument is pedagogically reckless. It encourages a reactive rather than reflective approach to curriculum design, one driven by external trends rather than internal coherence. The history of education is littered with cautionary tales of curricular fads, e.g. digital literacies, coding mandates, and 21st-century skills, that were adopted hastily and assessed belatedly. AI literacy is in danger of joining their ranks, particularly if it is imposed before its content, purpose, and pedagogical methods are sufficiently established.
The risk of educational distraction
Another consequence of this premature push is the risk of educational distraction. Educational organisations operate under finite time and resource constraints. To allocate classroom hours, teacher training, and assessment mechanisms to an ill-defined objective is not merely inefficient; it diverts attention from essential educational practices. The fundamental literacies, i.e. analytical and critical thinking, reading comprehension, scientific reasoning, and mathematical problem-solving, remain the bedrock of informed citizenship and intellectual autonomy. These are the very competencies that students will need to evaluate, resist, or reimagine the influence of AI in their lives.
Ironically, the best preparation for an AI-impacted future may not involve teaching about AI at all, but rather doubling down on the timeless skills that allow learners to think clearly, question assumptions, and understand systems. These competencies transcend domains and remain robust in the face of technological change. Teaching students to become self-aware and reflective learners, not just compliant users of current technologies, is a more durable and defensible goal than offering them a grab-bag of AI facts and myths under the banner of AI literacy.
Conclusion
Calls to embed AI literacy into curricula are, at present, more a symptom of societal anxiety and the AI industry’s PR and marketing hype than a product of educational foresight. They reflect the allure of the new rather than the authority of evidence. In the absence of a clear definition of AI literacy, solid empirical evidence, or a sound educational rationale, these initiatives amount to little more than opportunistic AI promotion. The responsible path forward is not to resist AI’s relevance but to resist its reification into educational dogma before the necessary groundwork has been done. In education, as in all domains, good intentions do not compensate for bad reasoning. If we really want to prepare students for a changing world, we should prioritise clarity over trendiness, evidence over assumption, and enduring skills over ephemeral fads and fashions.
Further reading
Analytical and critical attention is only just starting to be paid to the use of AI in educational contexts. For example, this recent position paper points out methodological inconsistencies in recent research claiming learning gains from using ChatGPT:
- Weidlich, J., Gašević, D., Drachsler, H., & Kirschner, P. (2025). ChatGPT in Education: An Effect in Search of a Cause. Journal of Computer Assisted Learning, 41(5), e70105. https://doi.org/10.1111/jcal.70105