Online reaction to my recent opinion piece in Times Higher Education, “The failure of universities to strategically engage with artificial intelligence (AI)”, has been both fierce and illuminating.
Some criticisms were measured and thoughtful; others were reflexive, polemical, or rooted in deeply held beliefs about what universities are – and what they must never become. Taken together, however, they inadvertently reinforce my point: resistance to change in the sector is so ingrained that it has become part of its identity. And that resistance now poses a real threat to the sector’s long-term well-being.
A number of responses centered on definitional nitpicks. Why call higher education a “sector”? Why invoke the “principles of the Enlightenment”? These questions, while valid, highlight a particular challenge of critiquing higher education: it is tempting to get sucked down such rabbit holes and turn away from the bigger question. Why is the sector so reluctant to question its own structures, norms and assumptions?
Elsewhere, critics have claimed that AI is overhyped and could prove to be another phlogiston – an intellectual dead end or a passing fad. Why should universities get involved, they ask. Shouldn’t they resist fashions, as they have done so well in the past?
This argument, popular among academics, invokes the precautionary principle, but in practice it represents an abdication of adaptive responsibility. It assumes that the status quo is safe, neutral, and intrinsically more virtuous than the unknown. Yet universities themselves have long taught that knowledge – and society – advances through research, experimentation and engagement, not through entrenchment.
What makes this reasoning particularly problematic is that it is often advanced most forcefully by those with the least technological knowledge. Many of these critiques demand a “rigorous case” for AI, but such a case is difficult to recognize if one lacks an understanding of data, machine learning, or emerging practices. Contrary to some claims, academic integrity and technology adoption are not mutually exclusive; rather, preserving a meaningful conception of academic integrity now requires an informed understanding of the technologies that challenge it.
Another set of responses cast my argument as morally suspect – an endorsement of extractive digital oligarchies, a capitulation to commodification. This is familiar territory. For some academics, the adoption of technology is indistinguishable from the neoliberal trend they perceive to be hollowing out universities. But such framing, once again, obscures more than it clarifies.
When I noted that banks have adopted AI in a way that universities have not, I was not claiming that banks are paragons of virtue. I was simply pointing out that even the most conservative institutions have managed to reconfigure themselves in response to technological change and the existential crisis it represents. That universities, with their vast intellectual resources and professed commitment to societal progress, are lagging behind should give us pause.
Concerns have also been raised about the harms of AI: the ecological costs, the erosion of critical thinking, the risk of over-reliance. These are important issues that deserve serious attention, but they do not justify strategic disengagement. Universities must explore AI adoption across all areas of institutional practice while also leading the ethical, pedagogical and ecological response to it. Ignoring the transformative possibilities of the technology will not preserve the sanctity of the student experience; it simply cedes leadership to actors outside the academy.
Some reactions to the article were openly dismissive: “naive,” “hyperbolic,” “written by a robot,” “a black plague [of mass adoption].” These comments are emotionally revealing: they suggest that the deeper objection is not so much to the technology as to the perceived threat it poses to identity, expertise, and authority. Such concerns, while understandable, are not evidence against the argument. Rather, they are evidence in its favor.
There were more constructive responses, too. Several noted that universities are more proactively engaged with AI than the article suggests, and I am happy to recognize examples of thought leadership in the sector. But the exceptions are not the rule. Resource constraints, governance gaps, and an unwillingness to challenge existing practices conspire to hinder strategic thinking about how technology aligns with institutional purpose and design. AI is treated as an ad hoc add-on, delegated to committees and integrated into existing systems.
When AI transformation starts with use cases that build on existing logics, rather than pilots that challenge current institutional design, the result is inevitable: more of the same, with the promise of a faster, cheaper solution (if you’re lucky).
Crucially, some commentators pointed to a deeper cultural problem: universities talk about preparing students for the future but rarely treat them as partners in shaping it. This failure of “reverse mentoring” reflects the gatekeeping mentality at the heart of institutional resistance. For some, questioning the logics underlying how universities organize knowledge, learning and governance is not only uncomfortable but sacrilegious. Yet clinging to ritual as though it served some transcendent purpose will not preserve the social value of the university.
The irony is that the quest for truth and understanding is a dynamic process, one that requires constant questioning of existing beliefs and a willingness to revise ideas in light of new evidence. It is this very process that, over many decades, produced AI – a development in which universities played a central role. But when it comes to internal transformation – rethinking curricula, governance, research practices, teaching models – that will and curiosity are mysteriously absent.
I repeat: the biggest threat to higher education is not AI. It is institutional inertia, sustained by reflexive criticism that mistakes resistance for virtue. AI is not the cause of this problem, but it reveals dysfunctions and contradictions that have accumulated over decades.
Ultimately, it is less important whether universities engage with AI enthusiastically or reluctantly than whether they do so strategically, imaginatively, and with a willingness to challenge their own design. Because if they don’t, others will.
Ian Richardson is a faculty member and director of executive education at Stockholm Business School, Stockholm University. With a background in technology media, he is co-founder of Sweden’s national AI for Executives program, which aims to drive understanding and organizational adoption of AI at board level across all sectors and industries.