I have recently become an Honorary Professor of Practice at the School of Society and Environment at Queen Mary University of London, where until recently I was Chair of the governing council. This was a lecture due to be given to academic staff but which sadly had to be postponed. This is what I would have said.
When I chaired the House of Lords Select Committee on AI in 2017-18, we concluded that the UK had a unique opportunity to shape AI for public benefit. More recently, I co-chaired Policy Connect’s inquiry into “Skills in the Age of AI,” which concluded last summer after nine months of evidence from business, academia, and citizens themselves. What emerged reinforces both the urgency and complexity of what we’re discussing today.
And we now have an important new data point on the scale of that urgency. The third HEPI Student Generative AI Survey, published this year, found that 95% of students report using AI in at least one way, and 94% say they use generative AI specifically to help with assessed work. This is no longer a technology on the horizon. It is already operating at the heart of the student experience, in this institution and every other. Any response that treats this as something still to be planned for has already been overtaken by events.
The Government’s New Framework: Promise and Limitations
The Skills England report published last October represents genuine progress. It identifies critical shortages: 26% of AI companies cite a lack of technical skills as a barrier, there is a shortage of data architects, and 7.3 million employed adults lack essential digital skills — projected to become the UK’s largest deficit by 2030.
Importantly, Skills England emphasizes the need for “interdisciplinary professionals” who combine technical knowledge with management, leadership, and communication — “blended skills.” They’ve identified that demand exists for leadership capable of governing emerging technologies and managing change at pace.
But the report operates within the paradigm of skills acquisition — teaching people to use AI tools more effectively within existing frameworks. What it doesn’t address is the structural transformation AI is driving — how educational institutions must respond not just with new content, but with entirely new pedagogical models.
Skills England recognizes we need “blended” professionals with system-level capabilities, but doesn’t grapple with how we develop those capabilities when AI’s very ease threatens to erode the critical thinking that underpins them.
The Employer Perspective: What’s Actually Valued
Recent research from Kingston University provides stark evidence of the gap between what employers need and what graduates provide. Their latest Future Skills report, published in June 2025 in partnership with Nanyang Technological University, reveals that 56% of businesses are now likely to consider skills-based recruitment as the optimal way to modify their hiring practices — a fundamental shift from credential-based to capability-based hiring.
Kingston identified nine core future skills employers value: creative problem-solving, digital competency, being enterprising, a questioning mindset, adaptability, empathy, collaboration, resilience, and self-awareness. Significantly, their 2025 survey shows that every single one of these skills increased in perceived importance since 2023, with digital skills showing the highest growth at 8%. Notice what dominates that list. Digital competency is one of nine. The rest are adaptive capabilities: thinking critically, questioning assumptions, solving problems creatively, adapting as contexts shift.
The World Economic Forum’s 2025 Future of Jobs Report identifies analytical thinking, creative thinking, resilience, flexibility and agility as top skills. LinkedIn Global Talent Trends places adaptability in the top five most in-demand skills globally. The convergence is clear: employers desperately seek meta-capabilities, not just tool proficiency.
Yet there’s a concerning disconnect. Kingston’s research reveals that only 23% of UK businesses anticipate AI will fundamentally change their business model in the next five years — though this represents a 10% increase since 2023. Compare this to East Asian nations where AI and digital skills are uniformly treated as top strategic priorities. This suggests the UK risks lagging behind international competitors in recognizing AI’s transformative potential.
The Friction Paradox: Why Ease Threatens Capability
Recent research raises a profound challenge: the emerging evidence that AI’s ease may be undermining the critical thinking we’re trying to develop.
A 2025 study found a significant negative correlation between frequent AI use and critical-thinking performance, especially among younger users. The mechanism is “cognitive offloading” — when AI provides effortless answers, users stop doing the evaluative reasoning that builds judgment.
This connects to Daniel Kahneman’s framework of thinking fast and slow. AI encourages System 1 thinking — fast, intuitive, effortless responses that bypass critical analysis. AI’s fluency triggers System 1 acceptance: it sounds authoritative, it’s grammatically perfect — so we accept it. But genuine learning requires System 2 thinking — slow, deliberate, analytical engagement that questions assumptions.
The very characteristics that make AI appealing — speed, ease, confident fluency — prevent the effortful System 2 thinking that develops critical capabilities. Productive friction isn’t pedagogical stubbornness; it’s necessary intervention to force System 2 engagement.
An MIT Media Lab experiment found students who relied on ChatGPT performed worse, remembered less, and were less cognitively engaged than peers who wrote without AI. Inside Higher Ed recently declared that “learning requires friction,” identifying productive friction as a guiding principle for AI integration. The distinction matters: friction that merely slows learning is counterproductive, but friction that forces cognitive engagement develops the capabilities employers value.
Compelling corroboration of this argument has recently emerged from research conducted partly here at Queen Mary. Researchers from this institution, alongside colleagues from Erasmus University Rotterdam, the University of Campinas, and McMaster University, studied how medical students learn and make diagnostic decisions when working with AI tools in clinical-style scenarios. They compared four approaches: clinical cases alone; with human feedback; with AI support but no feedback; and with AI support combined with human and AI feedback.
The results speak directly to the friction paradox. Students who combined AI support with human feedback performed best. But those using AI alone not only scored the lowest — they were also the most confident, despite being the least skilled. One of the researchers described this as giving learners a powerful sports car before they have learned how to drive. Without the experience to ask the right questions, students leaned too heavily on the tool and missed key diagnostic nuances — including the critical fact that many AI systems are trained predominantly on populations from the Global North.
The lesson is unambiguous: AI is transformative only when used at the right time, with the right training, and under strong governance. And it has implications well beyond the medical faculty. When the HEPI survey finds that 47% of students say they use AI tools to improve the quality of their work, we should ask what “quality” they are actually measuring — and whether, like those medical students, some are becoming more confident precisely as their independent skills atrophy.
Beyond Skills to Readiness
Our Policy Connect inquiry revealed a troubling gap. Despite massive investment — millions enrolled in AI courses, organisations making training mandatory — only 21% of workers feel “very confident” integrating AI into workflows, and 77% feel lost about how AI connects to career progression.
McKinsey found that while 89% of organisations use AI, only 9% have achieved “AI maturity.” BCG identified the “silicon ceiling” — only 50% of frontline employees regularly use AI tools despite massive investment. Investment in training isn’t translating into meaningful capability development.
The HEPI survey makes the institutional dimension of this gap concrete. While 68% of students believe generative AI skills are essential to thrive in today’s world, fewer than half — just 48% — feel their teaching staff are helping them develop those skills for their future careers. That twenty percentage point gap between what students recognise they need and what they feel they are receiving is a direct measure of the delivery shortfall our institutions must address. Critically, the survey found this gap is widest among Arts and Humanities students — the very cohort most exposed to AI’s disruptive effects on creative industries and least likely to feel their lecturers are equipping them for what is coming.
Three Emerging Patterns
Research from the Centre for Finance, Technology and Entrepreneurship (CFTE) identifies three patterns defining graduates’ trajectories:
Mass Displacement — the gradual erosion of relevance. Entry-level positions providing domain expertise are disappearing. A 2025 Harvard study found sharp declines in junior-level hiring while senior hiring remained flat. The National Foundation for Educational Research projects that up to three million UK jobs in declining occupations could disappear by 2035 due to AI and automation, with administrative, secretarial, customer service and machine operation roles most at risk. The traditional entry-level pathway is contracting as AI automates routine tasks — this isn’t speculative, it’s measurable displacement.
And here’s the additional challenge: remaining positions are increasingly filtered by AI recruitment systems. CV screening, initial assessments, video interview analysis — automated systems make the first cut. Graduates need to understand not just how to work with AI, but how to navigate AI systems judging their employability. The irony is stark: we’re preparing students for careers where AI will decide if they’re qualified to work with AI.
Supercharged Professionals — individuals achieving ten to a hundred times their previous output. Startups reaching scale with forty people rather than four hundred. These aren’t people who’ve merely learned tools — they’ve fundamentally restructured how they work and create value.
Creative Disruptors — those building entirely new systems. This distribution isn’t predetermined. Mass displacement becomes default only when no deliberate action is taken. Universities are where that action must begin.
The Performance Hexagon Framework
What CFTE call “the Performance Hexagon” maps contribution levels: Task Robots who execute when given clear instructions; Problem Solvers who work independently; System Thinkers who design structures solving categories of problems; and Superstars who identify opportunities without direction.
Overlay AI and a pattern emerges. At lower levels — task execution — AI replaces human work. At higher levels, AI amplifies. Problem Solvers find solutions faster. System Thinkers automate structures. Superstars move from ideas to scalable systems at unprecedented speed.
The critical question: are you preparing graduates who can move vertically through this hexagon? The Skills England framework helps with the horizontal. It doesn’t address the vertical movement — developing a questioning mindset, creative problem-solving, adaptability — that determines whether graduates become supercharged or face displacement.
What Future-Proofing Demands
The fundamental divide: who does the thinking?
Future-ready graduates will need three attributes:
Domain expertise — deep understanding of how value is created. AI executes but cannot replace years of tacit knowledge about how industries function.
Technology fluency — structuring workflows around AI, assessing output quality, integrating systems intelligently.
Adaptive capabilities — structured thinking, independent problem-solving, operating in ambiguity. These meta-capabilities — what Kingston identifies as questioning mindset, creative problem-solving, resilience — allow meaningful contribution as landscapes shift. These are precisely the capacities that risk atrophy without designed friction.
Deloitte found 66% of managers believe recent hires are unprepared — they identify the “experience gap” as larger than the skills gap. What’s missing isn’t technical knowledge; it’s judgment, contextual understanding, autonomous operation in ambiguous situations. Employers increasingly hire “new collar” workers with non-traditional backgrounds but strong adaptive capabilities.
The World Economic Forum projects 44% of workers’ core skills will change within five years. Technical skills alone provide insufficient protection.
The Transformation Required
Our inquiry recommended making AI literacy mandatory in the National Curriculum and establishing an AI in Education Advisory Board. For universities, the implications are profound.
First, AI literacy must be embedded across all disciplines. A law graduate needs to understand algorithmic decision-making as thoroughly as a computer science graduate needs to understand data protection. This means designing assessments where students critique AI outputs, identify limitations, and reconcile AI analysis with primary sources.
As Nick Potkalitsky points out, students need to understand why different engagement modes exist: one-shot prompting for speed, chain-of-thought for accuracy, retrieval-augmented generation for grounding in documents. The pedagogical goal isn’t teaching which button to press — it’s shifting from “Does this sound right?” to “How would I check this?” That shift from passive acceptance to active verification is the questioning mindset Kingston identifies and employers need.
This requires embedded practice time, discipline-specific translation, and structures supporting ongoing faculty learning — not, as Potkalitsky notes, “another 2-hour professional development session.”
Second, we must teach judgment. The Post Office Horizon scandal demonstrates the catastrophic cost when professionals cannot challenge automated systems. Graduates need to understand not just how to use AI in diagnosis but how to ensure accountability for it. Not just how to deploy recruitment algorithms but how to audit them for bias. Not just how to use language models but how to grapple with their copyright implications.
Third, assessment must evolve. The HEPI survey found that 65% of students say assessment has already changed significantly in response to AI — up from 56% last year and just 32% the year before. That trajectory tells us the transformation of assessment is already well underway, driven partly by students and partly by institutional response. The question is whether institutions are shaping that change deliberately or simply reacting to it. Can our graduates conduct algorithmic impact assessments? Understand explainable AI? Maintain critical thinking when AI offers seductively fluent answers? The shift toward skills-based hiring is underway — Kingston’s 2025 research shows 56% of employers are now likely to adopt capability-based recruitment, while 65% believe AI will influence how they hire. Our assessments must catch up with that reality.
Scale-to-Density Shift
Before 2022, top startups needed ten years and 500 people to reach significant scale. Today, AI-native startups reach the same milestones in two years with 50 people. In person-years, that is roughly a fiftyfold efficiency gain.
This reveals a shift from scale to talent density — the proportion capable of structural thinking and leading transformation. A small cohort of truly future-ready graduates may contribute more than large numbers trained only in tool usage. This has uncomfortable implications for university business models built around volume.
Enabling Structural Change
Skills England identifies that employers want “bolt-on” training — short, modular options allowing the existing workforce to supplement its learning without multi-year apprenticeships. This signals that current structures are too rigid.
There are proposals for lifelong skills grants providing dedicated funding for continuous education. Some advocate replacing the rigid apprenticeship levy with a flexible skills and training levy that would fund exactly the short courses Skills England identifies employers as demanding.
Our inquiry recommended relaunching Local Digital Skills Partnerships. These partnerships, with a modest investment of £75,000 per region, upskilled 12,000 workers and reduced digital exclusion by 18%. The model worked through genuine collaboration.
Building Trust Through Transparency
Throughout my work on AI regulation, including my Public Authority Algorithmic and Automated Decision-Making Systems Bill, I’ve emphasized that public trust is fundamental. Our inquiry found only 33 UK public sector AI projects have published transparency records.
The conventional wisdom that regulation stifles innovation needs turning on its head. Appropriate regulation isn’t just restricting harmful practices — it’s key to driving adoption. Many AI adopters hesitate due to uncertainties about liability and ethical boundaries. Clear regulatory frameworks can accelerate adoption by providing clarity and confidence.
Our graduates will make decisions about algorithmic systems affecting millions. If they haven’t been taught to prioritize explainability, fairness, accountability, they’ll erode trust. Well-designed regulation catalyzes innovation, just as environmental regulations spurred cleaner technologies.
Digital Exclusion
Our inquiry found 19 million people in the UK face digital poverty. Skills England adds that 7.3 million employed adults lack essential workplace digital skills, projected to become the UK’s largest deficit by 2030.
But the HEPI survey adds a dimension that should concern universities specifically. Students from higher socioeconomic households are measurably more likely to be using AI tools — including coding assistants and data analysis platforms — than those from lower socioeconomic backgrounds. The gap persists even where free versions of tools are available, which means cost alone does not explain it. Universities cannot assume that providing access to AI tools is sufficient to level the playing field. What will be required is structured support, embedded training, and deliberate attention to which students are actually developing AI fluency rather than merely having nominal access to the tools. The skills divide is becoming a new dimension of educational inequality, and it is forming now, in our institutions, among our current students.
What This Means
For academic leaders: Move beyond incremental adjustments. Invest in faculty development — not just in using AI tools, but in designing learning experiences that maintain productive friction. Create assessment frameworks that capture adaptive capabilities. Build genuine partnerships. Engage with policy discussions about lifelong learning funding and flexible skills levies.
For business representatives: Share concrete insights about what you value — Kingston’s nine core skills provide a framework. Provide meaningful project opportunities. Work with us to develop sector-specific AI Skills Accelerators aligned with the Industrial Strategy.
For both: Recognize that Skills England, while useful, provides tools for navigation within the existing model. We need transformation of the model itself — preparing graduates to design the structures that come next, with critical thinking capacities that only come from wrestling with difficult problems.
Conclusion
Skills England estimates AI adoption could boost the UK economy by £400 billion by 2030. But that assumes we get skills development right.
Kingston University’s research shows we’re moving in the right direction but also reveals we may be underestimating the challenge, with only 23% anticipating fundamental business model change from AI when international competitors are treating this as a top strategic priority.
Queen Mary has always combined academic excellence with social purpose. And as the research emerging from this institution — including on how AI overconfidence undermines clinical diagnostic skill — now demonstrates, we are also building the evidence base that should inform how every university approaches these questions. The UK can lead in AI — not through fastest adoption or lightest regulation, but through the most thoughtful, ethical, human-centred approach. That leadership must begin in universities.
The transformation is underway. The patterns — mass displacement, supercharged professionals, creative disruptors — are forming now. With intellectual clarity, institutional courage, and genuine collaboration, we can ensure graduates don’t just survive this transformation — they lead it.
But we must be honest about the depth of change required. Adding AI modules won’t suffice. Providing unfettered AI access without pedagogical friction won’t suffice. Even the Skills England framework won’t suffice.
We need to fundamentally rethink professional capability development in an age where AI’s ease paradoxically threatens the cognitive capacities — that questioning mindset, that creative problem-solving — we’re trying to develop.
That’s the challenge before us. And it’s one we cannot afford to meet with unambitious thinking.
Thank you, and I look forward to our discussion.