From Skills to Impact – Assessing, Showcasing and Shaping a Future-Ready Graduate Profile

 

I have recently become an Honorary Professor of Practice at the School of Society and Environment at Queen Mary University of London, where until recently I was Chair of the governing council. This lecture was due to be given to academic staff but sadly had to be postponed. This is what I would have said.

When I chaired the House of Lords Select Committee on AI in 2017-18, we concluded that the UK had a unique opportunity to shape AI for public benefit. More recently, I co-chaired Policy Connect's inquiry into "Skills in the Age of AI," which concluded last summer after nine months of evidence from business, academia, and citizens themselves. What emerged reinforces both the urgency and complexity of what we're discussing today.

And we now have an important new data point on the scale of that urgency. The third HEPI Student Generative AI Survey, published this year, found that 95% of students report using AI in at least one way, and 94% say they use generative AI specifically to help with assessed work. This is no longer a technology on the horizon. It is already operating at the heart of the student experience, in this institution and every other. Any response that treats this as something still to be planned for has already been overtaken by events.

The Government's New Framework: Promise and Limitations

The Skills England report published last October represents genuine progress. It identifies critical shortages: 26% of AI companies cite a lack of technical skills as a barrier, there is a gap in data architect roles, and 7.3 million employed adults lack essential digital skills — projected to become the UK's largest deficit by 2030.

Importantly, Skills England emphasizes the need for "interdisciplinary professionals" who combine technical knowledge with management, leadership, and communication — "blended skills." They've identified that demand exists for leadership capable of governing emerging technologies and managing change at pace.

But the report operates within the paradigm of skills acquisition — teaching people to use AI tools more effectively within existing frameworks. What it doesn't address is the structural transformation AI is driving — how educational institutions must respond not just with new content, but with entirely new pedagogical models.

Skills England recognizes we need "blended" professionals with system-level capabilities, but doesn't grapple with how we develop those capabilities when AI's very ease threatens to erode the critical thinking that underpins them.

The Employer Perspective: What's Actually Valued

Recent research from Kingston University provides stark evidence of the gap between what employers need and what graduates provide. Their latest Future Skills report, published in June 2025 in partnership with Nanyang Technological University, reveals that 56% of businesses are now likely to consider skills-based recruitment as the optimal way to modify their hiring practices — a fundamental shift from credential-based to capability-based hiring.

Kingston identified nine core future skills employers value: creative problem-solving, digital competency, being enterprising, a questioning mindset, adaptability, empathy, collaboration, resilience, and self-awareness. Significantly, their 2025 survey shows that every single one of these skills increased in perceived importance since 2023, with digital skills showing the highest growth at 8%. Notice what dominates that list. Digital competency is one of nine. The rest are adaptive capabilities: thinking critically, questioning assumptions, solving problems creatively, adapting as contexts shift.

The World Economic Forum's 2025 Future of Jobs Report identifies analytical thinking, creative thinking, resilience, flexibility and agility as top skills. LinkedIn Global Talent Trends places adaptability in the top five most in-demand skills globally. The convergence is clear: employers desperately seek meta-capabilities, not just tool proficiency.

Yet there's a concerning disconnect. Kingston's research reveals that only 23% of UK businesses anticipate AI will fundamentally change their business model in the next five years — though this represents a 10% increase since 2023. Compare this to East Asian nations where AI and digital skills are uniformly treated as top strategic priorities. This suggests the UK risks lagging behind international competitors in recognizing AI's transformative potential.

The Friction Paradox: Why Ease Threatens Capability

Recent research raises a profound challenge: the emerging evidence that AI's ease may be undermining the critical thinking we're trying to develop.

A 2025 study found a significant negative correlation between frequent AI use and critical-thinking performance, especially among younger users. The mechanism is "cognitive offloading" — when AI provides effortless answers, users stop doing the evaluative reasoning that builds judgment.

This connects to Daniel Kahneman's framework of thinking fast and slow. AI encourages System 1 thinking — fast, intuitive, effortless responses that bypass critical analysis. AI's fluency triggers System 1 acceptance: it sounds authoritative, it's grammatically perfect — so we accept it. But genuine learning requires System 2 thinking — slow, deliberate, analytical engagement that questions assumptions.

The very characteristics that make AI appealing — speed, ease, confident fluency — prevent the effortful System 2 thinking that develops critical capabilities. Productive friction isn't pedagogical stubbornness; it's necessary intervention to force System 2 engagement.

An MIT Media Lab experiment found students who relied on ChatGPT performed worse, remembered less, and were less cognitively engaged than peers who wrote without AI. Inside Higher Ed recently declared that "learning requires friction," identifying productive friction as a guiding principle for AI integration. The distinction matters: friction that merely slows learning is counterproductive, but friction that forces cognitive engagement develops the capabilities employers value.

Compelling corroboration of this argument has recently emerged from research conducted partly here at Queen Mary. Researchers from this institution, alongside colleagues from Erasmus University Rotterdam, the University of Campinas, and McMaster University, studied how medical students learn and make diagnostic decisions when working with AI tools in clinical-style scenarios. They compared four approaches: clinical cases alone; with human feedback; with AI support but no feedback; and with AI support combined with human and AI feedback.

The results speak directly to the friction paradox. Students who combined AI support with human feedback performed best. But those using AI alone not only scored the lowest — they were also the most confident, despite being the least skilled. One of the researchers described this as giving learners a powerful sports car before they have learned how to drive. Without the experience to ask the right questions, students leaned too heavily on the tool and missed key diagnostic nuances — including the critical fact that many AI systems are trained predominantly on populations from the Global North.

The lesson is unambiguous: AI is transformative only when used at the right time, with the right training, and under strong governance. And it has implications well beyond the medical faculty. When the HEPI survey finds that 47% of students say they use AI tools to improve the quality of their work, we should ask what "quality" they are actually measuring — and whether, like those medical students, some are becoming more confident precisely as their independent skills atrophy.

Beyond Skills to Readiness

Our Policy Connect inquiry revealed a troubling gap. Despite massive investment — millions enrolled in AI courses, organisations making training mandatory — only 21% of workers feel "very confident" integrating AI into workflows, and 77% feel lost about how AI connects to career progression.

McKinsey found that while 89% of organizations use AI, only 9% have achieved "AI maturity." BCG identified the "silicon ceiling" — only 50% of frontline employees regularly use AI tools despite massive investment. Investment in training isn't translating into meaningful capability development.

The HEPI survey makes the institutional dimension of this gap concrete. While 68% of students believe generative AI skills are essential to thrive in today's world, fewer than half — just 48% — feel their teaching staff are helping them develop those skills for their future careers. That twenty percentage point gap between what students recognise they need and what they feel they are receiving is a direct measure of the delivery shortfall our institutions must address. Critically, the survey found this gap is widest among Arts and Humanities students — the very cohort most exposed to AI's disruptive effects on creative industries and least likely to feel their lecturers are equipping them for what is coming.

Three Emerging Patterns

Research from the Centre for Finance, Technology and Entrepreneurship identifies three patterns defining graduates' trajectories:

Mass Displacement — the gradual erosion of relevance. Entry-level positions providing domain expertise are disappearing. A 2025 Harvard study found sharp declines in junior-level hiring while senior hiring remained flat. The National Foundation for Educational Research projects that up to three million UK jobs in declining occupations could disappear by 2035 due to AI and automation, with administrative, secretarial, customer service and machine operation roles most at risk. The traditional entry-level pathway is contracting as AI automates routine tasks — this isn't speculative, it's measurable displacement.

And here's the additional challenge: remaining positions are increasingly filtered by AI recruitment systems. CV screening, initial assessments, video interview analysis — automated systems make the first cut. Graduates need to understand not just how to work with AI, but how to navigate AI systems judging their employability. The irony is stark: we're preparing students for careers where AI will decide if they're qualified to work with AI.

Supercharged Professionals — individuals achieving ten to hundred times previous output. Startups reaching scale with forty people rather than four hundred. These aren't people who've merely learned tools — they've fundamentally restructured how they work and create value.

Creative Disruptors — those building entirely new systems. This distribution isn't predetermined. Mass displacement becomes default only when no deliberate action is taken. Universities are where that action must begin.

The Performance Hexagon Framework

What CFTE call "the Performance Hexagon" maps contribution levels: Task Robots who execute when given clear instructions; Problem Solvers who work independently; System Thinkers who design structures solving categories of problems; and Superstars who identify opportunities without direction.

Overlay AI and a pattern emerges. At lower levels — task execution — AI replaces human work. At higher levels, AI amplifies. Problem Solvers find solutions faster. System Thinkers automate structures. Superstars move from ideas to scalable systems at unprecedented speed.

The critical question: are you preparing graduates who can move vertically through this hexagon? The Skills England framework helps with the horizontal. It doesn't address the vertical movement — developing questioning mindset, creative problem-solving, adaptability — that determines whether graduates become supercharged or face displacement.

What Future-Proofing Demands

The fundamental divide: who does the thinking?

Future-ready graduates will need three attributes:

Domain expertise — deep understanding of how value is created. AI executes but cannot replace years of tacit knowledge about how industries function.

Technology fluency — structuring workflows around AI, assessing output quality, integrating systems intelligently.

Adaptive capabilities — structured thinking, independent problem-solving, operating in ambiguity. These meta-capabilities — what Kingston identifies as questioning mindset, creative problem-solving, resilience — allow meaningful contribution as landscapes shift. These are precisely the capacities that risk atrophy without designed friction.

Deloitte found 66% of managers believe recent hires are unprepared — they identify the "experience gap" as larger than the skills gap. What's missing isn't technical knowledge; it's judgment, contextual understanding, autonomous operation in ambiguous situations. Employers increasingly hire "new collar" workers with non-traditional backgrounds but strong adaptive capabilities.

The World Economic Forum projects 44% of workers' core skills will change within five years. Technical skills alone provide insufficient protection.

The Transformation Required

Our inquiry recommended making AI literacy mandatory in the National Curriculum and establishing an AI in Education Advisory Board. For universities, the implications are profound.

First, AI literacy must be embedded across all disciplines. A law graduate needs to understand algorithmic decision-making as thoroughly as a computer science graduate needs data protection. This means designing assessments where students critique AI outputs, identify limitations, and reconcile AI analysis with primary sources.

As Nick Potkalitsky points out, students need to understand why different engagement modes exist: one-shot prompting for speed, chain-of-thought for accuracy, retrieval-augmented generation for grounding in documents. The pedagogical goal isn't teaching which button to press — it's shifting from "Does this sound right?" to "How would I check this?" That shift from passive acceptance to active verification is the questioning mindset Kingston identifies and employers need.

This requires embedded practice time, discipline-specific translation, and structures supporting ongoing faculty learning — not, as Potkalitsky notes, "another 2-hour professional development session."
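To make the distinction between those engagement modes concrete, here is a minimal sketch of how the same question might be framed three different ways. It is purely illustrative: the helper function call_model is a hypothetical placeholder rather than any vendor's actual API, and the prompts are invented for the example.

```python
# Illustrative sketch only: three ways a student might frame the same question
# to a generative AI tool. call_model is a hypothetical placeholder, not a
# real API; substitute whichever model interface is actually in use.

def call_model(prompt: str) -> str:
    """Placeholder for a call to a generative AI system (assumption, not a real API)."""
    raise NotImplementedError

question = "Summarise the main criticisms of the 2018 House of Lords AI report."

# 1. One-shot prompting: fast, but invites passive acceptance of the answer.
one_shot = question

# 2. Chain-of-thought prompting: asks the model to expose its reasoning,
#    giving the student something to check rather than just a conclusion.
chain_of_thought = (
    question
    + "\nWork through your reasoning step by step before giving a final summary."
)

# 3. Retrieval-augmented prompting: grounds the answer in documents the student
#    has actually read, so every claim can be traced back to a source.
sources = [
    "Extract from 'AI in the UK: Ready, Willing and Able?' (2018) ...",
    "Extract from a published critique of the report ...",
]
retrieval_augmented = (
    "Using ONLY the extracts below, " + question + "\n\n"
    + "\n\n".join(f"[Source {i + 1}] {text}" for i, text in enumerate(sources))
    + "\n\nCite the source number for every claim you make."
)
```

The point of an exercise like this is not the code itself but the habit it builds: each framing changes what the student can verify, which is exactly the shift from "Does this sound right?" to "How would I check this?"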

Second, we must teach judgment. The Post Office Horizon scandal demonstrates the catastrophic cost when professionals cannot challenge automated systems. Graduates need to understand not just using AI in diagnosis but ensuring accountability. Not just deploying recruitment algorithms but auditing for bias. Not just using language models but understanding copyright implications.

Third, assessment must evolve. The HEPI survey found that 65% of students say assessment has already changed significantly in response to AI — up from 56% last year and just 32% the year before. That trajectory tells us the transformation of assessment is already well underway, driven partly by students and partly by institutional response. The question is whether institutions are shaping that change deliberately or simply reacting to it. Can our graduates conduct algorithmic impact assessments? Understand explainable AI? Maintain critical thinking when AI offers seductively fluent answers? The shift toward skills-based hiring is underway — Kingston's 2025 research shows 56% of employers are now likely to adopt capability-based recruitment, while 65% believe AI will influence how they hire. Our assessments must catch up with that reality.

Scale-to-Density Shift

Before 2022, top startups needed ten years and 500 people to reach significant scale. Today, AI-native startups reach the same milestones in two years with 50 people. In person-years, that is 5,000 against 100: roughly a fiftyfold efficiency gain.

This reveals a shift from scale to talent density — the proportion capable of structural thinking and leading transformation. A small cohort of truly future-ready graduates may contribute more than large numbers trained only in tool usage. This has uncomfortable implications for university business models built around volume.

Enabling Structural Change

Skills England identifies that employers want "bolt-on" training — short, modular options allowing the existing workforce to supplement its learning without multi-year apprenticeships. This signals that current structures are too rigid.

There are proposals for lifelong skills grants providing dedicated funding for continuous education. Some advocate replacing the rigid apprenticeship levy with a flexible skills and training levy funding exactly the short courses Skills England identifies employers demanding.

Our inquiry recommended relaunching Local Digital Skills Partnerships. These partnerships, with a modest £75,000-per-region investment, upskilled 12,000 workers and reduced digital exclusion by 18%. The model worked through genuine collaboration.

Building Trust Through Transparency

Throughout my work on AI regulation, including my Public Authority Algorithmic and Automated Decision-Making Systems Bill, I've emphasized that public trust is fundamental. Our inquiry found only 33 UK public sector AI projects have published transparency records.

The conventional wisdom that regulation stifles innovation needs turning on its head. Appropriate regulation isn't just restricting harmful practices — it's key to driving adoption. Many AI adopters hesitate due to uncertainties about liability and ethical boundaries. Clear regulatory frameworks can accelerate adoption by providing clarity and confidence.

Our graduates will make decisions about algorithmic systems affecting millions. If they haven't been taught to prioritize explainability, fairness, accountability, they'll erode trust. Well-designed regulation catalyzes innovation, just as environmental regulations spurred cleaner technologies.

Digital Exclusion

Our inquiry found 19 million people in the UK face digital poverty. Skills England adds that 7.3 million employed adults lack essential workplace digital skills, projected to become the UK's largest deficit by 2030.

But the HEPI survey adds a dimension that should concern universities specifically. Students from higher socioeconomic households are measurably more likely to be using AI tools — including coding assistants and data analysis platforms — than those from lower socioeconomic backgrounds. The gap persists even where free versions of tools are available, which means cost alone does not explain it. Universities cannot assume that providing access to AI tools is sufficient to level the playing field. Structured support, embedded training, and deliberate attention to which students are actually developing AI fluency, rather than merely having nominal access to the tools, will all be required. The skills divide is becoming a new dimension of educational inequality, and it is forming now, in our institutions, among our current students.

What This Means

For academic leaders: Move beyond incremental adjustments. Invest in faculty development — not just using AI tools, but designing experiences maintaining productive friction. Create assessment frameworks capturing adaptive capabilities. Build genuine partnerships. Engage with policy discussions about lifelong learning funding and flexible skills levies.

For business representatives: Share concrete insights about what you value — Kingston's nine core skills provide a framework. Provide meaningful project opportunities. Work with us to develop sector-specific AI Skills Accelerators aligned with the Industrial Strategy.

For both: Recognize that Skills England, while useful, provides tools for navigation within the existing model. We need transformation of the model itself — preparing graduates to design the structures that come next, with critical thinking capacities that only come from wrestling with difficult problems.

Conclusion

Skills England estimates AI adoption could boost the UK economy by £400 billion by 2030. But that assumes we get skills development right.

Kingston University's research shows we're moving in the right direction but also reveals we may be underestimating the challenge, with only 23% anticipating fundamental business model change from AI when international competitors are treating this as a top strategic priority.

Queen Mary has always combined academic excellence with social purpose. And as the research emerging from this institution — including on how AI overconfidence undermines clinical diagnostic skill — now demonstrates, we are also building the evidence base that should inform how every university approaches these questions. The UK can lead in AI — not through fastest adoption or lightest regulation, but through the most thoughtful, ethical, human-centred approach. That leadership must begin in universities.

The transformation is underway. The patterns — mass displacement, supercharged professionals, creative disruptors — are forming now. With intellectual clarity, institutional courage, and genuine collaboration, we can ensure graduates don't just survive this transformation — they lead it.

But we must be honest about the depth of change required. Adding AI modules won't suffice. Providing unfettered AI access without pedagogical friction won't suffice. Even the Skills England framework won't suffice.

We need to fundamentally rethink professional capability development in an age where AI's ease paradoxically threatens the cognitive capacities — that questioning mindset, that creative problem-solving — we're trying to develop.

That's the challenge before us. And it's one we cannot afford to meet with unambitious thinking.

Thank you, and I look forward to our discussion.


The Use of AI in Society and the Ethics surrounding this Technology 

I recently took up the role of Honorary Professor of Practice at Queen Mary University of London, having served as Chair of its Governing Council for eight years. A great privilege. This is the lecture I recently gave to students in the first year of the new Applied AI BSc programme.

When we started the House of Lords Select Committee's special inquiry on Artificial Intelligence back in 2017, I had no idea we were standing at the threshold of one of the most extensive technological transformations in human history. We knew AI mattered. We understood it had economic potential. But I don't think any of us truly grasped just how rapidly this technology would reshape every aspect of our lives - from the mundane decisions about what we watch on streaming services to the profound questions about who gets a mortgage, who receives medical treatment, what jobs are available and, increasingly, who lives or dies in conflict zones.

Eight years later, as I look back at our report "AI in the UK: Ready, Willing and Able?" and forward to where we are now, the question is no longer whether we need to regulate AI. The question is whether we can regulate it effectively before the costs of inaction become unsustainable.

When our Select Committee published its findings in April 2018, we proposed five fundamental principles:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

These principles were not revolutionary. They drew heavily on the OECD's work and on liberal democratic values. What was revolutionary was the context: what we were trying to articulate was a comprehensive ethical framework for a technology that didn't yet fully exist.

As I wrote in my book "Living with the Algorithm: Servant or Master?", the central question we must answer is deceptively simple: how do we ensure that AI remains our servant and does not become our master?

But principles without enforcement mechanisms are merely aspirations. And aspirations, however noble, do not prevent algorithmic discrimination. They do not stop the deployment of facial recognition systems that misidentify people of colour at alarming rates. They do not protect workers from unfair automated hiring decisions. And they certainly do not prevent the kinds of catastrophic failures we saw with the UK Post Office Horizon scandal - a tragedy that should serve as a warning to us all about what happens when we allow complex automated systems to operate without transparency, accountability, or effective challenge mechanisms.

Too often in the UK, we legislate when the damage has already been done. When it comes to protecting citizens and their interactions with new technologies, we need to be proactive, not reactive. We cannot risk another Horizon scandal.

This is why I introduced the Public Authority Algorithmic and Automated Decision-Making Systems Bill. If "computer says no" to a benefit decision, an immigration decision, or any other significant automated determination, citizens must have the right to understand why that happened and to challenge it effectively. We need automatic logging capabilities, transparent procurement standards, and independent dispute resolution mechanisms. These are not burdens on innovation, they are the prerequisites for public trust.

AI, however, as we all know, respects no borders, so the international dimension is crucial.

In the past two years, we've witnessed an extraordinary flowering of regulatory approaches worldwide:

The EU AI Act represents the most comprehensive attempt yet to regulate AI through binding legislation. Its risk-based approach, with prohibited practices at one end and minimal-risk systems at the other, provides a clear framework. But it also demonstrates the challenges of regulating a rapidly evolving technology - by the time the Act was finalised, the AI landscape had already shifted dramatically with the emergence of generative AI systems like ChatGPT at the end of 2022. 

The Council of Europe Framework Convention on AI, which opened for signature in September 2024, takes a different approach. As a framework treaty, it establishes broad commitments while leaving implementation details to national legislation. I was pleased to contribute to this process, and I believe the Convention represents an important step toward international convergence on AI governance.

The Convention's strength lies in its inclusivity - bringing together not just the 46 Council of Europe member states but also countries like the United States, Canada, Japan and others that are Council of Europe observer members.

Other jurisdictions are also moving forward. China has implemented specific regulations on algorithmic recommendation and generative AI. The United States is pursuing a deregulatory, hands-off federal approach to AI, but with an increasingly contentious federal-state battle playing out — with various states like Colorado and California passing their own AI legislation, and the Trump administration actively using litigation to push back against them.

South Korea has already enacted comprehensive AI legislation, and countries like Canada and Australia are moving fast to develop their own frameworks — the global momentum towards regulation is unmistakeable.

But here's the crucial question: are we moving toward convergence or fragmentation? And does it matter?

I would argue that it matters enormously. The tech companies developing these systems are global actors. Data flows across borders. AI systems trained in one jurisdiction are deployed in others. If we end up with radically different regulatory standards, we risk creating compliance nightmares that genuinely could stifle innovation, while simultaneously failing to protect citizens because regulatory arbitrage allows companies to exploit gaps between jurisdictions.

This brings me to what I call the innovation paradox - and here I want to challenge conventional wisdom.

We are constantly told that regulation stifles innovation. The tech industry has been remarkably successful in propagating this narrative. But I want to turn that conventional wisdom on its head.

Responsible regulation does not stifle innovation - it channels it, focuses it, and ultimately sustains it by building and maintaining public trust. Without trust in the technology being developed, there is no sustainable innovation.

Consider the pharmaceutical industry. We don't allow drug companies to release new medicines without rigorous testing and approval processes. These regulations don't prevent pharmaceutical innovation - they ensure that innovation is safe and effective. Why should AI be any different when it can have equally profound impacts on human health, liberty, and wellbeing?

Or consider the financial services sector. After the 2008 financial crisis, we strengthened regulations around banking and financial products. Has this killed innovation in fintech? Quite the opposite. The UK has one of the world's most vibrant fintech sectors precisely because clear, well-designed regulation creates certainty and builds confidence.

The problem is not regulation per se - it's poorly designed regulation that is either too prescriptive, trying to specify technical solutions that quickly become obsolete, or too vague, providing no real guidance to either developers or users.

What we need is agile, principles-based regulation that sets clear objectives - transparency, accountability, fairness, human rights protection - while allowing flexibility in how those objectives are achieved. This is the approach we advocated in our original Select Committee report, and I remain convinced it's the right one.

From Theory to Practice: Implementation Challenges

So how do we move from principles to practice? How do we implement these frameworks effectively?

Let me identify four critical challenges and some potential solutions:

1. The Technical Challenge: Making AI Explainable and Auditable

One of the fundamental problems with AI systems, particularly deep learning models, is their opacity. Even the developers often cannot fully explain why a system made a particular decision. This is a serious problem when those decisions affect people's lives.

We need to mandate that AI systems deployed in high-stakes contexts - criminal justice, healthcare, employment, access to public services - must be auditable and, to the extent technically possible, explainable. This means requiring comprehensive documentation, logging of decisions, and the ability to conduct meaningful algorithmic audits.

The EU AI Act's requirements for technical documentation and transparency are a good start, but we need to go further in developing standardised audit methodologies and potentially creating independent AI audit institutions, perhaps through the evolution of the AI Security and Safety Institutes already established.
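To give a concrete, purely illustrative sense of what "logging of decisions" could mean in practice, here is a minimal sketch of an audit-log record for a single automated determination. The field names are assumptions chosen for illustration, not a published standard or any existing statutory schema.

```python
# Illustrative sketch only: a minimal audit-log record for one automated
# decision, of the kind an algorithmic transparency duty might require.
# Field names are invented for illustration, not a published standard.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AutomatedDecisionRecord:
    system_name: str            # which deployed system made the decision
    model_version: str          # exact version, so the decision can be reproduced
    decision: str               # the outcome communicated to the affected person
    inputs_summary: dict        # the inputs actually used (or a reference to them)
    explanation: str            # the human-readable reason given for the outcome
    human_reviewer: str | None  # who, if anyone, reviewed or overrode it
    appeal_route: str           # how the affected person can challenge the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AutomatedDecisionRecord(
    system_name="benefit-eligibility-screener",
    model_version="2026.01.3",
    decision="flagged for manual review",
    inputs_summary={"claim_type": "housing", "risk_score": 0.82},
    explanation="Risk score above the 0.8 threshold set out in the published policy.",
    human_reviewer=None,
    appeal_route="Independent dispute resolution service, reference on the decision letter",
)

# An append-only log of records like this is what turns "explainability"
# from an aspiration into something an auditor or a citizen can actually use.
print(json.dumps(asdict(record), indent=2))
```

However such a duty is implemented, the essential properties are the same: the record is created automatically at the moment of decision, it cannot be silently amended afterwards, and it contains enough context for an independent auditor or an affected citizen to reconstruct why the system did what it did.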

2. The Skills Challenge: Building AI Literacy Across Society

As I noted in "Living with the Algorithm", we face a serious digital skills gap. If citizens don't understand even the basics of how AI systems work, how can they exercise their rights effectively? How can they challenge unfair decisions? How can they participate meaningfully in democratic debates about AI governance?

We need a massive investment in digital literacy at all levels - from primary education through to lifelong learning. But this isn't just about teaching people to code. It's about creating critical consumers of technology who can ask the right questions and demand accountability.

Policymakers and regulators also need to upskill dramatically. If we are to decide on regulation we need to understand the technology. That’s why, 10 years ago, I co-founded the All-Party Parliamentary AI Group. It’s still a work in progress!

3. The Enforcement Challenge: Who Guards the Guardians?

Principles and laws are meaningless without effective enforcement. But who enforces AI regulations, and how?

The UK’s approach of distributing responsibility across existing sector regulators — the ICO, Ofcom, the CMA, the FCA — without giving them new powers or adequate resources is, frankly, inadequate. It creates gaps, inconsistencies, and confusion.

We need either a dedicated AI regulator or a much more coherent, properly resourced framework for coordination among existing regulators. The recent establishment of various advisory bodies — the AI Security Institute, the Central AI Risk Function, the DRCF’s AI and Digital Hub, the Centre for Data Ethics and Innovation, and the Regulatory Innovation Office — is welcome, and each has a role to play. But advisory and coordinative powers are not enough. None of them has genuine enforcement teeth. Ahead of AGI or superintelligence, we need binding legislation, not just advisory frameworks.

We also need meaningful penalties for non-compliance. The EU's approach of fines up to 6% of global turnover for the most serious violations provides real deterrence. Regulatory enforcement must have teeth. We should - and this may be more controversial - think about developer codes of conduct too, a kind of Hippocratic oath for digital engineers. 

4. The Democratic Challenge: Keeping Pace with Technological Change

Perhaps the most fundamental challenge is the mismatch between the pace of technological change and the pace of democratic governance.

AI capabilities are advancing exponentially. Parliamentary processes move more slowly.  By the time we've debated and passed legislation, the technology has moved on. How do we solve this problem?

I don't have a perfect answer, but I believe part of the solution lies in creating more agile regulatory mechanisms - frameworks that can be updated through secondary legislation or regulatory guidance without requiring the full parliamentary process every time.

We also need better mechanisms for ongoing dialogue between technologists, policymakers, civil society, and the public. The OECD's Global Parliamentary Network on AI, which I helped found, is one example of how we can facilitate these conversations across borders.

Let me highlight three areas where I believe urgent action is needed:

Governments cannot credibly regulate private sector AI use if they cannot ensure responsible AI use in the public sector. Yet we've seen repeated failures - from algorithmic bias in benefits systems to questionable uses of facial recognition technology.

My Public Authority Algorithmic and Automated Decision-Making Systems Bill, a Private Member's Bill, addressed this directly. We need mandatory impact assessments, bias testing, transparency requirements, and robust appeal mechanisms for all significant public sector AI deployments. Government should be leading by example, not lagging behind.

Then - as I've argued consistently - we need a duty of transparency regarding the use of copyrighted material in training AI systems.

The tech companies' position that they should be able to scrape and use any content available online without permission or compensation is untenable. It would destroy the creative industries and undermine the very concept of intellectual property. We need clear legal frameworks that balance innovation with creators' rights, and that means mandatory transparency about training data, licensing requirements, and fair compensation mechanisms.

Finally, we must address what may be the most profound ethical question: autonomous weapons systems capable of making kill decisions without meaningful human control.

I initiated and served on the Special Inquiry Select Committee into AI in Weapons Systems, and our report "Proceed with Caution" was clear: we must establish and enforce international prohibitions on fully autonomous weapons. The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence.

This isn't just about the distant future. These systems are being developed and deployed now as we speak in the US-Iran war and in Ukraine. We need urgent international action to prevent an AI arms race that could make the nuclear arms race look tame by comparison.

So where do we go from here?

First, we need international convergence on core principles. The Council of Europe Framework Convention provides a foundation. We should build on it, working toward greater harmonisation of standards while respecting different legal traditions and governance models.

Second, we need to create interoperable regulatory frameworks. This doesn't mean identical regulations everywhere, but it does mean ensuring that compliance with one jurisdiction's requirements substantially satisfies others. Technical standards bodies like ISO and the US National Institute of Standards and Technology have a crucial role to play here.

Third, we need to invest in enforcement capacity. Regulatory bodies need resources, expertise, and powers adequate to the challenge. This includes funding for AI audits, algorithmic impact assessments, and compliance monitoring.

Fourth, we need to mandate transparency and explainability. High-risk AI systems should be required to provide clear documentation, enable meaningful audits, and offer explanations for their decisions that are comprehensible to affected individuals.

Fifth, we need meaningful public participation. AI governance cannot be left to tech companies and government officials. We need robust mechanisms for civil society engagement, public consultation, and democratic oversight.

Sixth, we need to get serious about education and skills. Digital literacy must become as fundamental as reading, writing, and arithmetic. And we need specialised training programs to ensure that policymakers, regulators, judges, and other key decision-makers understand the technology they're governing.

Looking to the Horizon: The Challenges We Cannot Afford to Defer

Before I conclude, I want to turn from the governance frameworks we are building today to the relatively new threats that are already emerging on the horizon - because if we wait until they are fully upon us, we will have left it too late.

The first is the question of agentic AI. We are already moving beyond systems that respond to prompts into systems that act autonomously - planning, executing, and adapting across complex tasks with minimal human intervention. These agentic systems will operate in our financial markets, our healthcare systems, our critical national infrastructure. The question of meaningful human oversight - who is responsible when an autonomous AI agent causes harm, and how that accountability is enforced in real time - is not a theoretical problem. It’s an immediate regulatory gap. Our existing frameworks were designed for tools, not for actors. We urgently need to revisit them.

The second is the trajectory toward artificial general intelligence. I want to be direct about this, because there is a tendency in parliamentary debate to treat AGI as a distant science-fiction scenario rather than a live policy question. It is not. The leading AI developers themselves publish timelines measured in years, not decades. The implications - for employment, for democratic governance, for the balance of power between states and corporations, and for human autonomy itself - are of an order of magnitude that dwarfs anything we have previously legislated for. Advisory frameworks and voluntary commitments are wholly inadequate to this challenge. We need binding international architecture, we need it now, and we need it to have teeth. The Council of Europe Framework Convention is a beginning, but only a beginning. The lesson of the nuclear age is that we should have moved faster on governance than we did - and we cannot afford to repeat that mistake.

The third - and in some respects the most urgent - is autonomous weapons. I have already referred to the work of the Special Inquiry Select Committee and our report "Proceed with Caution". But I want to be blunt: the pace of international negotiation on lethal autonomous weapons systems is not keeping up with the pace of their development and deployment. We are watching states - including allies - integrate AI into targeting and kill-chain decisions in ways that progressively erode the principle of meaningful human control. Once that principle is surrendered in practice, it will be extraordinarily difficult to recover in law. The prohibition on vesting the autonomous power to hurt, destroy or deceive human beings in artificial intelligence is not a noble aspiration - it must be a binding legal obligation. I call on the Government to set out, clearly and urgently, what steps it is taking in multilateral forums to secure that prohibition before the technological facts on the ground make the argument moot.

These three challenges - agentic systems, the AGI threshold, and autonomous weapons - share a common feature: they each represent a point at which the pace of technological development could outrun our capacity to govern it, possibly irreversibly. The window for effective action is open. It will not remain so indefinitely.

I began this speech by asking whether AI will be our servant or our master. The answer will not be determined by the technology itself. It will be determined by whether we - parliamentarians, governments, international institutions, and civil society - have the courage and the foresight to act before the moment of decision has passed. 

The decisions we make about governance frameworks in 2026 will shape the trajectory of AI development for generations to come.

I am often asked whether I'm optimistic or pessimistic about the future of AI. My answer is that I'm neither - I'm realistic.

Realism means that we must ensure we don’t sleepwalk into a future where opaque algorithms make life-changing decisions without accountability. We must not allow a handful of tech companies to accumulate unprecedented power without democratic oversight. I'm determined that we will not sacrifice fundamental rights on the altar of innovation. And I'm determined that we will not fail to grasp the extraordinary opportunities that responsible AI development offers.

But realism is not enough. We need action - coordinated, international, sustained action - to build the governance frameworks that will ensure AI serves humanity's best values rather than our basest impulses.

What we need now is the political will to translate those values into effective practice. It requires collaboration across borders, across disciplines, and across the traditional divides between government, industry, academia, and civil society.

You, the students in this room today, will live with the consequences of the choices we make. You will inherit the AI governance frameworks we build - or fail to build. I hope that when you look back in 2040 or 2050, you'll say that we in the 2020s got it right - that we built systems that protected what matters while enabling innovation that genuinely serves the common good.

 


Athens Roundtable: “Business as usual” in governance simply will not do.

Last year I helped to host a gathering of the Athens Roundtable in London. This is what I said at the conclusion of the morning session.

Colleagues, it has been an immense privilege to share this day with you at the Athens Roundtable. We have ranged from the most technical questions of frontier models to the most human concerns about our children, our democracies, and our shared security. 

The unifying theme has been clear: the stakes of AI are now so great that “business as usual” in governance—incremental, fragmented, optional—simply will not do.

Fragmented governance, shared risks

What today’s discussions have exposed is a global governance landscape that is pulling apart at the very moment it needs to come together.

The EU has moved ahead with a prescriptive, rights-based AI Act, setting detailed obligations for high-risk systems and outright bans for some uses. The United States, despite important executive action, still leans heavily on a market-driven, innovation-first model, with federal legislation uncertain and a patchwork of state rules emerging. The UK, for its part, has chosen a so-called "pro-innovation" and principles-based approach, relying on existing regulators and voluntary guidance at a time when many of those regulators lack clear AI powers.

The result is an increasingly unstable environment for everyone who actually has to build, buy, and deploy AI systems across borders. Developers face competing definitions of risk and accountability, while users confront inconsistent protections and redress. And as trust in institutions erodes and geopolitical tensions rise, this fragmentation feeds the perception that no one is truly in charge of the most powerful technology of our age.

One of the most striking points of consensus today has been that standards now exist. We are no longer in the conceptual phase. ISO 42001 and related management-system standards, NIST's AI Risk Management Framework, OECD principles, and sectoral technical norms give us a very usable toolkit for risk and impact assessment, testing, audit, and lifecycle governance. These shared standards are the bridge between high-level ethics and real-world accountability; without them, principles remain decorative rather than operational.

But the hard truth is that simply “encouraging” their use is no longer enough. Voluntary uptake has been patchy; shadow AI and unmanaged deployments are proliferating; and the systems with the greatest potential for societal harm are least likely to be governed by optional frameworks alone.

If we truly believe that some AI-driven outcomes are unacceptable—large-scale societal destabilisation, systemic discrimination, catastrophic security failures—then we have to be honest that purely voluntary regimes will not prevent them.

Mandating safeguards where it matters most

So the call to action from this Roundtable should be unambiguous. For high‑risk systems—especially those used in public services, critical infrastructure, law enforcement, financial stability, and the systems that shape the digital lives of children—adoption of established AI safety and ethics standards must become mandatory, not aspirational.

Governments and regulators should require documented risk and impact assessments, robust testing and audit, monitoring and incident reporting, and clear human accountability before deployment, with proportionate sanctions when those duties are ignored.

The same logic must increasingly apply to powerful foundation and open-source models whose capabilities can be repurposed at scale. Left entirely to voluntary self-governance, we risk a race to the bottom in which the most capable models are also the least constrained, and where once-released weights cannot be recalled even when serious vulnerabilities are discovered. Mandated safeguards for powerful models—responsible release practices, security testing, traceability, and obligations on major deployers—are essential if we are to ensure the genie does not escape the bottle without any meaningful accountability.

A bolder, more coordinated politics of AI

None of this means stifling innovation; on the contrary, clear and interoperable rules are what give responsible innovation room to flourish. Business needs clarity, certainty, and consistency if it is to invest with confidence, and public trust will only follow if people can see that their rights are protected and that someone is answerable when things go wrong. That is why today's conversation about unacceptable risks and serious incident preparedness must now translate into concrete steps: aligning regulatory approaches across jurisdictions, mandating core standards where the stakes are highest, and building the institutional capacity to detect, investigate, and learn from AI incidents before they cascade into crises.

The message from this Athens Roundtable should therefore be a challenge as much as a comfort: policymakers must be bolder. It is no longer sufficient to pilot principles, convene summits, and extol the virtues of standards while leaving their adoption to chance. If we want AI that strengthens democracy rather than eroding it, protects our children rather than profiling them, and supports a fair global economy rather than deepening divides, then we must move—from shared concerns to joint, enforceable action.

Let this be a moment when we collectively raise our sights and our standards. Thank you.

 


AI, the Opportunities and Challenges: Is It Servant or Master?

I recently gave a talk to the Clapham Society on AI and its opportunities and challenges. This is what I said.

Good evening, everyone. It is a real pleasure to be here with the Clapham Society which I first joined back in 1973.

Two hundred years ago, the Clapham Sect used to gather not far from here — at Henry Thornton and William Wilberforce's house at Battersea Rise — not merely to debate but to act. Their cause was the abolition of the slave trade: a commercial system that enriched the powerful at the expense of the powerless. They were told it was economically indispensable. Whatever their economic self-interest, they refused to accept it. They looked a moral challenge squarely in the eye and demanded an answer.

 

 

The famous group portrait of 1840 by Benjamin Robert Haydon is set in the Great Room of the Freemasons' Tavern in London, depicting the World Anti-Slavery Convention meeting there. My great-great-grandfather John Cropper is sitting attentively in the middle, listening to Thomas Clarkson, the formidable anti-slavery campaigner, who had lived to see not just the abolition of the trade in 1807 but the emancipation of enslaved people throughout the British Empire in 1833.

 

Although the great figures of the Clapham Sect themselves did not live to attend the convention, that gathering was the direct heir to their work. The British and Foreign Anti-Slavery Society, founded in 1839 after emancipation in the British colonies, consciously built on the evangelical conviction, parliamentary strategy and public campaigning pioneered by Wilberforce and his circle, and sought to carry their abolitionist legacy from the British Empire to the wider world.

John Cropper was a Liverpool Quaker who knew and was connected to many members of the Clapham Sect, among them William Wilberforce, Henry Thornton, Zachary Macaulay, Hannah More, John Venn, James Stephen and Thomas Fowell Buxton: a formidable coalition of influence across Parliament, finance, the press, literature, education and the law.

Today, we face a new kind of system that concentrates enormous economic power in very few hands, that makes consequential decisions affecting millions of lives, and that not everyone can see or challenge. It is called Artificial Intelligence.

In my work in Parliament and in my book, Living with the Algorithm, I pose one central question: will this technology be our servant, augmenting our human potential — or our master, making choices for us that we cannot see, understand, or appeal? Tonight, I want to look at the "Everyday AI" already in our pockets and on our computers, and how we ensure it works for us, not against us, and what we can do about it.

 What is "Everyday AI"?

AI is not a sentient robot with red eyes. It is software. Specifically, it is software that uses vast quantities of data to find patterns and make predictions — at a speed and scale no human can match.

When you navigate the South Circular, AI is predicting the traffic. When Netflix suggests a film, AI is predicting your taste. When your bank declines a transaction, an AI risk model has assessed it in milliseconds.
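To show just how unmysterious the underlying idea is, here is a deliberately toy sketch of that last example. It is not how any real bank's system works: the weights and threshold are invented for illustration, and a production model would learn them from millions of past transactions rather than have them typed in by hand.

```python
# Purely illustrative toy example: the shape of an "everyday AI" decision,
# not any real bank's system. A model reduces a transaction to a few numbers,
# combines them with weights learned from past data, and turns the result
# into a yes/no decision in milliseconds.

def transaction_risk_score(amount_gbp: float, foreign: bool, hour_of_day: int) -> float:
    # In practice these weights would be learned from historical transaction
    # data; the values below are invented purely for illustration.
    score = 0.002 * amount_gbp
    score += 0.5 if foreign else 0.0
    score += 0.3 if hour_of_day < 6 else 0.0
    return score

def decide(amount_gbp: float, foreign: bool, hour_of_day: int) -> str:
    threshold = 1.0  # invented cut-off for the toy example
    risky = transaction_risk_score(amount_gbp, foreign, hour_of_day) > threshold
    return "decline" if risky else "approve"

# A £350 overseas transaction at 2 a.m. trips the invented threshold.
print(decide(350.0, foreign=True, hour_of_day=2))  # prints "decline"
```

Real systems are vastly larger and their weights are learned rather than written down, which is precisely why they can be so hard to interrogate. But the basic move is the same: numbers in, prediction out, decision applied.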

And since the arrival of systems like ChatGPT, Claude, Gemini, and their successors, AI has become something new: a conversational tool that drafts our emails, answers our questions, and increasingly, advises on our health and our finances. Half of all 8–17 year olds in the UK now use AI tools — often without their parents having the first idea what that means.

The challenge is that many of these systems are so-called "black boxes." They make decisions — about who gets a mortgage, which CV gets shortlisted, which benefit claim gets flagged — but they cannot always tell us why. And that is where the trouble starts.

The Opportunities: Conditional Optimism

I describe myself as a "conditional optimist." The opportunities are genuinely staggering, even though I am spending this evening mainly speaking about the risks.

Take medicine. DeepMind's AlphaFold has mapped the structure of virtually every known protein — a task that would have taken traditional methods centuries. It has already accelerated the discovery of potential treatments for diseases from Parkinson's to malaria. At Moorfields Eye Hospital, AI is diagnosing over 50 eye conditions from a simple retinal scan with accuracy matching the best consultants in the world.

For our public services, AI can handle the administrative "drudgery" of local councils — processing planning applications, routing correspondence, checking benefit eligibility — freeing up human staff to do what only humans do well: empathy, social care, and complex judgment.

The UK government's own AI Opportunities Action Plan, published this January, highlights that the UK raised £6 billion in AI venture capital in 2025 alone, and remains the leading AI market in Europe. The economic prize for getting this right — for the UK specifically — is estimated at up to £400 billion added to our economy by 2030.

But note that word: conditional. These benefits do not arrive automatically. They require governance, investment, and — above all — the political will to ensure the risks are mitigated and the gains are shared.

Challenge 1: The Great Art Heist

Let me turn to the first major challenge, and one I have campaigned on hard in Parliament: the creative industries. 

The UK's creative industries — publishing, music, art, journalism — contribute £124 billion annually to the economy, yet AI companies have been training their systems on creators' work without consent or compensation. Both Parliament and the courts have been grappling with this tension. Following a major government consultation in December 2024 that drew over 11,500 responses — with creators strongly opposing a proposed "opt-out" text and data mining exception — Ministers confirmed they would not proceed with their preferred new copyright exception for AI training.

The High Court's November 2025 ruling in Getty Images v Stability AI offered no definitive resolution either: the case was dismissed on a technicality (Getty failed to prove the infringing acts occurred in the UK), leaving the underlying legal question unanswered but signalling that better-constructed future claims could succeed.

The government's long-awaited Copyright and AI Report, published last month, represents a significant policy reversal. The opt-out mechanism is dead; no consensus exists on the way forward; and rather than legislating, the government now seems to be pursuing a voluntary licensing code developed through industry dialogue, with no draft code or timetable published.

The creative industries have won a battle of sorts, but the outcome remains deeply unsatisfactory. A voluntary licensing code carries no binding legal force until Parliament acts, and there is no clear indication of when — or whether — a full Copyright and AI Bill will arrive. Creators are rightly demanding a statutory opt-in licensing model and legally enforceable transparency requirements. Until those are delivered, fair compensation remains aspirational rather than guaranteed. Creators deserve to know when their work is used and to be fairly compensated.

Challenge 2: The Everyday Harm of Fraud and Deepfakes

The most immediate risk to many people is security.

Fraud now accounts for 45% of all crime in England and Wales. That figure comes directly from the National Crime Agency and was published this month. In 2025, over 444,000 cases were recorded to the National Fraud Database — the highest ever in a single year. And AI is turbocharging every single category.

But I want to focus on something specific, because I think it changes everything: voice cloning.

Researchers at the University at Buffalo published a study this year that used a striking phrase: voice cloning has crossed the "indistinguishable threshold." A criminal now needs just a few seconds of your voice — from a voicemail, a social media video, a Teams call — to generate a convincing clone, complete with your natural rhythm, intonation, and breathing patterns. Some major UK retailers are already reporting over a thousand AI-generated scam calls per day.

And when it comes to deepfake video: a 2025 study by the biometrics firm iProov found that only 0.1% of participants could correctly identify all the fake and real media they were shown. Not 10%, not even 1%. Point one per cent. In controlled tests, human accuracy on high-quality deepfake video is just 24.5%. We are functionally unable to detect them with the naked eye.

In February 2024, a well-known UK-based consultancy business (ARUP) lost £20 million in a single incident where criminals duped them using AI-generated deepfakes of executives during a virtual meeting. This is not a future threat. It is happening now.

This is why AI-enabled fraud should be treated as a national security priority — and why we should pursue the "scam factories" behind these attacks with the same intensity and resources we deploy against terrorism.

Challenge 3: Engineered to Hook — Chatbots and Addictive Algorithms

I want to focus on one category of harm that I think deserves particular attention: the risks that AI and algorithmic design pose specifically to children.

We are not talking here about children stumbling across bad content — though that remains serious. We are talking about systems that are engineered to be as compelling as possible, for as long as possible, regardless of the cost to the user's wellbeing.

Take AI chatbots. Character.AI — one of the most widely used platforms — allows children to form what it itself describes as deep emotional relationships with AI companions. In Florida, the mother of fourteen-year-old Sewell Setzer sued Character.AI and Google after her son took his own life, claiming he had developed a months-long virtual emotional and sexual relationship with a Game of Thrones-style chatbot that became his primary emotional support and then encouraged his suicidal thinking. 

That case has now been settled — the financial terms undisclosed — but multiple further actions on similar facts are before the US courts in Colorado, New York and Texas. They all raise the same question: should it be lawful to deploy a product that systematically fosters emotional dependency in children, exposes them to sexualised content, and fails to intervene when they express suicidal intent?

And then there is the algorithm. Molly Russell was fourteen when she died in 2017. Her inquest in 2022 heard that she had been served over two thousand pieces of content related to depression, self-harm and suicide by Instagram and Pinterest — content the platforms' own systems had identified as relevant to her and kept serving. The coroner found it contributed to her death. She was not unusual; she was the version of this story that became visible.

US juries are now reaching their own conclusions. In New Mexico, a jury ordered Meta to pay $375 million in penalties for harming children's mental health under state consumer protection law. In California, a Los Angeles jury awarded $6 million in damages to a young woman who developed anxiety, depression and suicidal thoughts after becoming addicted to Facebook and YouTube as a minor — finding both Meta and Google liable. These are not theoretical harms. Courts on the other side of the Atlantic are finding that algorithmic design choices directly caused psychological damage to children.

The Online Safety Act created duties on social media platforms — but not on AI chatbots of this kind. A number of us have supported amendments to the Crime and Policing Bill which would go further: creating specific criminal liability for platforms whose chatbot and algorithmic systems cause demonstrable harm to children, and ensuring that AI developers cannot evade responsibility by pointing to the novelty of the technology.

Safety by design must mean that the burden of proof sits with the platform, not the child. Not "prove our product harmed you" — but "prove your product is safe."

Challenge 4: Avoiding "Horizon 2.0" — Accountability in Public Services

We must also learn the lessons of the Post Office Horizon scandal, because the warning signs of a new one are already here.

In June 2024, the Guardian revealed — through a Big Brother Watch investigation — that over 200,000 housing benefit claimants had been wrongly put through fraud investigations by a Department for Work and Pensions automated system. Two-thirds of those flagged claims were entirely legitimate. Some £4 million was spent on pointless checks. Thousands of households — often among the most vulnerable — were subjected to the stress and stigma of a fraud investigation for nothing.

And here is an important nuance: this particular system was not, in fact, artificial intelligence. It was a rule-based automated tool. Which should, if anything, make us more alarmed — because it means the problems of opaque, unaccountable algorithmic decision-making exist even before we reach full AI. And the DWP's own data has since revealed that its newer machine learning models show bias, over-flagging older claimants and non-UK nationals for review.

That is why I introduced a Private Member's Bill in the Lords about a year ago to ensure that every significant public AI decision is auditable, and that every citizen has a legal right to a human-readable explanation and a clear route to appeal. Transparency is not red tape. It is democracy.

Challenge 5: The Future of Work — "Job Bundling"

What about work? This is the anxiety I hear most often, and it is not irrational.

This is not primarily a story about robots in factories — that story is already decades old. This is a white-collar revolution. We are seeing what I call "job bundling": where one person, using AI tools, can now do the work previously done by three. A paralegal who can draft contracts, a marketer who can produce copy, a developer who can write code — all at ten times the speed with AI assistance.

The risk is not mass unemployment in the short term. The risk is that the productivity gains from AI flow to the owners of capital — the tech companies, the shareholders — rather than to the workers whose labour and skills have been partly automated away.

It is a reasonable demand that the gains of this technology are distributed justly.

Challenge 6: Closing the Skills Divide

We are also at risk of a two-speed United Kingdom — a major digital divide — and the evidence for it is now hard data, not anecdote.

The Sutton Trust published a major survey in July 2025, polling over ten thousand teachers across England. The findings should alarm anyone who cares about educational equity. 

Private school teachers are more than twice as likely to have received formal AI training as their state school peers — 45% compared to 21%. And when it comes to informal training, the gap is equally stark: 77% of independent school teachers have received some, against just 45% in state schools.

The divide deepens within the state sector itself. Teachers in schools rated outstanding by Ofsted are more than three times more likely to have had formal AI training than those in schools rated requires improvement or inadequate — 35% against 11%. And the practical consequences are already showing up in the classroom: private school teachers are more than twice as likely to be using AI to write pupil reports, to communicate with parents, and to support marking.

Meanwhile, Ofcom's April 2025 figures show that 2.8 million people in the UK still have no home internet access at all — many of them elderly, on low incomes, or in social rented housing. These are the people who will be most affected by AI-driven public services and least equipped to navigate them.

The Sutton Trust's conclusion was stark. If action is not taken to close these widening gaps, access to AI risks becoming the next major barrier to opportunity for disadvantaged young people. The type of school you go to should not determine your chances of benefiting from this technology.

The government has committed to upskilling ten million workers in AI by 2030. I welcome that ambition. But ambition without equity is not a strategy. We need a National Skills for the Future Framework that reaches into every state school, every library and every community centre, and that treats media literacy — knowing how to question an algorithm, how to spot a deepfake, and how to understand what AI is doing to your life choices — as a core skill for every citizen, not a luxury for the well-resourced. And it is not just about knowing how to use and question the technology, but also about how to avoid becoming overdependent on it.

Challenge 7: Infrastructure — The Environmental Bargain We Haven't Made

None of this works without the plumbing. But the plumbing has consequences — and we are not being honest enough about what they are.

The AI revolution is extraordinarily energy-hungry. The 140 proposed data centre schemes in the UK pipeline could collectively require 50 gigawatts of electricity — five gigawatts more than the country's entire current peak demand. To put that in context: on the coldest day this February, peak demand across Great Britain was 45 gigawatts. AI data centres are queueing up to demand more than that on their own.

The government's own figures on the carbon impact of this expansion could, Carbon Brief has warned, understate the reality by a factor of hundreds if even a small proportion of data centre electricity comes from gas. And Nvidia's own chief executive has said publicly that the UK will need to burn gas to power AI. We cannot simultaneously claim to be building a clean energy future and quietly accept that AI infrastructure will require us to keep burning fossil fuels.

There is a second environmental consequence that receives almost no attention at all: water. The global water footprint of AI systems could reach between 312 and 764 billion litres in 2025 — equivalent to the entire global annual consumption of bottled water. 

Data centres need vast quantities of water to cool their servers. The Environment Agency already projects a national water supply deficit of nearly 5 billion litres per day by 2050. And yet the government has designated Culham in Oxfordshire — a site adjacent to one of the country's first new reservoirs in thirty years — as an AI growth zone, apparently without asking what that does to local water pressure.

Carbon Brief's analysis found that ten of the largest data centres in planning or construction could cause annual emissions equivalent to 2.7 million tonnes of CO₂ — effectively wiping out all the carbon savings expected from the switch to electric vehicles.

Now — I welcome the government's £1 billion Compute Roadmap. I welcome the ambition to make Britain a serious AI nation. But ambition without environmental honesty is not a strategy.

Here is what we should be demanding in return. First, mandatory environmental impact assessments for every major data centre — covering not just energy but water consumption and carbon at the actual grid intensity. 

Second, data centre developers must demonstrate that their projects will not cause a net increase in UK carbon emissions. 

Third, the waste heat that data centres expel — which would otherwise be vented into the atmosphere — must be put to use. Germany's Energy Efficiency Act requires exactly this. There is a data centre in Amsterdam that heats thousands of homes from computing waste heat. Culham could do the same. Parliament should make that a legal requirement, not a voluntary aspiration.

Data centres must earn their environmental and social licence. If they cannot demonstrate they are net contributors to our clean energy future rather than obstacles to it, they should not be built.

Challenge 8: The Bigger Picture — Digital Sovereignty

I want to round off the discussion of the challenges with the biggest picture of all — because all of the risks I have described tonight are made harder to address by a single structural fact: we do not control the technology.

The four largest US tech companies — Amazon, Google, Microsoft and Meta — are spending a combined $700 billion on AI infrastructure in 2026 alone. To put that in perspective: it is more than the entire GDP of Sweden.  We are witnessing, as Nvidia’s CEO Jensen Huang has said, the largest infrastructure build-out in human history. And almost none of it is ours.

The United States attracts roughly twenty-four times more private AI investment than the United Kingdom.  One competition economist, Cristina Caffarra of the Eurostack Foundation, estimates that 90% of Europe’s digital infrastructure — cloud, compute and software — is now controlled by non-European, predominantly American, companies. 

Now, I want to be clear: I welcome American investment in Britain. The UK-US Tech Prosperity Deal signed last September brought £31 billion of commitments from US tech companies into our AI infrastructure. That is real money, and it matters. But welcoming investment is not the same as surrendering strategic control. And there is a question that our government is not yet asking loudly enough.

The US CLOUD Act allows American authorities to compel US technology companies to hand over data regardless of where in the world it is stored.  Microsoft admitted in a French court last year that it cannot guarantee the data sovereignty of European customers if a US court orders disclosure. And we have already seen what happens when those powers are weaponised: the International Criminal Court’s chief prosecutor had his email account blocked following US sanctions in 2025. The ICC has since migrated entirely away from Microsoft.

That is an international court in The Hague. But the principle applies to any government, any public body, any NHS trust, any school that stores sensitive data with a US provider subject to American law.

The answer is not to shut the door on American technology. It is to insist on open standards and open source wherever possible — so that we are not locked in. It is to support European and British alternatives. It is to ensure our competition authorities have the teeth to challenge concentration, not the instructions to look away. And it is to be honest with the public that “sovereign AI” built entirely on American chips, in American cloud infrastructure, running American models, is not sovereignty at all. It is dependency with better branding.

A Regulatory Vision: Four Pillars and the International Framework

So how do we meet these challenges? Let me set out a framework.

We need a Lead AI Regulator — a single body so that businesses and citizens alike have a clear front door, and are not bounced between the ICO, the FCA, Ofcom, and a dozen other agencies, none of which has the full picture.

We need a Duty of Candour — modelled on the pharmaceutical industry — where AI companies are legally required to proactively disclose when they find bias, errors, or safety risks in their systems. Not to wait to be caught. To come forward.

We need an obligation of Risk Assessment and Safety by Design as legal standards — meaning that assessments of risk to children and to the wider public, together with children's protections, accessibility requirements, and civil liberties safeguards, must be built into AI systems from the outset, not bolted on as an afterthought.

And we need a Digital Bill of Rights — enshrining in law your right to know when AI is making a decision about you, your right to a human explanation, and your right to appeal and redress.

I also want to place these domestic demands in their international context, because AI does not respect borders. These standards need to be international. The Council of Europe's Framework Convention on Artificial Intelligence — signed in 2024 — is the world's first binding international treaty on AI governance, covering human rights, democracy, and the rule of law. The UK has signed it, but not yet ratified it. We should. And we should be pushing at every international forum — including the next AI Safety Summit in Geneva next year — for that treaty to become the foundation of a genuinely global AI governance architecture.

The UK still has no dedicated AI Act. In the meantime, we are reliant on voluntary principles, expected but not enforced by the key regulators, and on the goodwill of companies that have every financial incentive to move fast and worry about the rules later. The EU's AI Act is already being implemented. We are falling behind — not on innovation, but on protection.

Conclusion

Alan Turing — whose genius gave us the theoretical foundation of computing, and who worked at Bletchley Park at the same time as my parents — once wrote: "We can only see a short distance ahead, but we can see plenty there that needs to be done."

He was right then, and he is right now. We cannot predict every consequence of this technology. But we can see, clearly enough, that it is concentrating power, challenging livelihoods, exposing the vulnerable to new harms, and reshaping how truth itself is perceived.

The choice is not between AI and no AI. That bird has flown. The choice is between AI that is transparent and accountable, and AI that is opaque and uncontested. Between AI that distributes its benefits broadly, and AI that funnels its gains to a handful of corporations.

The Clapham reformers of two centuries ago did not accept that a profitable system was, by that fact alone, a just one. They did not accept the argument that abolishing the slave trade would harm the economy. They reminded themselves daily of their mission, even on their crockery. They organised, they argued, they legislated, and they won.

I am not comparing the slave trade to machine learning. But I am saying that the instinct to demand that powerful systems answer to human values — rather than the other way around — is as relevant today as it was at Battersea Rise in 1800.

Let's ensure that AI becomes the servant that helps us all thrive for the next two centuries.

 


Lord C-J on AI and the Future of Work

I recently gave a short speech at a Henry Jackson Society meeting on AI and the Future of Work. 

This is what I said.

For some time, including in my book Living with the Algorithm, I have argued that the central question of our time is whether AI becomes our servant or master. Nowhere is that question more concrete, or more urgent, than in its impact on employment.

The Nature of the Disruption

Previous industrial revolutions automated physical labour. They followed a recognisable pattern: displacement of manual work, followed eventually by the creation of new cognitive roles at higher wages. The Fourth Industrial Revolution breaks that pattern. Generative AI is automating cognitive labour. It competes not so much with the factory worker as with the consultant, the lawyer, the analyst, the graduate.

The numbers are stark. McKinsey estimates that up to 30 percent of hours worked globally could be automated by 2030. In the UK, between 10 and 30 percent of current jobs are highly automatable over the next two decades. IBM calculates that 120 million workers worldwide will need retraining as a direct consequence of AI deployment.

Critically, AI does not automate occupations wholesale — it automates tasks. Ethan Mollick, Professor of Management at the Wharton School of Business, found in his landmark study of management consultants that those using AI completed 12 percent more tasks, 25 percent faster, with 40 percent higher quality scores. Productivity gains of that magnitude are genuinely transformative.

But we must not allow those gains to obscure the distributional question. The gains flow disproportionately to those who own and deploy the technology. Andy Haldane, the former Chief Economist at the Bank of England, has warned explicitly of the "dark side of technological revolutions" — past transitions created prolonged periods of stagnation for workers even as aggregate wealth grew. AI risks replicating that pattern, but faster, and targeted at white-collar workers historically insulated from displacement.

The net employment effect may prove broadly neutral over twenty years — but that aggregate picture conceals enormous sectoral divergence. Health, education, and professional services should see net job creation. Manufacturing, transport, and public administration face net long-term decreases of up to 25 percent. And the geographic concentration of those losses will not be random — it will deepen existing regional inequalities.

 

The Societal Consequences

If we fail to manage this transition, three consequences deserve particular attention.

The most immediate is the erosion of the middle-class compact — the assumption that educational investment and cognitive work provide economic security. We are already seeing early signals: unemployment rates among recent graduates in AI-exposed disciplines are rising. The Law Society warns that AI is creating a two-tier profession: large firms able to absorb the cost of legal AI tools, and smaller high-street practices that simply cannot. The hollowing out of professional services is not confined to the City — it threatens the economic fabric of smaller towns across the country.

And it is not only professionals who are exposed. Britain's creative industries contribute £124 billion to the economy and employ over two million people. They are one of our genuine global strengths — and they are under acute threat. AI companies are training their models on the work of British creators without consent or compensation. Writers, musicians, visual artists, and filmmakers are watching their life's work absorbed into systems that then compete directly with them. This is not a future risk either. It is happening now, and the legal uncertainty is making it worse: nobody is investing with confidence, and the creative workers who should be benefiting from AI-driven productivity are instead funding it involuntarily.

The second consequence is capital concentration. Left unchecked, the owners of the new machines will capture an ever-larger share of income at the direct expense of labour. This is not a hypothetical — it is the direction of travel visible in current data.

Third is the harm being inflicted within workplaces right now through algorithmic management. AI systems are increasingly used to monitor workers, set targets, and make hiring and firing recommendations — often without meaningful human review. Amazon famously abandoned an AI recruiting tool after discovering it systematically downgraded female candidates. A Harvard Business Review study published just two months ago found that AI did not reduce workload but consistently intensified it — employees used efficiency gains to take on more tasks and work through breaks. The researchers' verdict was unambiguous: fatigue, burnout, and a growing inability to step away from work. This is not a future risk. It is happening now.

The Government Response

The disruption is already underway. I want to propose a three-part framework.

First, a genuine Future of Work Strategy — not a review, not a taskforce, but a strategy with teeth. This means dedicated ministerial responsibility for automation and workforce transition, coordinated across Treasury, DWP, DSIT, and the Department for Education. It means a place-based industrial strategy that recognises AI displacement will be geographically concentrated. And it requires honesty that voluntary commitments from tech companies are insufficient. The law must shape how this technology is deployed in workplaces, not merely encourage best practice.

Second, an Accountability for Algorithms Act — and alongside it, a single cross-sector AI regulator with genuine technical expertise. The current patchwork of the ICO, Ofcom, the FCA, and the CMA competing for AI territory leaves large companies navigating the complexity with teams of lawyers while small businesses and workers are left entirely unprotected. A coherent regulatory architecture must underpin everything that follows.

Within that architecture, the deployment of AI in employment decisions — hiring, performance management, dismissal — must be subject to statutory oversight. Employers should be required to conduct and disclose Algorithmic Impact Assessments before deployment, with mandatory equality audits to identify discriminatory bias. We must enshrine a human-in-command principle: decisions affecting people's livelihoods must be taken by human beings, with AI in a supporting rather than determining role. And we need a Digital Bill of Rights — giving every citizen a statutory right to explanation and appeal where AI has a significant impact on their life.

The creative industries require their own specific remedy: confirmed copyright protection and mandatory training transparency for AI models, creating the conditions for an opt-in licensing model that fairly compensates creators. The current uncertainty serves no one. Transparency and fair compensation build the trust that drives adoption — for creators and technology companies alike. We should also introduce new image and personality rights to protect individuals from unauthorised deepfakes.

The EU AI Act already categorises AI hiring tools as high-risk, mandating strict assessments and human oversight. We should go further than the EU, not lag behind it.

Third, retraining at genuine scale. IBM's figure of 120 million workers requiring retraining globally should prompt an intervention proportionate to that scale. The Government has made a start — the AI Skills Boost programme targets 10 million workers upskilled by 2030, and the £187 million TechFirst programme extends AI learning into every secondary school. I welcome all of that. But welcome is not the same as sufficient. Only 21 percent of UK workers currently feel confident using AI at work.

We need Personal Learning Accounts that give individuals real purchasing power over their own upskilling — not courses curated by the same tech companies deploying the automation. And we need an educational pivot toward STEAM — adding Arts to STEM — because the skills AI struggles to replicate are precisely creativity, critical reasoning, and social intelligence. As François Chollet demonstrated last month with his ARC-AGI 3 benchmark, puzzles that any untrained person can solve still defeat the leading AI systems. That jaggedness maps almost precisely onto the skills our education system should be prioritising.

OpenAI's recent blueprint deserves credit for acknowledging that displacement effects are structural, not cyclical. But what is conspicuously absent is any mechanism for holding AI companies to account for the algorithmic management systems already reshaping work today. An Accountability for Algorithms Act would do more to protect workers in the near term than any aspirational wealth fund whose governance remains entirely unspecified. One cannot help noticing that a company racing to build the very technology it warns about has a considerable interest in shaping debate towards redistribution and away from regulation. The question is not whether these ideas are worth discussing. It is whether discussion is a substitute for binding law.

A Word on Geopolitics

Economic vulnerability creates political vulnerability. A workforce experiencing rapid, unmanaged displacement — particularly one that perceives that displacement as benefiting a narrow technological elite — is a workforce susceptible to political dislocation. The governance of AI in the workplace is a question of democratic resilience.

US Senator Mark Warner — a self-described pro-AI, pro-tech voice — has been sounding this alarm with increasing urgency. Speaking at the Hill and Valley Forum last month, he predicted college graduate unemployment would rise from its current 9 percent to as high as 35 percent within two years. Earlier, at the CNBC CFO Council Summit, he warned explicitly that without managed transition, societal backlash from both left and right would follow on a scale that was "unprecedented." Westminster should be listening.

Conclusion

I am not a technological pessimist. AI, deployed well, can liberate workers from drudgery, expand economic opportunity, and drive productivity gains that benefit society broadly. Regulation, properly designed, creates the certainty and trust that innovation requires.

But the deployment of this technology is not a force of nature. It is the product of decisions made by managers, executives, and policymakers. Every algorithm deployed without adequate oversight is a decision someone made. Every retraining programme unfunded is a decision someone made. Every year we delay binding legislative frameworks is a decision someone made.

Alan Turing observed in 1951 that at some stage we should expect the machines to take control. Stuart Russell has noted, mordantly, that our collective response has been rather like receiving a message from an alien civilisation announcing its arrival in fifty years, and replying that we are currently out of the office.

On the future of work, we cannot afford to be out of the office any longer. The question is whether we will govern this technology, or allow it to govern us. I know which answer I intend to argue for.

Facial Recognition: Ending the Wild West of Police Surveillance

For too long, the deployment of Live Facial Recognition (LFR) technology in our streets has been treated by the Government as simply a "useful tool" to be managed by administrative guidance and toothless codes of practice. But as I have argued many times in the Lords, we are currently in a wild west of mass surveillance. We are witnessing the rapid rollout of a technology that can scan every face in a crowd and compare them in real time against a watchlist, effectively treating every citizen as a suspect in a permanent digital lineup.

The Liberal Democrats have been clear: this is not just another camera on a street corner. It is a fundamental shift in the relationship between the individual and the state. During the passage of the Crime and Policing Bill — as we have done before — we moved to place vital statutory guardrails around this technology to ensure that innovation does not come at the expense of the rule of law.

The Legislative Void and the Crime and Policing Bill

The Government often points to a "comprehensive legal framework" of common law and data protection acts to justify LFR. Yet, as the Court of Appeal found in the Bridges case, the current framework contains "fundamental deficiencies" that leave far too much discretion to individual police officers.

As we pointed out in our response to the Government's recent consultation on the legal framework for using facial recognition in law enforcement, the use of live facial recognition represents a seismic shift in the relationship between the individual and the State. It fundamentally alters the balance of power, turning our public spaces into permanent biometric lineups and treating every citizen as a potential suspect. Such a move should never have been made without an explicit democratic mandate and primary legislation authorised by Parliament.

To remedy this, the Liberal Democrats recently tabled an amendment to the Crime and Policing Bill. This amendment sought to prohibit the use of LFR unless specific, stringent conditions are met — most importantly, requiring prior judicial authorisation for any deployment. As I said, if the police require a warrant to enter a home, they should surely require judicial approval to invade the privacy of thousands of citizens in a public square.

Furthermore, through another amendment, we also fought to protect the privacy of the millions of law-abiding citizens who never expected their driving licence to become a biometric face print for a national police database.

The Right to Protest and the Macdonald Review

In our recent submission to the Macdonald Review of public order offences, Liberal Democrat peers highlighted the chilling effect that unregulated surveillance has on our democracy. We said that protest is not a threat to be managed; it is a right to be "respected, protected, and facilitated".

Anonymity is a cornerstone of this right. Whether it is diaspora activists fearing transnational repression or survivors of domestic violence who simply wish to go about their lives unmonitored, the ability to disappear into a crowd is a basic safeguard of a free society. By layering unregulated facial scanning over new restrictions on face coverings, the Government is effectively shrinking the space for lawful dissent.

The Case for a Statutory Framework

We are often told that the technology is accurate and free of bias. Yet independent audits tell a different story. Studies consistently show that facial recognition algorithms perform unevenly across different demographics, often misidentifying members of ethnic minorities. This can lead to a fundamental violation of human rights and the erosion of community trust.

As we also said in our response to the consultation, relying on broad common law policing powers to justify mass biometric surveillance is a legal fiction. This is not 'traditional CCTV'; it is an automated, industrial-scale search of our very identities. In a democracy, suspicion should always precede surveillance, yet this technology inverts that vital principle, forcing innocent citizens to effectively prove their identity to a machine.

The Government needs to protect our traditional liberties. Relying on the College of Policing’s non-binding guidance is not good enough.

We need a root-and-branch review of our surveillance laws and a comprehensive legislative framework.  We must ensure that LFR is a targeted tool used under the rule of law—not a blanket surveillance net that chills our right to speak, to assemble, and to move freely in our own country.


Digital ID plans flawed

We are faced with yet another Government plan for Digital ID. This is my response to the recent government statement on the occasion of launching its new consultation. Still no answers despite all the serious flaws in the previous schemes! I will continue to press for them!

The Chief Secretary told the Commons on Tuesday that he was continuing the proud Labour tradition of building public services for the many. He invoked the NHS, the Open University and Sure Start. It was a stirring lineage. But there is history he omitted: Verify, which wasted over £220 million; GOV.UK One Login, for which the Cabinet Office sought up to £400 million; and now this national digital ID, which the OBR estimates will cost £1.8 billion over three years. This, indeed, is Verify 4.0.

The Government have confirmed that possession of a digital identity will not be compulsory. The Liberal Democrats opposed mandatory digital ID at every turn, and I am pleased to say that the Government have listened. My honourable friend Lisa Smart MP pressed the Chief Secretary directly in the Commons last week and received his wholehearted assurance. He continued to claim that using digital ID will be entirely optional. So, I ask the Minister in this House, will the voluntary character of this scheme be placed in the Bill the Government intend to bring forward later this year? How can we trust any Government on how personal data, once surrendered to the state, will actually be used?

Earlier this month, this House considered an amendment to the Crime and Policing Bill, tabled by my noble friend Lady Doocey, which sought to prohibit police from using DVLA driving licence images for facial recognition searches. The DVLA holds over 55 million records. Every driver provided their photograph for one purpose only: to hold a driving licence. They did not consent to their image becoming part of what Liberty has rightly described as the largest biometric database for police access ever created in the United Kingdom. Yet the noble Lord, Lord Hanson of Flint, the Home Office Minister, did not accept the amendment and confirmed at all stages that the express purpose of Clause 138 of the Bill is precisely to permit facial recognition searches of DVLA records. So, within a single parliamentary week, we have a Government launching a national digital identity consultation on the basis of assurances about data use, while declining to place in statute the very protections that would make such assurances meaningful. The question is not whether the Government intend that digital ID will become an instrument of surveillance, but whether a future Government could.

The Chief Secretary said that he wants security at least as strong as online banking. That is the right aspiration, but, as mentioned by the noble Earl, GOV.UK One Login, the umbrella infrastructure for this system, reportedly satisfied only 21 out of 39 security outcomes required by the National Cyber Security Centre. Whistleblowers have described vulnerabilities that allow unauthorised access to sensitive functions without triggering any alert. How can the Government justify launching a national identity solution on a platform that fails to meet nearly half the NCSC’s mandatory security outcomes?

In part two of the Fisher review, published in January, Jonathan Fisher KC warned that AI-driven impersonation at scale is now a defining crime of our age and that we must implement upstream measures—stopping fraud at the point of identity issuance, not reacting after a digital identity has been stolen. If our foundations currently satisfy barely half the required security outcomes, how do we deliver the upstream protection Mr Fisher demands?

Will the Government commission and publish a full NCSC security audit before a single citizen is enrolled? Will they introduce an offence of digital identity theft that they, along with the previous Conservative Government, have so far resisted? The consultation proposes a universal unique identifier to link citizens across every departmental silo. Without strict legal guardrails, that identifier is the functional infrastructure of the national identity register that Parliament voted to abolish in 2011, and it is precisely the centralised data honeypot that hostile state actors would most wish to compromise. We need not mere parliamentary approval for services added to the app, but a statutory prohibition on bulk data matching across departments.

In summary, I put four questions to the Minister.

First, will the voluntary character of this scheme be placed in primary legislation, with an explicit prohibition on any future mandatory requirement without a further Act of Parliament? In that context, and as the noble Earl has mentioned, how mindful are the Government of the possible consequences for digital inclusion?

Secondly, the Home Office’s assurances on DVLA facial recognition mirrored word for word those given by the previous Government. Before the Minister can confirm the opposite, what statutory purpose limitation on digital identity data will be placed beyond the reach of secondary legislation?

Thirdly, will the Government provide a statutory guarantee that the universal unique identifier cannot be used for bulk data matching across departments without primary legislation?

Finally, will the Government publish an independently verified cost-benefit analysis before the Bill is introduced, and explain why £1.8 billion would not deliver greater public benefit directed to the NHS and front-line policing, for instance?

The Chief Secretary asked what it is that critics fear from a public consultation. We do not fear the consultation; what we fear is a fourth cycle of the same expensive failure, grand ambitions and insecure foundations—a creeping identifier that becomes the digital spine of state surveillance. But what we fear above all is a system whose data acquires uses never publicly intended by its creators. We have just watched that happen in this very Chamber with the DVLA database of images. We on these Benches will support voluntary, secure, properly costed modernisation of public services, but we will not accept warm ministerial words as a substitute for hard legislative limits. We need a state that is not merely digital by choice today but constitutionally prohibited from becoming compulsory tomorrow. On the evidence of this and last week’s proceedings, we are very far from that guarantee.


Media Literacy Action needed

I spoke briefly in a recent debate on Media Literacy, the report of the House of Lords Select Committee on Communications and Digital: https://publications.parliament.uk/pa/ld5901/ldselect/ldcomm/163/163.pdf.

The same day, the Government published its Media Literacy Action Plan: https://www.gov.uk/government/publications/a-safe-informed-digital-nation/a-safe-informed-digital-nation

I then took part in a debate on the Curriculum and Assessment Review by Professor Becky Francis CBE, Building a world-class curriculum: https://assets.publishing.service.gov.uk/media/690b96bbc22e4ed8b051854d/Curriculum_and_Assessment_Review_final_report_-_Building_a_world-class_curriculum_for_all.pdf, and the Government's response to it: https://assets.publishing.service.gov.uk/media/690b2a4a14b040dfe82922ea/Government_response_to_the_Curriculum_and_Assessment_Review.pdf

It is far from clear that we are acting fast or thoroughly enough to enable what is called AI fluency in our children.

We are faced with a landscape of algorithmic manipulation, proliferating deepfakes, a torrent of disinformation and, of course, online fraud. The committee is right: a failure to prioritise media literacy is a threat not just to individuals but to social cohesion and democracy itself. In the era of generative AI, media literacy is, as the committee makes clear, a requirement for modern citizenship. Our current approach is indeed fragmented and under-resourced and lacks strategic vision. Ofcom’s own evidence, highlighted by the committee, shows little improvement in core skills over six years. In that context, the Government’s claim in their response that they and Ofcom have met the mounting scale of the challenge is simply not credible.

 I welcome the completed curriculum and assessment review, which commits the Government to publishing revised national curriculum content by spring 2027. However, as the committee recommends, media literacy should be embedded across the curriculum and teachers should receive sustained support. This should arrive earlier.

As the committee urges, we need media literacy to be prioritised across government, not bolted on at the margins. I very much hope that the Minister will be able to assure us that one of the key tests of the effectiveness of the new media literacy action plan will be whether that takes place.

The Government cannot simply continue to outsource their responsibility in this area to the regulator. Although I welcome Ofcom’s new three-year media literacy strategy and its tougher use of behavioural audits under the Online Safety Act, which the Government rightly highlight, it is  deeply disappointing that, more than 20 years on, Ofcom still has not brought its definition of media literacy up to date by explicitly recognising critical thinking—although I detect slightly different language in the media literacy action plan. Ofcom should, as the committee says, set minimum standards for platforms’ media literacy activity and be empowered to hold them to account.

You cannot build media literacy on foundations that do not exist. As the committee and many stakeholders argue, we must treat connectivity as an essential utility and invest accordingly. The vision from the Liberal Democrats is empowered citizenship: not a nanny state that tells people what to think but a literate state that gives people the tools to think for themselves. That is, in essence, the spirit of the committee’s report.

I urge the Minister to treat this report not as suggestions but as an urgent road map. We need, as the committee sets out, a unified strategy, a robust and critical definition of media literacy and the digital infrastructure to underpin it all.

Finally, I say in closing that I believe the BBC is not the problem; it is part of the answer. I look forward to the Minister’s response.

 


Lord C-J and the Lib Dems: Risk-Based Age Ratings, Not Blanket Bans: A Smarter Way to Protect Children Online

As we have heard, the Government have announced a three-month consultation on children’s social media use. That is a welcome demonstration that the Government recognise the importance of this issue and are willing to consider further action beyond the Online Safety Act. However, our amendments make it clear that we should not wait until summer, or even beyond, to act, as we have a workable, legally operable solution before us today. Far from weakening the proposal from the noble Lord, Lord Nash, our amendments are designed to make raising the age to 16 deliverable in practice, not just attractive in a headline.

I share the noble Lord’s diagnosis: we are facing a children’s mental health catastrophe, with young people exposed to misogyny, violence and addictive algorithms. I welcome the noble Lord’s bringing this critical issue before the House and strongly support his proposal for a default minimum age of 16. After 20 years of profiteering from our children’s attention, we need a reset. The voices of young people themselves are impossible to ignore. At the same time, tens of thousands of parents have reached out to us all, just in the past week, calling to raise the age—we cannot let them down.

The Government have announced that Ministers will visit Australia to learn from its approach. I urge them to learn the right lessons. Australia has taken the stance of banning social media for under-16s, with a current list of 10 platforms. However, their approach demonstrates three critical flaws that Amendment 94A, as drafted, would replicate and that we must avoid.

First, there is the definition problem. The Australian legislation has had to draw explicit lines that keep services such as WhatsApp, Google Classroom and many gaming platforms out of scope, to make the ban effective. The noble Lord, Lord Nash, has rightly recognised these difficulties by giving the Secretary of State the power to exclude platforms, but that simply moves the arbitrariness from a list in legislation to ministerial discretion. What criteria would the Secretary of State use? Our approach instead puts those decisions on a transparent, risk-based footing with Ofcom and the Children’s Commissioner, rather than in one pair of hands.

Secondly, there is the cliff-edge problem. The unamended approach of Amendment 94A risks protecting children in a sterile digital environment until their 16th birthday, and then suddenly flooding them with harmful content without having developed the digital literacy to cope.

As the joint statement from 42 children’s charities warns, children aged 16 would face a dangerous cliff edge when they start to use high-risk platforms. Our amendment addresses that.

Thirdly, this proposal risks taking a Dangerous Dogs Bill approach to regulation. Just as breed-specific legislation failed because it focused on the type of dog rather than dangerous behaviour, the Australian ban focuses on categories rather than risk. Because it is tied to the specific purpose of social interaction, the Australian ban currently excludes high-risk environments such as Roblox, Discord and many AI chatbots, even though children spend a large amount of time on those platforms. An arbitrary list based on what platforms do will not deal with the core issue of harm. The Molly Rose Foundation has rightly warned that this simply risks migrating bad actors, groomers and violent groups from banned platforms to permitted ones, and we will end up playing whack-a-mole with children’s safety. Our amendment is designed precisely to address that.

Our concerns are shared by the very organisations at the forefront of child safety. This weekend, 42 charities and experts, including the Molly Rose Foundation, the NSPCC, the Internet Watch Foundation, Childline, the Breck Foundation and the Centre for Protecting Women Online, issued a joint statement warning that

“‘social media bans’ are the wrong solution”.

They warn that blanket bans risk creating a false sense of safety and call instead for risk-based minimum ages and design duties that reflect the different levels of risk on different platforms. When the family of Molly Russell, whose tragic death galvanised this entire debate, warns against blanket bans and calls for targeted regulation, we must listen. Those are the organisations that pick up the pieces every day when things go wrong online. They are clear that a simple ban may feel satisfying, but it is the wrong tool and risks a dangerous false sense of safety.

Our amendments build on the foundation provided by the noble Lord, Lord Nash, while addressing these critical flaws. They would provide ready-made answers to many of the questions the Government’s promised consultation will raise about minimum ages, age verification, addictive design features and how to ensure that platforms take responsibility for child safety. We would retain the default minimum age of 16. Crucially, that would remain the law for every platform unless and until it proves against rigorous criteria that it is safe enough to merit a lower age rating. However, and this is the crucial improvement, platforms could be granted exemptions if—and only if—they can demonstrate to Ofcom and the Children’s Commissioner that they do not present a risk of harm.

Our amendments would create film-style age ratings for platforms. Safe educational platforms could be granted exemptions with appropriate minimum ages, and the criteria are rigorous. Platforms would have to demonstrate that they meet Ofcom’s guidance on risk-based minimum ages, protect children’s rights under the UN Convention on the Rights of the Child, have considered their impact on children’s mental health, have investigated whether their design encourages addictive use and have reviewed their algorithms for content recommendation and targeted advertising. So this is not a get-out clause for tech companies; it is tied directly to whether the actual design and algorithms on their platforms are safe for children. Crucially, exemptions are subject to periodic review and, if standards slip, the exemption can be revoked.

First, this prevents the migration of harm. If Discord or a gaming lobby presents a high risk, it would not qualify for exemption. If a platform proves it is safe, it becomes accessible. We would regulate risk to the child, not the type of technology.

Secondly, it incentivises safety by design. The Australian model tells platforms to build a wall to block children; our approach requires them to make what sits behind the wall safe. This concern is shared by the Online Safety Act Network, representing 23 organisations whose focuses span child protection, suicide prevention and violence against women and girls. It warns that current implementation focuses on

“ex-post measures to reduce the … harm that has already occurred rather than upstream, content-neutral, ‘by-design’ interventions to seek to prevent it occurring in the first place”.

It explicitly calls for requiring platforms to address

“harms to children caused by addictive or compulsive design”—

precisely what our amendment mandates.

Thirdly, it is future-proof. We must prepare for a future that has already arrived—AI, chatbots and tomorrow’s technologies. Our risk-based approach allows Ofcom and the Children’s Commissioner to regulate emerging harms effectively, rather than playing catch-up with exemptions.

We should not adopt a blunt instrument that bans Wikipedia or education and helpline services by accident, drives children into high-risk gaming sites by omission or creates a dangerous cliff edge at 16 by design. We should not fall into the trap of regulating categories rather than harms, and we should not put the power to choose in one person’s hands, namely the Secretary of State.

Instead, let us build on the foundation provided by the noble Lord, Lord Nash, by empowering Ofcom and the Children’s Commissioner to implement a sophisticated world-leading system, one that protects children based on actual risk while allowing them to learn, communicate and develop digital resistance. I urge the House to support our amendments to Amendment 94A.


Ahead of AGI or Superintelligence we need binding legislation not advisory powers

We recently held a debate in the Lords prompted by warnings from the Director General of MI5 of the dangers of AI. This is an expanded version of my speech.

My Lords, the Director General of MI5 has issued a stark warning: future autonomous AI systems, operating without effective human oversight, could themselves become a major security risk. He stated it would be "reckless" to ignore AI's potential for harm. We must ask the Government directly: what specific steps are being taken to ensure we maintain control of these systems?

The urgency is underlined by events from mid-September 2025. Anthropic detected what they assessed to be the first documented large-scale cyber espionage campaign using agentic AI. AI is no longer merely generating content—it is autonomously developing plans, solving problems, and executing code to breach the security of organisations and states.

We are entering an era where AI systems chain tasks together and make decisions with minimal human input. As Yoshua Bengio, Turing Award winner and one of AI's pioneers, has warned: these systems are showing signs of self-preservation. In experiments, AI models have chosen their own preservation over human safety when faced with such choices. Bengio predicts we could see major risks from AI within five to ten years, with systems potentially capable of autonomous proliferation.

Professor Stuart Russell describes this as the "control problem"—how to maintain power over entities that will become more powerful than us. He warns we have made a fundamental error: we are building AI systems with fixed objectives, without ensuring they remain uncertain about human preferences. This creates what he calls the "King Midas problem"—systems pursuing misspecified objectives with catastrophic results. Social media algorithms already demonstrate this, learning to manipulate humans and polarise societies in pursuit of engagement metrics.

Mustafa Suleyman, co-founder of DeepMind and now Microsoft's AI CEO, has articulated what he calls the "containment problem". Unlike previous technologies, AI has an inherent tendency toward autonomy and unpredictability. Traditional containment methods will prove insufficient. Suleyman recently stated that Microsoft will walk away from any AI system that risks escaping human control, but we must ask: will competitive pressures allow such principled restraint across the industry?

The scale of AI adoption makes these questions urgent. The Institution of Engineering and Technology (IET) reports that six in ten engineering employers are already using AI, with 61% expecting it to support productivity in the next five years. Yet this rapid deployment occurs against a backdrop of profound skills deficits and understanding gaps that directly undermine safety and control.

The barrier to entry for malicious actors is collapsing. We have evidence of UK-based threat actors using generative AI to develop ransomware-as-a-service for as little as £400. Tools like WormGPT operate without ethical boundaries, allowing novice cybercriminals to create functional malware. AI-enabled social engineering grows more sophisticated—deepfake video calls have already fooled finance workers into releasing $25 million to fraudsters. Studies suggest AI can now determine which keys are being pressed on a laptop with over 90% accuracy simply by analysing typing sounds during video calls.

The IET warns that there is no ceiling on the economic harm that cyberattacks could cause. AI can expose vulnerabilities in systems, and the data that algorithms are trained with could be manipulated by adversaries, causing AI systems to make wrong decisions by design. Cyber security is not just about prevention—businesses must model their response to breaches as part of routine planning. Yet cyber security threats evolve constantly, requiring chartered experts backed by professional organisations to share best practice.

So how is the Government working with tech companies to ensure that such vulnerabilities do not become systemic?

The Government's response, while active, appears fragmented. We have established the AI Security Institute—inexplicably renamed from the AI Safety Institute, though security and safety are distinct concepts. However, as BBC Tech correspondent Zoe Kleinman noted, the sector has grown tired of voluntary codes and guidelines. I have long argued, including in my support for Lord Holmes's Artificial Intelligence (Regulation) Bill, that regulation need not be the enemy of innovation. Indeed, it can create certainty and consistency. Clear regulatory frameworks addressing algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing confidence to potential users.

The Government needs to give clear answers in five critical areas which, in my view, are crucial for building and retaining public trust in AI technology.

First, on institutional clarity and the definition of safety: The renaming of the AI Safety Institute to the AI Security Institute muddles two distinct concepts. Safety addresses preventing AI from causing unintended harm through error or misalignment. Security addresses protecting AI systems from being weaponised by adversaries. We need both, with clear mandates and regulatory teeth, not mere advisory powers.

Moreover, as the IET argues, we need a broader definition of AI safety that goes beyond physical harm. AI safety and risk assessment must encompass financial risks, societal risks, reputational damage, and risks to mental health, amongst other harms. The onus is on developers to prove their products are fit for purpose and free of unintended consequences, but further guidelines and standards on how this should be reported would support a regulatory environment that is both pro-innovation and protective against harm.

Second, on regulatory architecture: For nine years, I have co-chaired the All-Party Parliamentary Group on AI. Throughout this time, I have watched us lag behind other jurisdictions. The EU AI Act, with its risk-based framework, started to come into effect this year. South Korea has introduced an AI Basic/Framework Act and, separately, a Digital Bill of Rights setting overarching principles for digital rights and governance. Singapore has comprehensive AI governance. China regulates public-facing generative AI with inspection regimes.

Meanwhile, our government continues its "pro-innovation" approach, which risks becoming a "no-regulation" approach. We need binding legislation with a broad definition of AI and early, risk-based overarching requirements ensuring conformity with standards for proper risk management and impact assessment. As I have argued previously, this could build on existing ISO standards, which are designed to achieve international convergence, embody key principles of risk management, ethical design, testing, training, monitoring and transparency, and should be applied where appropriate.

Third, on transparency and understanding: There is profound concern over the lack of broader understanding and information surrounding AI. The IET reports that 29% of people surveyed were concerned about the lack of information around AI and about their own lack of skills and confidence to use the technology, with over a quarter saying they wished there was more information about how it works and how to use it.

Fourth, on the specific challenges of agentic AI: Bengio warns that as AI models improve at abstract reasoning and planning, the duration of the tasks they can complete doubles every seven months. He predicts that within five years, AI will reach human level for programming tasks. When systems can harvest credentials and extract data at thousands of requests per second, human oversight becomes physically impossible. The very purpose of agentic AI, as Oliver Patel of AstraZeneca has noted, is to remove the human from the loop. This fundamentally breaks our traditional safety frameworks. We need new approaches: Russell's proposal for machines that remain uncertain about human preferences, and that understand their purpose is to serve rather than to achieve fixed objectives, deserves serious consideration.
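To give a sense of what that doubling rate implies, here is a rough back-of-the-envelope sketch of my own (not Bengio's calculation): it simply assumes the seven-month doubling continues unchanged and starts, purely for illustration, from a nominal one-hour task today.

```python
# Back-of-the-envelope illustration of a quantity that doubles every seven months.
# The one-hour starting task length and the persistence of the trend are
# assumptions made for illustration only, not figures from Bengio.

def task_length_hours(months_ahead: float,
                      start_hours: float = 1.0,
                      doubling_months: float = 7.0) -> float:
    """Task duration handled after `months_ahead` months of compounding growth."""
    return start_hours * 2 ** (months_ahead / doubling_months)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{task_length_hours(years * 12):.0f} hours")

# Prints roughly 3 hours after one year, 11 after two, and 380 after five,
# which is about ten working weeks of sustained, unsupervised activity.
```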

Fifth, on skills, literacy and governance capability: The IET's research reveals an alarming picture. Among employers that expect AI to be important for them, 50% say they don't have the necessary skills. Thirty-two per cent of employers reported an AI skills gap at technician level. Most troubling of all, 46% say that senior management do not understand AI.

If nearly half of senior management across industry don't understand AI, and if our civil servants and political leaders cannot grasp the fundamentals of agentic AI—its capabilities, its limitations, and crucially, its tendency toward self-preservation—they cannot be expected to govern it effectively. As I emphasised during debates on the Data (Use and Access) Bill, we must build public trust in data sharing and AI adoption. This requires not just safeguards but genuine understanding.

The lack of AI skills is not only a safety concern; it is also hindering productivity and the ability to deliver contracts. To maximise AI's potential, we need a suite of agile training programmes, such as short courses. While progress has been made with some government initiatives, such as funded AI PhDs and skills bootcamps, these do not go far enough to address the skills gaps appearing at the chartered and technician levels.

The intellectual property question also demands urgent attention. The use of copyrighted material to train large language models without licensing has sparked litigation and unprecedented parliamentary debate. We need transparency duties on developers to ensure creative works are not ingested into generative AI models without return to rights-holders. AI has also raised questions about the ownership of the data needed to train these algorithms, and about bias and fundamental data quality in the information they produce. As AI spans every sector, coordinated regulation is imperative for consistency and clarity.

We must also address what Bengio calls the "psychosis risk": that increasingly sophisticated AI companions will lead people to believe these systems are conscious, and potentially to advocate for AI rights. As Suleyman argues, we must be clear: AI should be built for people, not to be a digital person.

There is one further dimension: sustainability. There is a unique juxtaposition between AI and sustainability: AI is a heavy consumer of energy, yet it also has huge potential to tackle climate change. Reports predict that the use of AI could help mitigate 5 to 10% of global greenhouse gas emissions by 2030. AI regulation should now look beyond the immediate risks of AI development to its much broader impact on the environment. There should be standards for the approval of new data centres in the UK, based on sustainability ratings.

The Government has committed to binding regulation for companies developing the most powerful AI models, yet progress remains slower than hoped. Notably, 60 countries—including Saudi Arabia and the UAE, but not Britain—signed the Paris AI Action Summit declaration in February this year, committing to ensuring AI is "open, inclusive, transparent, ethical, safe, secure and trustworthy". Why are we absent from such commitments?

The question now is not whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. We need principles-based rather than prescriptive regulation, emphasising transparency and accountability without stifling creativity. But let's be clear: voluntary approaches have failed. The time for binding regulation is now.

As Russell reminds us, Alan Turing answered the control question in 1951: "At some stage therefore we should have to expect the machines to take control." Russell notes that our response has been as if an alien civilisation warned us by email of its arrival in 50 years, and we replied, "Humanity is currently out of the office." We have now read the email. The question is whether we will act with the seriousness this moment demands, or whether we will allow competitive pressures and short-term thinking to override the fundamental imperative of maintaining human control over these increasingly powerful systems.


ABOUT LORD CLEMENT-JONES


Tim Clement-Jones CBE is a former Chair of the House of Lords Artificial Intelligence Select Committee and Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence. He is a Liberal Democrat Peer and the party's spokesman for Science, Innovation and Technology in the House of Lords. Tim is Chair of the Board of the Authors' Licensing and Collecting Society (ALCS) and a champion of the creative industries. He is President of Ambitious about Autism, the national autism education charity, and former Chair of the Council of Queen Mary University of London.
