I recently took up the role of Honorary Professor of Practice at Queen Mary University of London, after having served as Chair of its Governing Council for eight years. A great privilege. This is the lecture I recently gave to students in the first year of the new Applied AI BSc Programme.

When we started the House of Lords Select Committee’s special inquiry into Artificial Intelligence back in 2017, I had no idea we were standing at the threshold of one of the most extensive technological transformations in human history. We knew AI mattered. We understood it had economic potential. But I don’t think any of us truly grasped just how rapidly this technology would reshape every aspect of our lives – from the mundane decisions about what we watch on streaming services to the profound questions about who gets a mortgage, who receives medical treatment, what jobs are available and, increasingly, who lives or dies in conflict zones.

Eight years later, as I look back at our report “AI in the UK: Ready, Willing and Able?” and forward to where we are now, the question is no longer whether we need to regulate AI. The question is whether we can regulate it effectively before the costs of inaction become unsustainable.

When our Select Committee published its findings in April 2018, we proposed five fundamental principles:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

These principles were not revolutionary. They drew heavily on the OECD’s work and on liberal democratic values. What was revolutionary was the context: we were trying to articulate a comprehensive ethical framework for a technology that didn’t yet fully exist.

As I wrote in my book “Living with the Algorithm: Servant or Master?”, the central question we must answer is deceptively simple: how do we ensure that AI remains our servant and does not become our master?

But principles without enforcement mechanisms are merely aspirations. And aspirations, however noble, do not prevent algorithmic discrimination. They do not stop the deployment of facial recognition systems that misidentify people of colour at alarming rates. They do not protect workers from unfair automated hiring decisions. And they certainly do not prevent the kinds of catastrophic failures we saw with the UK Post Office Horizon scandal – a tragedy that should serve as a warning to us all about what happens when we allow complex automated systems to operate without transparency, accountability, or effective challenge mechanisms.

Too often in the UK, we legislate when the damage has already been done. When it comes to protecting citizens and their interactions with new technologies, we need to be proactive, not reactive. We cannot risk another Horizon scandal.

This is why I introduced the Public Authority Algorithmic and Automated Decision-Making Systems Bill. If “computer says no” to a benefit decision, an immigration decision, or any other significant automated determination, citizens must have the right to understand why that happened and to challenge it effectively. We need automatic logging capabilities, transparent procurement standards, and independent dispute resolution mechanisms. These are not burdens on innovation; they are the prerequisites for public trust.
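For the engineers among you, here is a minimal sketch in Python of the kind of record that automatic logging might capture. It is purely illustrative – the field names and structure are my own invention, not drawn from the Bill’s text – but it makes the point: a decision you cannot reconstruct is a decision you cannot explain or challenge.

```python
# Illustrative only: a hypothetical record of one automated determination,
# kept so that it can later be explained and challenged. Field names are
# my own invention, not the Bill's.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AutomatedDecisionRecord:
    decision_id: str        # stable reference a citizen can quote in an appeal
    system_name: str        # which system produced the decision
    system_version: str     # and which version of it
    made_at: datetime
    inputs_summary: dict    # the data the system actually relied on
    outcome: str            # e.g. "benefit_refused"
    reasons: list[str]      # human-readable factors behind the outcome
    human_reviewer: str | None = None          # who, if anyone, reviewed it
    appeal_route: str = "independent dispute resolution"


def log_decision(store: list, record: AutomatedDecisionRecord) -> None:
    """Append-only: records are never edited, only superseded by new ones."""
    store.append(record)


# Example: the record behind a "computer says no" benefit decision.
store: list[AutomatedDecisionRecord] = []
log_decision(store, AutomatedDecisionRecord(
    decision_id="UC-2025-000123",
    system_name="benefits-eligibility-model",
    system_version="4.2.1",
    made_at=datetime.now(timezone.utc),
    inputs_summary={"declared_income": 19400, "household_size": 3},
    outcome="benefit_refused",
    reasons=["declared income above eligibility threshold"],
))
print(store[0].decision_id, "->", store[0].outcome,
      "| challenge via:", store[0].appeal_route)
```

The design choice that matters here is the append-only log: if records can be silently amended, as the Horizon scandal so painfully demonstrated, the audit trail is worthless.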

AI, however, as we all know, respects no borders, so the international dimension is crucial.

In the past two years, we’ve witnessed an extraordinary flowering of regulatory approaches worldwide:

The EU AI Act represents the most comprehensive attempt yet to regulate AI through binding legislation. Its risk-based approach, with prohibited practices at one end and minimal-risk systems at the other, provides a clear framework. But it also demonstrates the challenges of regulating a rapidly evolving technology – by the time the Act was finalised, the AI landscape had already shifted dramatically with the emergence of generative AI systems like ChatGPT at the end of 2022. 
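To give you a feel for that risk-based structure, here is a deliberately simplified sketch. The four tiers follow the Act’s broad categories, but the example mappings are my own illustrative reading, not legal classifications.

```python
# A much-simplified, illustrative rendering of the EU AI Act's risk tiers.
# The example use cases reflect commonly cited illustrations, not formal
# legal determinations.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "unacceptable risk - banned outright"
    HIGH = "high risk - strict obligations before and after deployment"
    LIMITED = "limited risk - transparency duties"
    MINIMAL = "minimal risk - largely unregulated"


EXAMPLES = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.value}")
```

Notice what the generative-AI wave exposed: a general-purpose model does not sit neatly in any single tier, which is why separate rules for general-purpose AI had to be bolted on late in the drafting.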

The Council of Europe Framework Convention on AI, which opened for signature in September 2024, takes a different approach. As a framework treaty, it establishes broad commitments while leaving implementation details to national legislation. I was pleased to contribute to this process, and I believe the Convention represents an important step toward international convergence on AI governance.

The Convention’s strength lies in its inclusivity – bringing together not just the 46 Council of Europe member states but also the United States, Canada, Japan and other Council of Europe observer states.

Other jurisdictions are also moving forward. China has implemented specific regulations on algorithmic recommendation and generative AI. The United States is pursuing a deregulatory, hands-off approach at federal level, but an increasingly contentious federal-state battle is playing out: states such as Colorado and California have passed their own AI legislation, and the Trump administration is actively using litigation to push back against them.

South Korea has already enacted comprehensive AI legislation, and countries like Canada and Australia are moving fast to develop their own frameworks — the global momentum towards regulation is unmistakable.

But here’s the crucial question: are we moving toward convergence or fragmentation? And does it matter?

I would argue that it matters enormously. The tech companies developing these systems are global actors. Data flows across borders. AI systems trained in one jurisdiction are deployed in others. If we end up with radically different regulatory standards, we risk creating compliance nightmares that genuinely could stifle innovation, while simultaneously failing to protect citizens because regulatory arbitrage allows companies to exploit gaps between jurisdictions.

This brings me to what I call the innovation paradox – and here I want to challenge conventional wisdom.

We are constantly told that regulation stifles innovation. The tech industry has been remarkably successful in propagating this narrative. But I want to turn that conventional wisdom on its head.

Responsible regulation does not stifle innovation – it channels it, focuses it, and ultimately sustains it by building and maintaining public trust. Without trust in the technology being developed, there is no sustainable innovation.

Consider the pharmaceutical industry. We don’t allow drug companies to release new medicines without rigorous testing and approval processes. These regulations don’t prevent pharmaceutical innovation – they ensure that innovation is safe and effective. Why should AI be any different when it can have equally profound impacts on human health, liberty, and wellbeing?

Or consider the financial services sector. After the 2008 financial crisis, we strengthened regulations around banking and financial products. Has this killed innovation in fintech? Quite the opposite. The UK has one of the world’s most vibrant fintech sectors precisely because clear, well-designed regulation creates certainty and builds confidence.

The problem is not regulation per se – it’s poorly designed regulation that is either too prescriptive, trying to specify technical solutions that quickly become obsolete, or too vague, providing no real guidance to either developers or users.

What we need is agile, principles-based regulation that sets clear objectives – transparency, accountability, fairness, human rights protection – while allowing flexibility in how those objectives are achieved. This is the approach we advocated in our original Select Committee report, and I remain convinced it’s the right one.

From Theory to Practice: Implementation Challenges

So how do we move from principles to practice? How do we implement these frameworks effectively?

Let me identify four critical challenges and some potential solutions:

1. The Technical Challenge: Making AI Explainable and Auditable

One of the fundamental problems with AI systems, particularly deep learning models, is their opacity. Even the developers often cannot fully explain why a system made a particular decision. This is a serious problem when those decisions affect people’s lives.

We need to mandate that AI systems deployed in high-stakes contexts – criminal justice, healthcare, employment, access to public services – must be auditable and, to the extent technically possible, explainable. This means requiring comprehensive documentation, logging of decisions, and the ability to conduct meaningful algorithmic audits.

The EU AI Act’s requirements for technical documentation and transparency are a good start, but we need to go further, developing standardised audit methodologies and potentially creating independent AI audit institutions – perhaps through the evolution of the AI safety and security institutes already established.

2. The Skills Challenge: Building AI Literacy Across Society

As I noted in “Living with the Algorithm”, we face a serious digital skills gap. If citizens don’t understand even the basics of how AI systems work, how can they exercise their rights effectively? How can they challenge unfair decisions? How can they participate meaningfully in democratic debates about AI governance?

We need a massive investment in digital literacy at all levels – from primary education through to lifelong learning. But this isn’t just about teaching people to code. It’s about creating critical consumers of technology who can ask the right questions and demand accountability.

Policymakers and regulators also need to upskill dramatically. If we are to decide on regulation, we need to understand the technology. That’s why, ten years ago, I co-founded the All-Party Parliamentary Group on Artificial Intelligence. It’s still a work in progress!

3. The Enforcement Challenge: Who Guards the Guardians?

Principles and laws are meaningless without effective enforcement. But who enforces AI regulations, and how?

The UK’s approach of distributing responsibility across existing sector regulators — the ICO, Ofcom, the CMA, the FCA — without giving them new powers or adequate resources is, frankly, not fit for purpose. It creates gaps, inconsistencies, and confusion.

We need either a dedicated AI regulator or a much more coherent, properly resourced framework for coordination among existing regulators. The recent establishment of various advisory bodies — the AI Security Institute, the Central AI Risk Function, the DRCF’s AI and Digital Hub, the Centre for Data Ethics and Innovation, and the Regulatory Innovation Office — is welcome, and each has a role to play. But advisory and coordinative powers are not enough. None of them has genuine enforcement teeth. Ahead of AGI or superintelligence, we need binding legislation, not just advisory frameworks.

We also need meaningful penalties for non-compliance. The EU’s approach of fines of up to 7% of global turnover for the most serious violations provides real deterrence. Regulatory enforcement must have teeth. We should – and this may be more controversial – think about developer codes of conduct too, a kind of Hippocratic oath for digital engineers.

4. The Democratic Challenge: Keeping Pace with Technological Change

Perhaps the most fundamental challenge is the mismatch between the pace of technological change and the pace of democratic governance.

AI capabilities are advancing exponentially. Parliamentary processes move more slowly. By the time we’ve debated and passed legislation, the technology has moved on. How do we solve this problem?

I don’t have a perfect answer, but I believe part of the solution lies in creating more agile regulatory mechanisms – frameworks that can be updated through secondary legislation or regulatory guidance without requiring the full parliamentary process every time.

We also need better mechanisms for ongoing dialogue between technologists, policymakers, civil society, and the public. The OECD’s Global Parliamentary Network on AI, which I helped found, is one example of how we can facilitate these conversations across borders.

Let me highlight three areas where I believe urgent action is needed:

Governments cannot credibly regulate private sector AI use if they cannot ensure responsible AI use in the public sector. Yet we’ve seen repeated failures – from algorithmic bias in benefits systems to questionable uses of facial recognition technology.

My Public Authority Algorithmic and Automated Decision-Making Systems Bill, a Private Member’s Bill, addressed this directly. We need mandatory impact assessments, bias testing, transparency requirements, and robust appeal mechanisms for all significant public sector AI deployments. Government should be leading by example, not lagging behind.

Then – as I’ve argued consistently – we need a duty of transparency regarding the use of copyrighted material in training AI systems.

The tech companies’ position that they should be able to scrape and use any content available online without permission or compensation is untenable. It would destroy the creative industries and undermine the very concept of intellectual property. We need clear legal frameworks that balance innovation with creators’ rights, and that means mandatory transparency about training data, licensing requirements, and fair compensation mechanisms.

Finally, we must address what may be the most profound ethical question: autonomous weapons systems capable of making kill decisions without meaningful human control.

I initiated and served on the Special Inquiry Select Committee into AI in Weapons Systems, and our report “Proceed with Caution” was clear: we must establish and enforce international prohibitions on fully autonomous weapons. The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence.

This isn’t just about the distant future. These systems are being developed and deployed as we speak, in the US-Iran conflict and in Ukraine. We need urgent international action to prevent an AI arms race that could make the nuclear arms race look tame by comparison.

So where do we go from here?

First, we need international convergence on core principles. The Council of Europe Framework Convention provides a foundation. We should build on it, working toward greater harmonisation of standards while respecting different legal traditions and governance models.

Second, we need to create interoperable regulatory frameworks. This doesn’t mean identical regulations everywhere, but it does mean ensuring that compliance with one jurisdiction’s requirements substantially satisfies others. Technical standards bodies like ISO and the US National Institute of Standards and Technology (NIST) have a crucial role to play here.

Third, we need to invest in enforcement capacity. Regulatory bodies need resources, expertise, and powers adequate to the challenge. This includes funding for AI audits, algorithmic impact assessments, and compliance monitoring.

Fourth, we need to mandate transparency and explainability. High-risk AI systems should be required to provide clear documentation, enable meaningful audits, and offer explanations for their decisions that are comprehensible to affected individuals.

Fifth, we need meaningful public participation. AI governance cannot be left to tech companies and government officials. We need robust mechanisms for civil society engagement, public consultation, and democratic oversight.

Sixth, we need to get serious about education and skills. Digital literacy must become as fundamental as reading, writing, and arithmetic. And we need specialised training programmes to ensure that policymakers, regulators, judges, and other key decision-makers understand the technology they’re governing.

Looking to the Horizon: The Challenges We Cannot Afford to Defer

Before I conclude, I want to turn from the governance frameworks we are building today to the relatively new threats that are already emerging on the horizon – because if we wait until they are fully upon us, we will have left it too late.

The first is the question of agentic AI. We are already moving beyond systems that respond to prompts into systems that act autonomously – planning, executing, and adapting across complex tasks with minimal human intervention. These agentic systems will operate in our financial markets, our healthcare systems, our critical national infrastructure. The question of meaningful human oversight – who is responsible when an autonomous AI agent causes harm, and how that accountability is enforced in real time – is not a theoretical problem. It’s an immediate regulatory gap. Our existing frameworks were designed for tools, not for actors. We urgently need to revisit them.
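What might “meaningful human oversight” of an agent even look like in code? Here is one deliberately naive sketch – entirely my own illustration, not a description of any deployed system – of a gate that blocks consequential actions unless a human signs them off.

```python
# A deliberately naive illustration of one oversight pattern: an agent's
# consequential actions require explicit human approval before execution.
# Not a description of any real system.
from typing import Callable


def guarded_execute(action: str,
                    is_consequential: Callable[[str], bool],
                    human_approves: Callable[[str], bool]) -> str:
    """Execute an agent's action only if it is low-stakes or a human approves it."""
    if is_consequential(action) and not human_approves(action):
        return f"BLOCKED: {action!r} needs human approval that was not given"
    return f"EXECUTED: {action}"


# Example: any transfer is treated as consequential; the human declines.
print(guarded_execute(
    "transfer £50,000 to supplier",
    is_consequential=lambda a: "transfer" in a,
    human_approves=lambda a: False,
))
```

Even this toy exposes the hard questions: who decides what counts as consequential, how quickly must the human answer when the agent operates at machine speed, and who is liable when the gate is set too loose? Those are regulatory questions, not engineering ones.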

The second is the trajectory toward artificial general intelligence. I want to be direct about this, because there is a tendency in parliamentary debate to treat AGI as a distant science-fiction scenario rather than a live policy question. It is not. The leading AI developers themselves publish timelines measured in years, not decades. The implications – for employment, for democratic governance, for the balance of power between states and corporations, and for human autonomy itself – are of an order of magnitude that dwarfs anything we have previously legislated for. Advisory frameworks and voluntary commitments are wholly inadequate to this challenge. We need binding international architecture, we need it now, and we need it to have teeth. The Council of Europe Framework Convention is a beginning, but only a beginning. The lesson of the nuclear age is that we should have moved faster on governance than we did – and we cannot afford to repeat that mistake.

The third – and in some respects the most urgent – is autonomous weapons. I have already referred to the work of the Special Inquiry Select Committee and our report “Proceed with Caution”. But I want to be blunt: the pace of international negotiation on lethal autonomous weapons systems is not keeping up with the pace of their development and deployment. We are watching states – including allies – integrate AI into targeting and kill-chain decisions in ways that progressively erode the principle of meaningful human control. Once that principle is surrendered in practice, it will be extraordinarily difficult to recover in law. The prohibition on vesting the autonomous power to hurt, destroy or deceive human beings in artificial intelligence is not a noble aspiration – it must be a binding legal obligation. I call on the Government to set out, clearly and urgently, what steps it is taking in multilateral forums to secure that prohibition before the technological facts on the ground make the argument moot.

These three challenges – agentic systems, the AGI threshold, and autonomous weapons – share a common feature: they each represent a point at which the pace of technological development could outrun our capacity to govern it, possibly irreversibly. The window for effective action is open. It will not remain so indefinitely.

I began this speech by asking whether AI will be our servant or our master. The answer will not be determined by the technology itself. It will be determined by whether we – parliamentarians, governments, international institutions, and civil society – have the courage and the foresight to act before the moment of decision has passed. 

The decisions we make about governance frameworks in 2026 will shape the trajectory of AI development for generations to come.

I am often asked whether I’m optimistic or pessimistic about the future of AI. My answer is that I’m neither – I’m realistic.

Realism means that we must ensure we don’t sleepwalk into a future where opaque algorithms make life-changing decisions without accountability. We must not allow a handful of tech companies to accumulate unprecedented power without democratic oversight. I’m determined that we will not sacrifice fundamental rights on the altar of innovation. And I’m determined that we will not fail to grasp the extraordinary opportunities that responsible AI development offers.

But realism is not enough. We need action – coordinated, international, sustained action – to build the governance frameworks that will ensure AI serves humanity’s best values rather than our basest impulses.

What we need now is the political will to translate those values into effective practice. That requires collaboration across borders, across disciplines, and across the traditional divides between government, industry, academia, and civil society.

You, the students in this room today, will live with the consequences of the choices we make. You will inherit the AI governance frameworks we build – or fail to build. I hope that when you look back in 2040 or 2050, you’ll say that we in the 2020s got it right – that we built systems that protected what matters while enabling innovation that genuinely serves the common good.
