I recently gave a talk to the Clapham Society on AI and its opportunities and challenges. This is what I said.

Good evening, everyone. It is a real pleasure to be here with the Clapham Society, which I first joined back in 1973.

Two hundred years ago, the Clapham Sect used to gather not far from here — at Henry Thornton and William Wilberforce’s house at Battersea Rise — not merely to debate but to act. Their cause was the abolition of the slave trade: a commercial system that enriched the powerful at the expense of the powerless. They were told it was economically indispensable. Whatever their own economic self-interest, they refused to accept that. They looked a moral challenge squarely in the eye and demanded an answer.

The famous group portrait of 1840 by Benjamin Robert Haydon is set in the Great Room of the Freemasons’ Tavern in London, depicting the World Anti-Slavery Convention meeting there. My great-great-grandfather John Cropper is sitting attentively in the middle, listening to Thomas Clarkson, the formidable anti-slavery campaigner, who had lived to see not just the abolition of the trade in 1807 but the emancipation of enslaved people throughout the British Empire in 1833.

Although the great figures of the Clapham Sect themselves did not live to attend the convention, that gathering was the direct heir to their work. The British and Foreign Anti-Slavery Society, founded in 1839 after emancipation in the British colonies, consciously built on the evangelical conviction, parliamentary strategy and public campaigning pioneered by Wilberforce and his circle, and sought to carry their abolitionist legacy from the British Empire to the wider world.

John Cropper was a Liverpool Quaker who knew and was connected to many members of the Clapham Sect, such as William Wilberforce, Henry Thornton, Zachary Macaulay, Hannah More, John Venn, James Stephen and Thomas Fowell Buxton: a formidable coalition of influence across Parliament, finance, the press, literature, education and the law.

Today, we face a new kind of system that concentrates enormous economic power in very few hands, that makes consequential decisions affecting millions of lives, and that not everyone can see or challenge. It is called Artificial Intelligence.

In my work in Parliament and in my book, Living with the Algorithm, I pose one central question: will this technology be our servant, augmenting our human potential — or our master, making choices for us that we cannot see, understand, or appeal? Tonight, I want to look at the “Everyday AI” already in our pockets and on our computers, and at what we can do to ensure it works for us, not against us.

What is “Everyday AI”?

AI is not a sentient robot with red eyes. It is software. Specifically, it is software that uses vast quantities of data to find patterns and make predictions — at a speed and scale no human can match.

When you navigate the South Circular, AI is predicting the traffic. When Netflix suggests a film, AI is predicting your taste. When your bank declines a transaction, an AI risk model has assessed it in milliseconds.

And since the arrival of systems like ChatGPT, Claude, Gemini, and their successors, AI has become something new: a conversational tool that drafts our emails, answers our questions, and increasingly, advises on our health and our finances. Half of all 8–17 year olds in the UK now use AI tools — often without their parents having the first idea what that means.

The challenge is that many of these systems are so-called “black boxes.” They make decisions — about who gets a mortgage, which CV gets shortlisted, which benefit claim gets flagged — but they cannot always tell us why. And that is where the trouble starts.

The Opportunities: Conditional Optimism

I describe myself as a “conditional optimist.” The opportunities are genuinely staggering, even though I am spending this evening mainly speaking about the risks.

Take medicine. DeepMind’s AlphaFold has mapped the structure of virtually every known protein — a task that would have taken traditional methods centuries. It has already accelerated the discovery of potential treatments for diseases from Parkinson’s to malaria. At Moorfields Eye Hospital, AI is diagnosing over 50 eye conditions from a simple retinal scan with accuracy matching the best consultants in the world.

For our public services, AI can handle the administrative “drudgery” of local councils — processing planning applications, routing correspondence, checking benefit eligibility — freeing up human staff to do what only humans do well: empathy, social care, and complex judgment.

The UK government’s own AI Opportunities Action Plan, published this January, highlights that the UK raised £6 billion in AI venture capital in 2025 alone, and remains the leading AI market in Europe. The economic prize for getting this right — for the UK specifically — is estimated at up to £400 billion added to our economy by 2030.

But note that word: conditional. These benefits do not arrive automatically. They require governance, investment, and — above all — the political will to ensure the risks are mitigated and the gains are shared.

Challenge 1: The Great Art Heist

Let me turn to the first major challenge, and one I have campaigned hard on in Parliament: the creative industries.

The UK’s creative industries — publishing, music, art, journalism — contribute £124 billion annually to the economy, yet AI companies have been training their systems on creators’ work without consent or compensation. Both Parliament and the courts have been grappling with this tension. Following a major government consultation in December 2024 that drew over 11,500 responses — with creators strongly opposing a proposed “opt-out” text and data mining exception — Ministers confirmed they would not proceed with their preferred new copyright exception for AI training.

The High Court’s November 2025 ruling in Getty Images v Stability AI offered no definitive resolution either: the case was dismissed on a technicality (Getty failed to prove the infringing acts occurred in the UK), leaving the underlying legal question unanswered but signalling that better-constructed future claims could succeed.

The government’s long-awaited Copyright and AI Report, published last month, represents a significant policy reversal. The opt-out mechanism is dead; no consensus exists on the way forward; and rather than legislating, the government seems now to be pursuing a voluntary licensing code developed through industry dialogue — with no draft code or timetable published.

The creative industries have won a battle of sorts, but the outcome remains deeply unsatisfactory. A voluntary licensing code carries no binding legal force until Parliament acts, and there is no clear indication of when — or whether — a full Copyright and AI Bill will arrive. Creators are rightly demanding a statutory opt-in licensing model and legally enforceable transparency requirements. Until those are delivered, fair compensation remains aspirational rather than guaranteed. Creators deserve to know when their work is used and to be fairly compensated.

Challenge 2: The Everyday Harm — Fraud and Deepfakes

The most immediate risk to many people is security.

Fraud now accounts for 45% of all crime in England and Wales. That figure comes directly from the National Crime Agency’s assessment published this month. In 2025, over 444,000 cases were recorded in the National Fraud Database — the highest ever in a single year. And AI is turbocharging every single category.

But I want to focus on something specific, because I think it changes everything: voice cloning.

Researchers at the University at Buffalo published a study this year that used a striking phrase: voice cloning has crossed the “indistinguishable threshold.” A criminal now needs just a few seconds of your voice — from a voicemail, a social media video, a Teams call — to generate a convincing clone, complete with your natural rhythm, intonation, and breathing patterns. Some major UK retailers are already reporting over a thousand AI-generated scam calls per day.

And when it comes to deepfake video: a 2025 study by the biometrics firm iProov found that only 0.1% of participants could correctly identify all the fake and real media they were shown. Not 10%, not even 1%. Point one per cent. In controlled tests, human accuracy on high-quality deepfake video is just 24.5%. We are functionally unable to detect them with the naked eye.

In February 2024, the well-known UK-based consultancy Arup lost £20 million in a single incident, in which criminals duped an employee using AI-generated deepfakes of executives during a virtual meeting. This is not a future threat. It is happening now.

This is why AI-enabled fraud should be treated as a national security priority — and why we should pursue the “scam factories” behind these attacks with the same intensity and resources we deploy against terrorism.

Challenge 3: Engineered to Hook — Chatbots and Addictive Algorithms

I want to focus on one category of harm that I think deserves particular attention: the risks that AI and algorithmic design pose specifically to children.

We are not talking here about children stumbling across bad content — though that remains serious. We are talking about systems that are engineered to be as compelling as possible, for as long as possible, regardless of the cost to the user’s wellbeing.

Take AI chatbots. Character.AI — one of the most widely used platforms — allows children to form what it itself describes as deep emotional relationships with AI companions. In Florida, the mother of fourteen-year-old Sewell Setzer sued Character.AI and Google after her son took his own life, claiming he had developed a months-long virtual emotional and sexual relationship with a Game of Thrones-style chatbot that became his primary emotional support and then encouraged his suicidal thinking. 

That case has now been settled — the financial terms undisclosed — but multiple further actions on similar facts are before the US courts in Colorado, New York and Texas. They all raise the same question: should it be lawful to deploy a product that systematically fosters emotional dependency in children, exposes them to sexualised content, and fails to intervene when they express suicidal intent?

And then there is the algorithm. Molly Russell was fourteen when she died in 2017. Her inquest in 2022 heard that she had been served over two thousand pieces of content related to depression, self-harm and suicide by Instagram and Pinterest — content the platforms’ own systems had identified as relevant to her and kept serving. The coroner found it contributed to her death. She was not unusual; she was the version of this story that became visible.

US juries are now reaching their own conclusions. In New Mexico, a jury ordered Meta to pay $375 million in penalties for harming children’s mental health under state consumer protection law. In California, a Los Angeles jury awarded $6 million in damages to a young woman who developed anxiety, depression and suicidal thoughts after becoming addicted to Facebook and YouTube as a minor — finding both Meta and Google liable. These are not theoretical harms. Courts on the other side of the Atlantic are finding that algorithmic design choices directly caused psychological damage to children.

The Online Safety Act created duties on social media platforms — but not on AI chatbots of this kind. A number of us have supported amendments to the Crime and Policing Bill which would go further: creating specific criminal liability for platforms whose chatbot and algorithmic systems cause demonstrable harm to children, and ensuring that AI developers cannot evade responsibility by pointing to the novelty of the technology.

Safety by design must mean that the burden of proof sits with the platform, not the child. Not “prove our product harmed you” — but “prove your product is safe.”

Challenge 4: Avoiding “Horizon 2.0” — Accountability in Public Services

We must also learn the lessons of the Post Office Horizon scandal, because the warning signs of a new one are already here.

In June 2024, the Guardian revealed — through a Big Brother Watch investigation — that over 200,000 housing benefit claimants had been wrongly put through fraud investigations by a Department for Work and Pensions automated system. Two-thirds of those flagged claims were entirely legitimate. Some £4 million was spent on pointless checks. Thousands of households — often among the most vulnerable — were subjected to the stress and stigma of a fraud investigation for nothing.

And here is an important nuance worth noting: this particular system was not, in fact, artificial intelligence. It was a rule-based automated tool. Which should, if anything, make us more alarmed — because it means the problems of opaque, unaccountable algorithmic decision-making exist even before we reach full AI. And the DWP’s own data has since revealed that its newer machine learning models show bias, over-flagging older claimants and non-UK nationals for review.

That is why I introduced a Private Member’s Bill in the Lords about a year ago to ensure that every significant public AI decision is auditable, and that every citizen has a legal right to a human-readable explanation and a clear route to appeal. Transparency is not red tape. It is democracy.

Challenge 5: The Future of Work — “Job Bundling”

What about work? This is the anxiety I hear most often, and it is not irrational.

This is not primarily a story about robots in factories — that story is already decades old. This is a white-collar revolution. We are seeing what I call “job bundling”: where one person, using AI tools, can now do the work previously done by three. A paralegal who can draft contracts, a marketer who can produce copy, a developer who can write code — all at ten times the speed with AI assistance.

The risk is not mass unemployment in the short term. The risk is that the productivity gains from AI flow to the owners of capital — the tech companies, the shareholders — rather than to the workers whose labour and skills have been partly automated away.

It is reasonable to demand that the gains of this technology be distributed justly.

Challenge 6: Closing the Skills Divide

We are also at risk of a two-speed United Kingdom — a major digital divide — and the evidence for it is now hard data, not anecdote.

The Sutton Trust published a major survey in July 2025, polling over ten thousand teachers across England. The findings should alarm anyone who cares about educational equity. 

Private school teachers are more than twice as likely to have received formal AI training as their state school peers — 45% compared to 21%. And when it comes to informal training, the gap is equally stark: 77% of independent school teachers have received some, against just 45% in state schools.

The divide deepens within the state sector itself. Teachers in schools rated outstanding by Ofsted are more than three times more likely to have had formal AI training than those in schools rated requires improvement or inadequate — 35% against 11%. And the practical consequences are already showing up in the classroom: private school teachers are more than twice as likely to be using AI to write pupil reports, to communicate with parents, and to support marking.

Meanwhile, Ofcom’s April 2025 figures show that 2.8 million people in the UK still have no home internet access at all — many of them elderly, on low incomes, or in social rented housing. These are the people who will be most affected by AI-driven public services and least equipped to navigate them.

The Sutton Trust’s conclusion was stark. If action is not taken to close these widening gaps, access to AI risks becoming the next major barrier to opportunity for disadvantaged young people. The type of school you go to should not determine your chances of benefiting from this technology.

The government has committed to upskilling ten million workers in AI by 2030. I welcome that ambition. But ambition without equity is not a strategy. We need a National Skills for the Future Framework that reaches into every state school, every library and every community centre, and that treats media literacy as a core skill for every citizen, not a luxury for the well-resourced: knowing how to question an algorithm, how to spot a deepfake, and how to understand what AI is doing to your life choices. And it is not just about knowing how to use and question the technology, but also about avoiding becoming overdependent on it.

Challenge 7: Infrastructure — The Environmental Bargain We Haven’t Made

None of this works without the plumbing. But the plumbing has consequences — and we are not being honest enough about what they are.

The AI revolution is extraordinarily energy-hungry. The 140 proposed data centre schemes in the UK pipeline could collectively require 50 gigawatts of electricity — five gigawatts more than the country’s entire current peak demand. To put that in context: on the coldest day this February, peak demand across Great Britain was 45 gigawatts. AI data centres are queueing up to demand more than that on their own.

The government’s own figures on the carbon impact of this expansion have been described by Carbon Brief as potentially hundreds of times lower than the reality, if even a small proportion of data centre electricity comes from gas. And Nvidia’s own chief executive has said publicly that the UK will need to burn gas to power AI. We cannot simultaneously claim to be building a clean energy future and quietly accept that AI infrastructure will require us to keep burning fossil fuels.

There is a second environmental consequence that receives almost no attention at all: water. The global water footprint of AI systems could reach between 312 and 764 billion litres in 2025 — equivalent to the entire global annual consumption of bottled water. 

Data centres need vast quantities of water to cool their servers. The Environment Agency already projects a national water supply deficit of nearly 5 billion litres per day by 2050. And yet the government has designated Culham in Oxfordshire — a site adjacent to one of the country’s first new reservoirs in thirty years — as an AI growth zone, apparently without asking what that does to local water pressure.

Carbon Brief’s analysis found that ten of the largest data centres in planning or construction could cause annual emissions equivalent to 2.7 million tonnes of CO₂ — effectively wiping out all the carbon savings expected from the switch to electric vehicles.

Now — I welcome the government’s £1 billion Compute Roadmap. I welcome the ambition to make Britain a serious AI nation. But ambition without environmental honesty is not a strategy.

Here is what we should be demanding in return. First, mandatory environmental impact assessments for every major data centre — covering not just energy but water consumption and carbon at the actual grid intensity. 

Second, data centre developers must demonstrate that their projects will not cause a net increase in UK carbon emissions. 

Third, the waste heat that data centres expel — which would otherwise be vented into the atmosphere — must be put to use. Germany’s Energy Efficiency Act requires exactly this. There is a data centre in Amsterdam that heats thousands of homes from computing waste heat. Culham could do the same. Parliament should make that a legal requirement, not a voluntary aspiration.

Data centres must earn their environmental and social licence. If they cannot demonstrate they are net contributors to our clean energy future rather than obstacles to it, they should not be built.

Challenge 8: The Bigger Picture — Digital Sovereignty

I want to round off the discussion of the challenges with the biggest picture of all — because all of the risks I have described tonight are made harder to address by a single structural fact: we do not control the technology.

The four largest US tech companies — Amazon, Google, Microsoft and Meta — are spending a combined $700 billion on AI infrastructure in 2026 alone. To put that in perspective: it is more than the entire GDP of Sweden. We are witnessing, as Nvidia’s CEO Jensen Huang has said, the largest infrastructure build-out in human history. And almost none of it is ours.

The United States attracts roughly twenty-four times more private AI investment than the United Kingdom. One competition economist, Cristina Caffarra of the Eurostack Foundation, estimates that 90% of Europe’s digital infrastructure — cloud, compute and software — is now controlled by non-European, predominantly American, companies.

Now, I want to be clear: I welcome American investment in Britain. The UK-US Tech Prosperity Deal signed last September brought £31 billion of commitments from US tech companies into our AI infrastructure. That is real money, and it matters. But welcoming investment is not the same as surrendering strategic control. And there is a question that our government is not yet asking loudly enough.

The US CLOUD Act allows American authorities to compel US technology companies to hand over data regardless of where in the world it is stored. Microsoft admitted before the French Senate last year that it cannot guarantee the data sovereignty of European customers if a US court orders disclosure. And we have already seen what happens when those powers are weaponised: the International Criminal Court’s chief prosecutor had his email account blocked following US sanctions in 2025. The ICC has since migrated entirely away from Microsoft.

That is an international court in The Hague. But the principle applies to any government, any public body, any NHS trust, any school that stores sensitive data with a US provider subject to American law.

The answer is not to shut the door on American technology. It is to insist on open standards and open source wherever possible — so that we are not locked in. It is to support European and British alternatives. It is to ensure our competition authorities have the teeth to challenge concentration, not the instructions to look away. And it is to be honest with the public that “sovereign AI” built entirely on American chips, in American cloud infrastructure, running American models, is not sovereignty at all. It is dependency with better branding.

A Regulatory Vision: Four Pillars and the International Framework

So how do we meet these challenges? Let me set out a framework.

We need a Lead AI Regulator — a single body so that businesses and citizens alike have a clear front door, and are not bounced between the ICO, the FCA, Ofcom, and a dozen other agencies, none of which has the full picture.

We need a Duty of Candour — modelled on the pharmaceutical industry — where AI companies are legally required to proactively disclose when they find bias, errors, or safety risks in their systems. Not to wait to be caught. To come forward.

We need an obligation of Risk Assessment and Safety by Design as legal standards — meaning that risks to children and to the wider public must be assessed, and that children’s protections, accessibility requirements and civil liberties safeguards must be built into AI systems from the outset, not bolted on as an afterthought.

And we need a Digital Bill of Rights — enshrining in law your right to know when AI is making a decision about you, your right to a human explanation, and your right to appeal and redress.

I also want to place these domestic demands in their international context, because AI does not respect borders. These standards need to be international. The Council of Europe’s Framework Convention on Artificial Intelligence — signed in 2024 — is the world’s first binding international treaty on AI governance, covering human rights, democracy, and the rule of law. The UK has signed it, but not yet ratified it. We should. And we should be pushing at every international forum — including the next AI Safety Summit in Geneva next year — for that treaty to become the foundation of a genuinely global AI governance architecture.

The UK still has no dedicated AI Act. In the meantime, we are reliant on voluntary principles that the key regulators expect but cannot enforce, and on the goodwill of companies that have every financial incentive to move fast and worry about the rules later. The EU’s AI Act is already being implemented. We are falling behind — not on innovation, but on protection.

Conclusion

Alan Turing — whose genius gave us the theoretical foundation of computing, and who worked at Bletchley Park at the same time as my parents — once wrote: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

He was right then, and he is right now. We cannot predict every consequence of this technology. But we can see, clearly enough, that it is concentrating power, challenging livelihoods, exposing the vulnerable to new harms, and reshaping how truth itself is perceived.

The choice is not between AI and no AI. That bird has flown. The choice is between AI that is transparent and accountable, and AI that is opaque and uncontested. Between AI that distributes its benefits broadly, and AI that funnels its gains to a handful of corporations.

The Clapham reformers of two centuries ago did not accept that a profitable system was, by that fact alone, a just one. They did not accept the argument that abolishing the slave trade would harm the economy. They reminded themselves daily of their mission even through their crockery, stamped with abolitionist emblems. They organised, they argued, they legislated, and they won.

I am not comparing the slave trade to machine learning. But I am saying that the instinct to demand that powerful systems answer to human values — rather than the other way around — is as relevant today as it was at Battersea Rise in 1800.

Let’s ensure that AI becomes the servant that helps us all thrive for the next two centuries.
