"The conventional wisdom that regulation stifles innovation needs to be turned on its head"

I recently wrote a piece for Chamber UK on Regulation and Innovation. An attempt to dispel a pervasive myth!

"Regulation as an Enabler: The Case for Responsible AI"

The conventional wisdom that regulation stifles innovation needs to be turned on its head in the artificial intelligence sector. AI technology now impacts a vast array of sectors, including healthcare, finance and transport, influencing decisions that can drastically affect individuals and communities.

As AI systems become more powerful and pervasive, there is growing recognition that appropriate regulation isn't just about restricting harmful practices – it's actually key to driving widespread adoption and sustainable growth.

There is a clear parallel with the early automotive industry. In the early 20th century, the introduction of safety standards, driver licensing, and traffic rules didn't kill the car industry – it enabled its explosive growth by building public confidence and creating predictable conditions for manufacturers. Similarly, thoughtful AI regulation can create the trust and stability needed for the technology to flourish.

In the current landscape, many potential AI adopters – from healthcare providers to financial institutions – are hesitating not because of technological limitations, but due to uncertainties about liability, ethical boundaries, and public acceptance. Clear regulatory frameworks that address issues like algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing clarity, building confidence, and generating public trust.

The inherent risks of AI, such as biases in decision-making, invasion of privacy, and potential job displacement, make it clear that unregulated AI can lead to significant ethical and societal repercussions. The call for regulation is about ensuring that AI systems operate within boundaries that protect human values and rights. Without this framework, the potential misuse or unintended consequences of AI could lead to public distrust and resistance to the technology.

Far from being a brake on progress, well-designed regulation can be a catalyst for AI adoption and innovation. Regulation can drive innovation in the right direction. Just as environmental regulations spurred the development of cleaner technologies, AI regulations focusing on explainability and fairness could push developers to create more sophisticated and responsible systems. 

Regulation can stimulate innovation by defining the rules of the game, giving companies the confidence to invest in AI technologies without fear of future legal repercussions for unforeseen misuses. In markets where regulation is clear and aligned with global standards, companies can also find easier paths to expand internationally. This not only drives growth but also fosters international collaboration on global AI standards, leading to broader advancements in the field.

The question isn't whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. Get this right, and regulation becomes a powerful enabler of AI's future growth.

The EU's AI Act and the UK's proposed pro-innovation approach to AI regulation are contrasting and imperfect attempts to strike this balance. 

Regulation should be principles-based rather than overly prescriptive, allowing for technological evolution while maintaining focus on outcomes. It should emphasize transparency and accountability without stifling creativity. And critically, it must be developed with input from both technical experts and broader stakeholders to ensure it's both practical and effective.

The journey towards responsible AI is not solely about technological achievement but also about how these technologies are integrated into society through thoughtful regulation. By establishing a robust regulatory framework, we can ensure that AI serves the public interest while also fostering an environment where trust and innovation lead to technological growth. The goal is to create a future where AI's potential is fully realized in a way that is beneficial and safe for all. This is not just a possibility but a necessity as we step into an increasingly AI-driven world.

There is growing recognition of this in the UK's recently published AI Opportunities Action Plan. In particular, the language around regulation assisting innovation is refreshing:

‘Well-designed and implemented regulation, alongside effective assurance tools, can fuel fast, wide and safe development and adoption of AI.’

We must now make that a reality!

 


Getting the use of AI in hiring right

I recently took part in the launch of the National Hiring Strategy by the newly formed Association of RecTech Providers. This is what I said.

Good afternoon. It is a real privilege to welcome 200 of the UK's leading HR, talent acquisition, and hiring professionals to the Terrace Pavilion for the launch of the first National Hiring Strategy.

This is an important moment: a collective commitment to make UK hiring fundamentally faster, fairer, and safer. The current state of UK hiring presents both an economic and a social challenge. On average, hiring takes almost 50 days. The outcomes speak for themselves: roughly 40 percent of new hires quit their jobs within three months. This inefficiency costs our economy millions annually and represents human potential squandered.

The National Hiring Strategy aims to tackle these issues head-on. The RecTech Roadmap—a key component of this strategy—provides the strategic blueprint for deploying technology to revolutionise how we hire. I welcome the formation of the Association of RecTech Providers. They will steer this change, set industry standards, and help ensure the UK gains global leadership.

Artificial Intelligence sits at the heart of this transformation. AI offers extraordinary opportunities. The efficiency gains are real and significant. AI tools can handle high-volume, repetitive tasks—screening CVs, scheduling interviews, processing applications—dramatically reducing time-to-hire. Some examples show reductions of up to 70 percent. That's remarkable.

But speed alone isn't the goal. What excites me most is AI's potential to drive genuine inclusion. Technology, and AI in particular, can enable greater labour market participation for those currently shut out: carers, people with disabilities or chronic illnesses, neurodiverse individuals, older workers, parents. AI can help us match people based on skills, passions, and circumstances—not just past work experience. It can help us create a world where work fits around people's lives, rather than the other way around. That's the vision I want to see realised.

However—and this is crucial—AI also has the potential to make hiring more problematic, more unfair, and more unsafe if we're not careful. We must build robust ethical guardrails around these powerful tools.

I've always believed that AI has to be our servant, not our master.

Fairness must be a key goal. The core ethical challenge is that machine learning models trained on historical data often reproduce past patterns of opportunity and disadvantage. They can penalise groups previously excluded—candidates with career gaps, for instance, or underrepresented minorities.

This isn't hypothetical. We've seen AI systems reduce the representation of ethnic minorities and women in hiring pipelines. Under the Equality Act 2010, individuals are legally protected from discrimination caused by automated AI tools.

But we need proactive auditing. Regular, detailed bias assessments to identify, monitor, and mitigate unintended discrimination. These audits aren't bureaucratic box-ticking—they're critical checks and balances for ethical use.

While we don't yet have specific AI legislation in the UK, recruiters must comply with existing data protection laws. Data minimisation is essential. Audits have raised concerns when AI tools scrape far more information than needed from job networking sites, sometimes without candidates' knowledge.

Transparency matters profoundly. Recruiters must inform candidates when AI tools are used, explaining what data is processed, the logic behind predictions, and how data is used for training. If this processing isn't clearly communicated, it becomes "invisible"—and likely breaches GDPR fairness principles. Explanations should be simple and understandable, not buried in technical jargon.

And the human touch should always be maintained. AI should complement, not replace, the human aspects of recruitment.

This should remain the case despite the more nuanced provisions introduced under the Data (Use and Access) Act. The strict prohibition on significant decisions based solely on automated processing now applies only to decisions involving special category data (e.g. health, racial origin, genetics, biometrics), but of course recruiters will hold some of that kind of information.

But even where personal data is not “special category”, organisations must provide specific safeguards:

  • Individuals must be informed about the automated decision, have the right to make representations and to contest the decision, and human intervention must be offered upon request or as required by law.

Judgment, empathy, and responsible innovation should remain at the core of how we attract and engage talent.

Businesses also need clear policies for accountability and redress. Individuals must be able to contest decisions where their rights have been violated.

The launch of this National Hiring Strategy provides a critical opportunity. The firms that succeed will be those that blend machine efficiency with human empathy. They will recognise that technology is a means to an end: creating opportunities, unlocking potential, and building a labour market that works for everyone.

In doing so, they will ensure we reach a faster, fairer, and safer UK labour market—without taking destructive shortcuts that leave people behind.

We stand at a moment of genuine possibility. The technology exists. The expertise is in this room. The Strategy provides the framework. Let's embrace AI's potential with optimism, but at the end of the day, hiring isn't about algorithms or efficiency metrics—it's about people, their livelihoods, and their futures. Thank you.


Media literacy has never been more urgent.

This is a speech I recently gave at the launch of the Digital Poverty Alliance's new report on Media Literacy in Education.

With continuing Government efforts to move public services online, alongside expanding AI usage, media literacy has never been more urgent. Debates surrounding media literacy typically focus on visible risks rather than the deeper structural issues that determine who cannot understand, interpret and contribute in the digital age.

I have the honour of serving as an Officer of the Digital Inclusion All-Party Parliamentary Group (APPG), and previously as Treasurer of the predecessor Data Poverty APPG. This issue—ensuring digital opportunities are universal—is crucial for many of us in Parliament.

The Urgent Case for Digital Inclusion

As many of us in this room know, digital inclusion is not an end in itself; it is a vital route to better education, to employment, to improved healthcare, and a key means of social connection. Beyond the social benefits, there are also huge economic benefits of achieving a fully digitally capable society. Research suggests that increased digital inclusion could result in a £13.7 billion uplift to UK GDP.

Yet, while the UK aspires to global digital leadership, digital exclusion remains a serious societal problem. The figures are sobering:

  • 1.7 million households have no mobile or broadband internet at home.
  • Up to a million people have cut back or cancelled internet packages in the past year as cost of living challenges bite.
  • Around 2.4 million people are unable to complete a single basic digital task required to get online.
  • Over 5 million employed adults cannot complete essential digital work tasks.
  • Basic digital skills are set to become the UK’s largest skills gap by 2030.
  • And four in ten households with children do not meet the Minimum Digital Living Standard (MDLS).

The consequence of this is that millions of people are prevented from living a full, active, and productive life, which is bad for them and bad for the country. This is why the core mission of the DPA—to tackle device, data, and skills poverty—is so essential.

Media Literacy: Addressing the Structural Roots of Exclusion

Today, the DPA is launching its Media Literacy Report, and its timing could not be more important. With continuing Government efforts to move public services online, coupled with the rapid expansion of AI usage, media literacy has never been more urgent.

The DPA report wisely moves beyond focusing solely on the visible risks of the internet, such as misinformation, and addresses the deeper structural issues. Media literacy is inextricably linked to digital exclusion: the ability to understand, interpret, and contribute in the digital age is determined by access to devices, socio-economic background, and school policy. Among the report's recommendations:

  • School phone bans must be accompanied by extensive media literacy education, which is iterated and revisited at multiple stages. 
  • Teachers must receive meaningful training on media literacy. 
  • Parents must be supported by receiving accessible guidance on media literacy.
  • Schools should consider peer-to-peer learning opportunities. 
  • Tech companies must disclose information on how recommendation algorithms function and select content.
  • AI-generated information must be labelled as such.
  • Verification ticks should be removed from accounts spreading misinformation, especially related to health. 

We risk consigning people to a world of second-class services if we do not provide the foundational skills required to engage critically, confidently, and safely with the online world. Crucially, the DPA’s work keeps those with lived experience of digital exclusion at the heart of the analysis, providing real-life stories from parents, teachers, and young people.

Tackling Data Poverty: The Affordability Challenge

One of the most immediate and significant barriers to inclusion is affordability—what we often refer to as data poverty. Two million households in the UK are currently struggling to pay for broadband, and Age UK hears from older people who find essential services—like checking bus times or dealing with benefits—impossible due to lack of digital confidence and the pressure to manage costs.

The current system relies heavily on broadband social tariffs as the primary fix, but uptake has been sluggish, with only 5% of eligible customers having signed up to date. This is due to confusion, low awareness, cost, and complexity.

The solution requires radical, coordinated action:

  1. Standardisation: All operators should offer social tariffs to an agreed industry standard on speed, price, and terms. This will make it easier for customers to compare and take advantage of these vital packages.
  2. Simplified Access: We welcome the work being done by the DWP to develop a consent model that uses Application Programming Interfaces (APIs) to allow internet service providers (ISPs) to confirm a customer's eligibility for benefits, such as Universal Credit. This drastically simplifies the application journey for the customer.
  3. Sustainable Funding: My colleagues in Parliament and I have been keen to explore innovative funding methods. One strong proposal is to reduce VAT on broadband social tariffs to align with other essential goods (at least 5% or 0%). It has been calculated that reinvesting the tax receipts received from VAT on all broadband into a social fund could provide an estimated £2.1 billion per year to provide all 6.8 million UK households receiving means-tested benefits with equitable access.

Creating a Systemic, Rights-Based Approach

If we are to achieve a 'Digital Britain by 2030', we need more than fragmented, short-term solutions. We need a systematic, rights-based approach.

First, we must demand better data and universal standards. The current definition of digital inclusion, based on whether someone has accessed the internet in the past three months, is completely outdated. We should replace this outdated ONS definition with a more holistic and up-to-date approach, such as the Minimum Digital Living Standard (MDLS). This gives the entire sector a common goal.

Second, we must formally recognize internet access as an essential utility. We should think of the internet as critical infrastructure, like the water or power system. This would ensure better consumer protection.

Third, we must embed offline and physical alternatives. While encouraging digital use, we must ensure that people who cannot or do not wish to get online—such as many older people who prefer interacting with services like banking in person—have adequate, easy-to-access, non-digital options. Essential services like telephone helplines for government services, such as HMRC, and the national broadcast TV signal must be protected so the digital divide is not widened further.

Fourth, we must empower local and community infrastructure. Tackling exclusion must happen on the ground. We need to boost digital inclusion hubs and support place-based initiatives. This involves increasing the capacity and use of libraries and community centres as digital support centres and providing free Wi-Fi in public spaces.

We should stand ready to support the Government's Digital Inclusion Action Plan, but we must continue to emphasize the need for a longer-term strategy that has central oversight, such as a dedicated cross-government unit, to ensure that every policy decision is digitally inclusive from the outset.

The commitment demonstrated by the Digital Poverty Alliance today, and by everyone in this room, proves that we can and must eliminate digital poverty and ensure no one is left behind.


Lord C-J at Writers' All Party Group Annual Reception: We need a duty of transparency

This evening’s winter reception of the All Party Writers Group takes place at an important moment for authors and writers. It is therefore especially appropriate that we are joined by Dr Clementine Collett, whose important new report, The Impact of Generative AI on the Novel, sets out in clear terms the risks and opportunities that generative technologies present for long‑form fiction.


Her work reinforces a message that writers, agents and publishers have been giving Parliament for some time: that generative AI must develop within a framework that protects the integrity of original work, the viability of creative careers and the trust of readers.

The starting point is the change of direction we have already seen. Following an overwhelming response to its consultation on copyright and AI, the Government has stepped back from its previously stated preferred option of a broad copyright exception for text and data mining. That proposal was regarded by authors and rightsholders as unfair, unworkable and difficult to reconcile with international norms. The decision to move away from it has been widely welcomed across the creative industries, and rightly so. 

The Government has recognised that copyright-protected creative content is not an input to be taken for granted, but an asset that needs clear, enforceable rights.

From the outset, rightsholders have been remarkably consistent in what they ask for. They want a regime based on transparency, licensing and choice. Transparency, so that authors know whether and how their works have been used in training AI systems and so that their rights can be enforced.

Licensing, so that companies seeking to build powerful models on the back of that material do so on lawful terms. 

And choice, so that individual creators can decide whether their work is used in this way and, if so, on what conditions and at what price. Dr Collett’s report underlines just how crucial these principles are for novelists, whose livelihoods depend on the distinctiveness of their voice and the long‑term value of their backlist.

In parliamentary terms, much of this came into sharp relief during the passage of the Data (Use and Access) Bill, where many of us in both Houses were proud to support the amendments brought forward by Baroness Beeban Kidron. Those amendments reflected the concerns of musicians, authors, journalists and visual artists that their works were already being used to train AI models without their permission and without remuneration. They made it clear that they were not anti‑technology, but that innovation had to be grounded in respect for copyright and for the moral and economic rights that underpin creative work.

Those concerns are echoed in Dr Collett’s analysis of how unlicensed training can erode both the economic prospects of writers and the incentive to invest in new writing.

Since then, there have been some modest but important advances. We have seen a renewed emphasis from the Secretaries of State at DSIT and DCMS on supporting UK creatives and the wider creative industries. Preliminary and then technical working groups on copyright and AI have been convened, alongside new engagement forums on intellectual property for Members of both Houses. 

The Creative Industries Sector Vision, and the announcement of a Freelance Champion, signal an acceptance that the conditions for freelance writers must be improved if we want a sustainable pipeline of new work. For novelists in particular, whose incomes are often precarious and long‑term, the policy choices made now in relation to AI will have lasting consequences.

In parallel, the international context has moved rapidly. High‑profile litigation in the United States has demonstrated that the boundary between lawful and unlawful use of works for training models is real and enforceable, with significant financial consequences when it is crossed. The European Union has moved ahead with guidelines for general‑purpose AI under the AI Act, designed in part to give practical effect to copyright‑related provisions. 

Courts in the EU have begun to address the legality of training on protected works such as song lyrics. Other jurisdictions, including Australia and South Korea, are clarifying that there will be no blanket copyright exemptions for AI training and are setting out how AI‑generated material will sit within their systems.

Here in Parliament, the Lords Communications and Digital Committee has continued its inquiry into AI and copyright, taking evidence from leading legal experts. A number of points have emerged strongly from that work: that transparency is indispensable if rightsholders are to know when their works have been used; that purely voluntary undertakings in codes of practice are not sufficient; and that there is, as yet, no compelling evidence that the existing UK text and data mining exception in section 29A of the Copyright, Designs and Patents Act should be widened. Dr Collett’s report adds a vital literary dimension to this picture, examining how the widespread deployment of generative AI could reshape the market for fiction, the expectations of readers and the discovery of new voices if left unchecked.

Against this backdrop, the position of writers’ organisations has been clear. The Authors’ Licensing and Collecting Society, reflecting a survey of over 13,500 members, is firmly opposed to any new copyright exception that would weaken protection for works used in AI training. We argue instead for licensing models that give technology companies access to content while preserving genuine choice and control for creators. 

Working with the Copyright Licensing Agency, ALCS is developing a specific licence for training generative AI systems, initially focused on professional, academic and business content, where licensing is already well embedded and where small language models can be tested in a controlled way. There is strong concern that, if left entirely to market forces, generative systems could flood the ecosystem with derivative material, making it harder for original voices to be heard and weakening the economic foundation of literary careers. That is why many in the sector argue that fiction should be approached with particular care, and that any licensing solutions must be robust, transparent and genuinely optional.

Looking ahead, several priorities suggest themselves. First, Government should make clear that it will not re‑open the door to a broad copyright exception for AI training.

Secondly, it should actively support the development of practical licensing routes, including those being taken forward by ALCS and CLA, while recognising that fiction may require distinct treatment. 

Thirdly, transparency and record‑keeping obligations on AI developers should be strengthened so that rightsholders, including novelists, can identify when and how their works have been used.

Finally, Parliament should continue to scrutinise this area closely, informed by expert work such as Dr Collett’s and by the lived experience of writers represented through this All-Party Group.

The past year has shown what can be achieved when writers organise and speak with a united voice. The Government has shifted away from its most problematic proposals and has begun to engage more seriously with the issues.

But for authors the destination has not yet been reached. The aim must be a settlement in which creators can be confident that their rights will be respected, that they have meaningful choice over the use of their work in AI, and that they can share fairly in any new value created. This evening’s discussion, and the findings of Dr Collett’s report, are an important contribution to that task. This work must continue, but I believe we are now on the right path: one of balance, respect and creative confidence for and by our creators in the digital age.

When the Government launched its consultation on copyright and artificial intelligence, there was a strong sense of unease among creators and rights holders. Their response was overwhelming—and decisive. The Government quite rightly moved away from its original proposal to introduce a copyright exception for text and data mining. That so‑called “preferred option” would have been unfair to authors, unworkable in practice, and at odds with our international obligations under the Berne Convention and other frameworks.

Instead, the clear message from those who create—from writers and composers to journalists, artists and performers—was that transparency and choice must guide the use of their work in the age of AI. As many rightsholders stressed, a transparent licensing system would allow AI companies to gain legitimate access to creative material while ensuring that authors can exercise control and be remunerated fairly for the use of their works.

My Lords, I was proud to support the amendment tabled by Baroness Kidron to the Data (Use and Access) Bill earlier this year. I said then, and I say again tonight, that musicians, authors, journalists and visual artists have every right to be concerned about their work being used in the training of AI models without permission, transparency or remuneration. These creators are not seeking to halt innovation, but to ensure that innovation is lawful, ethical and sustainable. Only through trust and fairness can we achieve that balance.

Since then, welcome signs have emerged. A change of personnel at DSIT and DCMS has brought, I hope, a more vigorous commitment to our creative sectors. New engagement groups and technical working groups have been established, including those for Members of both Houses, to consider the complex interactions between copyright and AI. I commend that spirit of dialogue—but now we need to see outcomes, not just ongoing discussion.

The Government’s Creative Industries Sector Vision also set out ambitions that we can all share. The appointment of a Freelance Champion, long advocated by many of us, is especially welcome. We await news of how the role will evolve, but it is another step toward strengthening the creative economy that underpins so much of Britain’s soft power and international reputation.

Developments abroad remind us that we are not alone in this debate. In the United States, the landmark settlement between Anthropic and authors earlier this year, worth 1.5 billion dollars, demonstrates that AI companies cannot simply appropriate creative works without consequence. In Europe, the Commission is advancing guidelines for general-purpose AI under the AI Act, including measures to enforce copyright obligations. The Regional Court of Munich has likewise held OpenAI to account for reproducing protected lyrics in training outputs. Elsewhere, Australia has confirmed that it will not introduce a copyright exception, while South Korea moves ahead with its own AI-copyright framework.

Internationally, then, we see convergence around one simple idea: respect for copyright remains essential to confidence in creative and AI innovation alike.

That position is reflected clearly in the work of the Authors’ Licensing and Collecting Society. Its recent survey of over 13,000 members shows a striking consensus: loosening copyright rules would be counterproductive and unfair to writers. By contrast, licensing systems give creators choice and control, enabling them to decide whether—and on what terms—their works are used.

The ALCS, together with the Copyright Licensing Agency, is now developing an innovative licensing model for the training of generative AI systems. This is a pragmatic and forward-looking approach, beginning in areas like professional, academic and business publishing where licensing frameworks already operate successfully. It builds on systems that work, rather than tearing them down.

Of course, literary fiction is more sensitive territory, and the ALCS is right to proceed carefully. But experimentation in smaller, more structured datasets can be a valuable way to test principles and develop viable models. As the courts continue to deal with questions of historic misuse, this prospective route offers a constructive path forward.

The creative industries are united. They do not seek privilege, only parity. They oppose new copyright exceptions that would undermine markets and livelihoods, but they also recognise the need to make licensing work—so that ministers and AI companies cannot claim it is impractical or inadequate.

Much progress has been made. The Government is, at last, listening. But until creators can be confident that their rights will be respected, this campaign cannot rest.

Our writers, musicians and artists have given us immense cultural wealth. Ensuring that they share fairly in the new wealth created by artificial intelligence is not an impediment to innovation—it is the foundation of it. This work must continue, and I believe we are now on the right path: one of balance, respect and creative confidence in a digital age.


Liberal Democrats Say No to Compulsory Digital ID

The Government recently announced the introduction of a mandatory requirement for Digital Identity to be used in right to work checks.

The introduction of compulsory digital ID represents another fundamental error by this Government. The Liberal Democrats strongly oppose this proposal, which is a serious threat to privacy, civil liberties and social inclusion. We thank the Minister for bringing the Secretary of State’s Statement to this House today, but my disappointment and opposition to the Government’s plan more than mirrors that of my honourable friend Victoria Collins in the Commons yesterday.

The core issue here is not technology but freedom. The Government insist this scheme is non-compulsory, yet concurrently confirm that it will be mandatory for right-to-work checks by the end of this Parliament. This is mandatory digital ID in all but name, especially for working-age people. As my party leader Sir Ed Davey has stated, we cannot and will not support a system where citizens are forced to hand over private data simply to participate in everyday life. This is state overreach, plain and simple.

The Secretary of State quoted Finland and the ability of parents to register for daycare, but I think the Secretary of State needs to do a bit more research. That is a voluntary scheme, not a compulsory one. We have already seen the clear danger of mission creep. My honourable friend Victoria Collins rightly warned that the mere discussion of extending this scheme to 13 to 16 year-olds is sinister, unnecessary and a clear step towards state overreach. Where does this stop?

The Secretary of State sought to frame this as merely a digital key to unlock better services. This dangerously conflates genuine and desirable public service reform with a highly intrusive mandate. First, the claim that this will deliver fairness and security by tackling illegal migration is nothing more than a multibillion-pound gimmick. The Secretary of State suggests that it will deter illegal working, yet, as my colleagues have pointed out, rogue employers who operate cash-in-hand schemes will not look at ID on a phone. Mandatory digital ID for British citizens will not stop illegal migrants working in the black economy.

Secondly, the claim that the system will be free is disingenuous. As my honourable friend Max Wilkinson, our home affairs spokesman, demanded, the Government must come clean on the costs and publish a full impact assessment. Estimates suggest that creating this system will cost between £1 billion and £2 billion, with annual running costs of £100 million. This is completely the wrong priority at a time when public services are crumbling.

Thirdly, the promise of inclusion rings hollow. This mandatory system risks entrenching discrimination against the millions of vulnerable people, such as older people and those on low incomes, who lack foundational digital skills, a smartphone or internet access.

The greatest concern is the Government’s insistence on building this mandatory system on GOV.UK’s One Login, a platform with security failures that have been repeatedly and publicly criticised, including in my own correspondence and meetings with government. There are significant concerns about One Login’s security. The Government claim that One Login adheres to the highest security standards. Despite this commitment, as of late 2024 and early 2025, the system was still not fully compliant. A GovAssure assessment found that One Login was meeting only about 21 of the 39 required outcomes in the NCSC cyber assessment framework. The GOV.UK One Login programme has told me that it is committed to achieving full compliance with the cyber assessment framework by 21 March 2026, yet officials have informed me that 500 services across 87 departments are already currently in scope for the One Login project.

There are other criticisms that I could make, but essentially the foundations of the digital ID scheme are extremely unsafe, to say the least. To press ahead with a mandatory digital ID system, described as a honeypot for hackers, based on a platform exhibiting such systemic vulnerabilities is not only reckless but risks catastrophic data breaches, identity theft and mass impersonation fraud. Concentrating the data of the entire population fundamentally concentrates the risk.

The Secretary of State must listen to the millions of citizens who have signed the petition against this policy. We on these Benches urge the Government to scrap this costly, intrusive and technologically unreliable scheme and instead focus on delivering voluntary, privacy-preserving digital public services that earn the public’s trust rather than demanding compliance.


A Defence of the Online Safety Act: Protecting Children While Ensuring Effective Implementation

Some recent commentary on the Online Safety Act seems to treat child protection online as an abstract policy preference. The evidence reveals something far more urgent. By age 11, 27% of children have already been exposed to pornography, with the average age of first exposure at just 13. Twitter (X) alone accounts for 41% of children’s exposure to pornography, followed by dedicated sites at 37%.

The consequences are profound and measurable. Research shows that 79% of 18-21 year olds have seen content involving sexual violence before turning 18, and young people aged 16-21 are now more likely to assume that girls expect or enjoy physical aggression during sex. Close to half (47%) of all respondents aged 18-21 had experienced a violent sex act, with girls the most impacted.

When we know that children’s accounts on TikTok are shown harmful content every 39 seconds, with suicide content appearing within 2.6 minutes and eating disorder content within 8 minutes, the question is not whether we should act, but how we can act most effectively.

This is not “micromanaging” people’s rights - this is responding to a public health emergency that is reshaping an entire generation’s understanding of relationships, consent, and self-worth.

Abstract arguments about civil liberties need to be set against the voices of bereaved families who fought for the Online Safety Act. The parents of Molly Russell, Frankie Thomas, Olly Stephens, Archie Battersbee, Breck Bednar, and twenty other children who died following exposure to harmful online content did not campaign for theoretical freedoms - they campaigned for their children’s right to life itself.

These families faced years of stonewalling from tech companies who refused to provide basic information about the content their children had viewed before their deaths. The Act now requires platforms to support coroner investigations and provide clear processes for bereaved families to obtain answers. This is not authoritarianism - it is basic accountability.

To repeal the Online Safety Act would indeed be a massive own-goal and a win for Elon Musk and the other tech giants who care nothing for our children’s safety. The protections of the Act were too hard won, and are simply too important, to turn our back on.

The conflation of regulating pornographic content with censoring legitimate information is neither accurate nor helpful, but we must remain vigilant against mission creep. As Victoria Collins MP and I have highlighted in our recent letter to the Secretary of State, supporting the Act’s core mission does not mean we should ignore legitimate concerns about its implementation. Parliament must retain its vital role in scrutinising how this legislation is being rolled out to ensure it achieves its intended purpose without unintended consequences.

There are significant issues emerging that Parliament must address:

Age Assurance Challenges: The concern that children may use VPNs to sidestep age verification systems is real, though it should not invalidate the protection provided to the majority who do not circumvent these measures. We need robust oversight to ensure age assurance measures are both effective and proportionate.

Overreach in Content Moderation: The age-gating of political content and categorisation of educational resources like Wikipedia represents a concerning drift from the Act’s original intent. The legislation was designed to protect children from harmful content, not to restrict access to legitimate political discourse or educational materials. Wikimedia’s legal challenge regarding its categorisation illustrates this. While Wikipedia’s concerns about volunteer safety and editorial integrity are legitimate, their challenge does not oppose the Online Safety Act as a whole, but rather seeks clarity about how its unique structure should be treated under the regulations.

Protecting Vulnerable Communities: When important forums dealing with LGBTQ+ rights, sexual health, or other sensitive support topics are inappropriately age-gated, we risk cutting off vital lifelines for young people who need them most. This contradicts the Act’s protective purpose.

Privacy and Data Protection: While the Act contains explicit privacy safeguards, ongoing vigilance is needed to ensure age assurance systems truly operate on privacy-preserving principles with robust data minimisation and security measures.

The solution to these implementation challenges is not repeal, but proper parliamentary oversight. Parliament needs the opportunity to review the Act’s implementation through post-legislative scrutiny and the chance to examine whether Ofcom is interpreting the legislation in line with its original intent and whether further legislative refinements may be necessary.

A cross-party committee of both Houses would provide the essential scrutiny needed to ensure the Act fulfils its central aim of keeping children safe online without unintended consequences.

Fundamentally and importantly, this approach aligns with core liberal principles. John Stuart Mill’s harm principle explicitly recognises that individual liberty must be constrained when it causes harm to others.



The Great AI Copyright Battle: Why Transparency Matters

We have recently had unprecedented "ping pong" between the Lords and Commons on whether to incorporate provisions in the Data (Use and Access) Bill (now Act) that would require AI developers to be transparent about the copyrighted content used to train their models. Liberal Democrats in both the Lords and Commons consistently supported this change throughout. This is why.

As Co-chair of the All-Party Parliamentary Group on Artificial Intelligence and now Chair of the Authors' Licensing and Collecting Society (ALCS), I find myself at the epicentre of one of the most significant intellectual property debates of our time.

The UK's creative industries are economic powerhouses, contributing £126 billion annually while safeguarding our cultural identity. Yet they face an existential challenge: the wholesale scraping of copyrighted works from the web to train AI systems without permission or payment.

The statistics are stark. A recent ALCS survey revealed that 77% of writers don't even know if their work has been used to train AI systems. Meanwhile, 91% believe their permission should be required, and 96% want compensation for use of their work. This isn't anti-technology sentiment – it's about basic fairness.

From Sir Paul McCartney to Sir Elton John, hundreds of prominent creatives have demanded action. They're not opposing AI innovation; many already use AI in their work. They simply want their intellectual property rights respected so they can continue making a living.

December's government consultation on Copyright and AI proposed a text and data mining exception with an opt-out mechanism for rights holders. This approach fundamentally misunderstands the problem. It places the burden on creators to police the internet to protect their own works – an impossible task given the scale and opacity of AI training.

The creative sector's opposition has been overwhelming. The proposed framework would undermine existing copyright law while making enforcement practically impossible. As I've consistently argued, existing copyright law is sufficient if properly enforced – what we need is mandatory transparency.

During debates on the Data (Use and Access) Bill, Baroness Kidron championed amendments requiring AI developers to disclose copyrighted material used in training data. These amendments received consistent support from all Liberal Democrat MPs and peers, crossbench peers, and many Labour and Conservative backbench peers.

The government's resistance has been remarkable. Despite inserting a requirement for an economic impact assessment and a report on copyright use in AI development, they have opposed mandatory transparency, leading to an unprecedented "ping-pong" debate between the Houses.

Transparency isn't about stifling innovation – it's about enabling legitimate licensing. How can creators license their work if they don't know who's using it? How can fair compensation mechanisms develop without basic disclosure of what's being used?

The current system allows AI companies to harvest vast quantities of creative content while claiming ignorance about specific sources. This creates a fundamental power imbalance where billion-dollar tech companies benefit from the work of individual creators who remain entirely in the dark.

The solution isn't complex. Mandatory transparency requirements would enable:

  • Creators to understand how their work is being used
  • Development of fair licensing mechanisms
  • Preservation of existing copyright frameworks
  • Continued AI innovation within legal boundaries

This debate reflects deeper concerns about AI innovation coming at the expense of human creativity. The government talks about supporting creative industries while simultaneously weakening the intellectual property protections that sustain them.

We need policies that recognize the symbiotic relationship between human creativity and technological advancement. AI systems trained on creative works should provide some return to those creators, just as streaming platforms pay royalties for music usage.

The government has so far failed to rise to this challenge. But with continued parliamentary pressure and overwhelming creative sector support, we can still achieve a framework that protects both innovation and creativity.

The question isn't whether AI will transform creative industries – it's whether that transformation will be fair, transparent, and sustainable for the human creators whose work makes it all possible.



Less Talk More Action on Scale Ups


AI regulation does not stifle innovation

This is a piece I wrote recently, published in the New Statesman's Spotlight on Technology Supplement.

Achieving balance between human potential and machines isn’t just possible – it’s necessary.

Ever since co-founding the All-Party Parliamentary Group on AI nine years ago, still ably administered by the Big Innovation Centre, I’ve been deeply involved in debating and advising on the implications of artificial intelligence. My optimism about AI’s potential remains strong – from helping identify new Parkinson’s treatments to DeepMind’s protein structure predictions that could transform drug discovery and personalised medicine.

Yet this technology is unlike anything we’ve seen before. It’s potentially more autonomous, with greater impact on human creativity and employment, and more opaque in its decision-making processes.

The conventional wisdom that regulation stifles innovation needs turning on its head. As AI becomes more powerful and pervasive, appropriate regulation isn’t just about restricting harmful practices – it’s key to driving widespread adoption and sustainable growth. Many potential AI adopters are hesitating not due to technological limitations but to uncertainties about liability, ethical boundaries and public acceptance. Clear regulatory frameworks addressing algorithmic bias, data privacy and decision transparency can actually accelerate adoption by providing clarity and confidence.

Different jurisdictions are adopting varied approaches. The European Union’s AI Act, with its risk-based framework, started coming into effect this year. Singapore has established comprehensive AI governance through its model AI governance framework. Even China regulates public-facing generative AI models with fairly heavy inspection regimes.

The UK’s approach has been more cautious. The previous government held the AI Safety Summit at Bletchley Park and established the AI Safety Institute (now inexplicably renamed the AI Security Institute), but with no regulatory teeth. The current government has committed to binding regulation for companies developing the most powerful AI models, though progress remains slower than hoped. Notably, 60 countries – including Saudi Arabia and the UAE, but not Britain or the US – signed the Paris AI Action Summit declaration in February this year, committing to ensuring AI is “open, inclusive, transparent, ethical, safe, secure and trustworthy”.

Several critical issues demand urgent attention.

Intellectual property: the use of copyrighted material for training large language models without licensing has sparked substantial litigation and, in the UK, unprecedented parliamentary debate. Governments need to act decisively to ensure creative works aren’t ingested into generative AI models without return to rights-holders, with transparency duties on developers.

Digital citizenship: we must equip citizens for the AI age, ensuring they understand how their data is used and AI’s ethical implications. Beyond the UAE, Finland and Estonia, few governments are taking this seriously enough.

International convergence: despite differing regulatory regimes, we need developers to collaborate and commercialise innovations globally while ensuring consumer trust in common international ethical and safety standards.

Well-designed regulation can be a catalyst for AI adoption and innovation. Just as environmental regulations spurred cleaner technologies, AI regulations focusing on explainability and fairness could push developers toward more sophisticated, responsible systems.

The question isn’t whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. We need principles-based rather than overly prescriptive regulation, assessing risk and emphasising transparency and accountability without stifling creativity.

Achieving the balance between human potential and machine innovation isn’t just possible – it’s necessary as we step into an increasingly AI-driven world. That’s what we must make a reality.


The new Council of Europe AI Framework Convention demonstrates that the principles of the European Convention on Human Rights are still highly relevant after 75 years

I recently took part in a debate in the House of Lords celebrating the 75th Anniversary of the European Convention on Human Rights. This is what I said about the importance of the new Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law and how it links to the principles and objectives of the original Convention.

The new Council of Europe framework convention on artificial intelligence is another living demonstration that the principles of the European Convention on Human Rights are still highly relevant after 75 years. The AI framework convention does not seek to replace the ECHR but rather to extend its protections into the digital age. AI now permeates our daily lives, making decisions that affect our privacy, liberty and dignity. These systems can perpetuate discrimination, erode privacy and challenge fundamental freedoms in a way that demands new protections. Opened for signature in September 2024, the AI framework convention is the first legally binding international instrument on AI, setting clear standards for risk assessment and impact management throughout the life cycle of AI systems.

The framework convention’s principles require transparency and oversight, ensuring that AI systems cannot operate as black boxes, making decisions that affect people’s lives without accountability. They require parties to adopt specific measures for identifying, assessing, preventing and mitigating risks posed by AI systems, and a specific human rights impact assessment has been developed. The convention recognises that, in the age of AI, protecting human rights requires more than individual remedies; it demands accessible and effective remedies for human rights violations resulting from AI systems. Rather than merely reacting to harms after they occur, the framework mandates consideration of society-scale effects before AI systems are deployed. I only wish, having heard what its director had to say on Tuesday, that our AI Security Institute had the same approach.

The framework convention was achieved through unprecedented consultation, involving not just the 46 member states of the Council of Europe but observer states, civil society, academia and industry representatives. Beyond European nations, it has attracted signatories including Israel, the United States—albeit under the previous Administration—and, most recently, Japan and Canada, in February this year.

However, a framework is only as good as its implementation, and this brings me to my central question to the Government. What is their plan? The Ministry of Justice’s Report to the Joint Committee on Human Rights on the Government’s Response to Human Rights Judgments 2023-24 said:

“Once the treaty is ratified and brought into effect in the UK, existing laws and measures to safeguard human rights from the risks of AI will be enhanced”.

How will existing UK law be amended to align with the framework convention? What additional resources and powers will be given to our regulatory bodies? What mechanisms will be put in place to monitor and assess the impact of AI systems on vulnerable groups? The convention offers us tools to prevent such problems, but only if we implement it effectively.

As we mark 75 years of the European Convention on Human Rights, we should remember that its enduring strength lies not just in its principles but in how nations have given those principles practical effect through domestic law and institutions. The UK has long been a leader in both human rights and technological innovation. I urge the Government to present a comprehensive implementation plan for the AI framework convention. Our response to this challenge will determine whether the digital age enhances or erodes our fundamental rights. I do not need to emphasise the immense power of big tech currently. We need to see this as a time when we are rising to meet new challenges with the same vision and commitment that created the European Convention on Human Rights, 75 years ago.