AI & Technology Archives - Lord Clement-Jones | Speaker AI and Creative Industries
https://www.lordclementjones.org/category/work/artificial-intelligence/

Facial Recognition: Ending the Wild West of Police Surveillance
https://www.lordclementjones.org/2026/03/30/facial-recognition-ending-the-wild-west-of-police-surveillance/
Mon, 30 Mar 2026 18:46:35 +0000

The post Facial Recognition: Ending the Wild West of Police Surveillance appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

For too long, the deployment of Live Facial Recognition (LFR) technology in our streets has been treated by the Government as simply a “useful tool” to be managed by administrative guidance and toothless codes of practice. But as I have argued many times in the Lords, we are currently in a wild west of mass surveillance. We are witnessing the rapid rollout of a technology that can scan every face in a crowd and compare them in real time against a watchlist, effectively treating every citizen as a suspect in a permanent digital lineup.

The Liberal Democrats have been clear: this is not just another camera on a street corner. It is a fundamental shift in the relationship between the individual and the state. During the passage of the Crime and Policing Bill, as we have done before, we moved to place vital statutory guardrails around this technology to ensure that innovation does not come at the expense of the rule of law.

The Legislative Void and the Crime and Policing Bill

The Government often points to a “comprehensive legal framework” of common law and data protection acts to justify LFR. Yet, as the Court of Appeal found in the Bridges case, the current framework contains “fundamental deficiencies” that leave far too much discretion to individual police officers.

As we pointed out in our response to the Government’s recent consultation on the legal framework for using facial recognition in law enforcement, the use of live facial recognition represents a seismic shift in the relationship between the individual and the State. It fundamentally alters the balance of power, turning our public spaces into permanent biometric lineups and treating every citizen as a potential suspect. Such a move should never have been made without an explicit democratic mandate and primary legislation authorised by Parliament.

To remedy this, the Liberal Democrats recently tabled an amendment to the Crime and Policing Bill. This amendment sought to prohibit the use of LFR unless specific, stringent conditions are met—most importantly, requiring prior judicial authorisation for any deployment. As I said, if the police require a warrant to enter a home, they should surely require judicial approval to invade the privacy of thousands of citizens in a public square.

Furthermore, through another amendment, we also fought to protect the privacy of the millions of law-abiding citizens who never expected their driving licence to become a biometric faceprint for a national police database.

The Right to Protest and the Macdonald Review

In our recent submission to the Macdonald Review of public order offences, Liberal Democrat peers reiterated our concerns about the chilling effect that unregulated surveillance has on our democracy. We said that protest is not a threat to be managed; it is a right to be “respected, protected, and facilitated”.

Anonymity is a cornerstone of this right. Whether it is diaspora activists fearing transnational repression or survivors of domestic violence who simply wish to go about their lives unmonitored, the ability to disappear into a crowd is a basic safeguard of a free society. By layering unregulated facial scanning over new restrictions on face coverings, the Government is effectively shrinking the space for lawful dissent.

The Case for a Statutory Framework

We are often told that the technology is accurate and free of bias. Yet independent audits tell a different story. Studies consistently show that facial recognition algorithms perform unevenly across different demographics, often misidentifying members of ethnic minorities. This can lead to a fundamental violation of human rights and the erosion of community trust.
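
To make concrete what “performing unevenly” means, the core calculation in such audits, a per-group false match rate comparison, can be sketched as follows (the counts here are invented for illustration, not drawn from any real audit):

```python
# Illustrative sketch of a per-demographic audit of a face matching
# system. The counts below are hypothetical; real audits run millions
# of comparisons per demographic group.

def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """Fraction of non-matching face pairs wrongly reported as a match."""
    return false_matches / impostor_comparisons

# Hypothetical audit counts per demographic group.
audit = {
    "group_a": false_match_rate(12, 100_000),
    "group_b": false_match_rate(96, 100_000),
}

# The disparity ratio is what "performing unevenly" means in practice:
# here group_b is falsely matched eight times as often as group_a.
disparity = audit["group_b"] / audit["group_a"]
```

A disparity of this kind, invisible in a single headline accuracy figure, is precisely what erodes community trust when the system is deployed at scale.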

As we also said in our response to the consultation, relying on broad common law policing powers to justify mass biometric surveillance is a legal fiction. This is not ‘traditional CCTV’; it is an automated, industrial-scale search of our very identities. In a democracy, suspicion should always precede surveillance, yet this technology inverts that vital principle, forcing innocent citizens to effectively prove their identity to a machine.

The Government needs to protect our traditional liberties. Relying on the College of Policing’s non-binding guidance is not good enough.

We need a root-and-branch review of our surveillance laws and a comprehensive legislative framework. We must ensure that LFR is a targeted tool used under the rule of law—not a blanket surveillance net that chills our right to speak, to assemble, and to move freely in our own country.

Digital ID plans flawed
https://www.lordclementjones.org/2026/03/28/digital-id-plans-flawed/
Sat, 28 Mar 2026 19:45:05 +0000

The post Digital ID plans flawed appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

We are faced with yet another Government plan for Digital ID. This is my response to the recent government statement on the occasion of launching its new consultation. Still no answers despite all the serious flaws in the previous schemes! I will continue to press for answers!

The Chief Secretary told the Commons on Tuesday that he was continuing the proud Labour tradition of building public services for the many. He invoked the NHS, the Open University and Sure Start. It was a stirring lineage. But there is history he omitted: Verify, which wasted over £220 million; GOV.UK One Login, for which the Cabinet Office sought up to £400 million; and now this national digital ID, which the OBR estimates will cost £1.8 billion over three years. This, indeed, is Verify 4.0.

The Government have confirmed that possession of a digital identity will not be compulsory. The Liberal Democrats opposed mandatory digital ID at every turn, and I am pleased to say that the Government have listened. My honourable friend Lisa Smart MP pressed the Chief Secretary directly in the Commons last week and received his wholehearted assurance. He continued to claim that using digital ID will be entirely optional. So, I ask the Minister in this House, will the voluntary character of this scheme be placed in the Bill the Government intend to bring forward later this year? How can we trust any Government on how personal data, once surrendered to the state, will actually be used?

Earlier this month, this House considered an amendment to the Crime and Policing Bill, tabled by my noble friend Lady Doocey, which sought to prohibit police from using DVLA driving licence images for facial recognition searches. The DVLA holds over 55 million records. Every driver provided their photograph for one purpose only: to hold a driving licence. They did not consent to their image becoming part of what Liberty has rightly described as the largest biometric database for police access ever created in the United Kingdom. Yet the noble Lord, Lord Hanson of Flint, the Home Office Minister, did not accept the amendment and confirmed at all stages that the express purpose of Clause 138 of the Bill is precisely to permit facial recognition searches of DVLA records. So, within a single parliamentary week, we have a Government launching a national digital identity consultation on the basis of assurances about data use, while declining to place in statute the very protections that would make such assurances meaningful. The question is not whether the Government intend that digital ID will become an instrument of surveillance, but whether a future Government could.

The Chief Secretary said that he wants security at least as strong as online banking. That is the right aspiration, but, as mentioned by the noble Earl, GOV.UK One Login, the umbrella infrastructure for this system, reportedly satisfied only 21 out of 39 security outcomes required by the National Cyber Security Centre. Whistleblowers have described vulnerabilities that allow unauthorised access to sensitive functions without triggering any alert. How can the Government justify launching a national identity solution on a platform that fails to meet nearly half the NCSC’s mandatory security outcomes?

In part two of the Fisher review, published in January, Jonathan Fisher KC warned that AI-driven impersonation at scale is now a defining crime of our age and that we must implement upstream measures—stopping fraud at the point of identity issuance, not reacting after a digital identity has been stolen. If our foundations currently satisfy barely half the required security outcomes, how do we deliver the upstream protection Mr Fisher demands?

Will the Government commission and publish a full NCSC security audit before a single citizen is enrolled? Will they introduce an offence of digital identity theft that they, along with the previous Conservative Government, have so far resisted? The consultation proposes a universal unique identifier to link citizens across every departmental silo. Without strict legal guardrails, that identifier is the functional infrastructure of the national identity register that Parliament voted to abolish in 2011, and it is precisely the centralised data honeypot that hostile state actors would most wish to compromise. We need not mere parliamentary approval for services added to the app, but a statutory prohibition on bulk data matching across departments.

In summary, I put four questions to the Minister.

First, will the voluntary character of this scheme be placed in primary legislation, with an explicit prohibition on any future mandatory requirement without a further Act of Parliament? In that context, and as the noble Earl has mentioned, how mindful are the Government of the possible consequences for digital inclusion?

Secondly, the Home Office’s assurances on DVLA facial recognition mirrored word for word those given by the previous Government. Before the Minister can confirm the opposite, what statutory purpose limitation on digital identity data will be placed beyond the reach of secondary legislation?

Thirdly, will the Government provide a statutory guarantee that the universal unique identifier cannot be used for bulk data matching across departments without primary legislation?

Finally, will the Government publish an independently verified cost-benefit analysis before the Bill is introduced, and explain why £1.8 billion would not deliver greater public benefit directed to the NHS and front-line policing, for instance?

The Chief Secretary asked what it is that critics fear from a public consultation. We do not fear the consultation; what we fear is a fourth cycle of the same expensive failure, grand ambitions and insecure foundations—a creeping identifier that becomes the digital spine of state surveillance. But what we fear above all is a system whose data acquires uses never publicly intended by its creators. We have just watched that happen in this very Chamber with the DVLA database of images. We on these Benches will support voluntary, secure, properly costed modernisation of public services, but we will not accept warm ministerial words as a substitute for hard legislative limits. We need a state that is not merely digital by choice today but constitutionally prohibited from becoming compulsory tomorrow. On the evidence of this and last week’s proceedings, we are very far from that guarantee.

Media Literacy Action needed
https://www.lordclementjones.org/2026/03/28/media-literacy-action-needed/
Sat, 28 Mar 2026 10:35:58 +0000

The post Media Literacy Action needed appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

I spoke briefly in a recent debate on Media Literacy, the report of the House of Lords Select Committee on Communications and Digital: https://publications.parliament.uk/pa/ld5901/ldselect/ldcomm/163/163.pdf.

The same day, the Government published its Media Literacy Action Plan: https://www.gov.uk/government/publications/a-safe-informed-digital-nation/a-safe-informed-digital-nation

I then took part in a debate on the Curriculum and Assessment Review by Professor Becky Francis CBE, Building a world-class curriculum: https://assets.publishing.service.gov.uk/media/690b96bbc22e4ed8b051854d/Curriculum_and_Assessment_Review_final_report_-_Building_a_world-class_curriculum_for_all.pdf and the Government response to it: https://assets.publishing.service.gov.uk/media/690b2a4a14b040dfe82922ea/Government_response_to_the_Curriculum_and_Assessment_Review.pdf

It is far from clear that we are acting fast or thoroughly enough to enable what is called AI fluency in our children.

We are faced with a landscape of algorithmic manipulation, proliferating deepfakes, a torrent of disinformation and, of course, online fraud. The committee is right: a failure to prioritise media literacy is a threat not just to individuals but to social cohesion and democracy itself. In the era of generative AI, media literacy is, as the committee makes clear, a requirement for modern citizenship. Our current approach is indeed fragmented and under-resourced and lacks strategic vision. Ofcom’s own evidence, highlighted by the committee, shows little improvement in core skills over six years. In that context, the Government’s claim in their response that they and Ofcom have met the mounting scale of the challenge is simply not credible.

I welcome the completed curriculum and assessment review, which commits the Government to publishing revised national curriculum content by spring 2027. However, as the committee recommends, media literacy should be embedded across the curriculum and teachers should receive sustained support. This should arrive earlier.

As the committee urges, we need media literacy to be prioritised across government, not bolted on at the margins. I very much hope that the Minister will be able to assure us that one of the key tests of the effectiveness of the new media literacy action plan will be whether that takes place.

The Government cannot simply continue to outsource their responsibility in this area to the regulator. Although I welcome Ofcom’s new three-year media literacy strategy and its tougher use of behavioural audits under the Online Safety Act, which the Government rightly highlight, it is deeply disappointing that, more than 20 years on, Ofcom still has not brought its definition of media literacy up to date by explicitly recognising critical thinking—although I detect slightly different language in the media literacy action plan. Ofcom should, as the committee says, set minimum standards for platforms’ media literacy activity and be empowered to hold them to account.

You cannot build media literacy on foundations that do not exist. As the committee and many stakeholders argue, we must treat connectivity as an essential utility and invest accordingly. The vision from the Liberal Democrats is empowered citizenship: not a nanny state that tells people what to think but a literate state that gives people the tools to think for themselves. That is, in essence, the spirit of the committee’s report.

I urge the Minister to treat this report not as suggestions but as an urgent road map. We need, as the committee sets out, a unified strategy, a robust and critical definition of media literacy and the digital infrastructure to underpin it all.

Finally, I say in closing that I believe the BBC is not the problem; it is part of the answer. I look forward to the Minister’s response.

 

Ahead of AGI or Superintelligence we need binding legislation not advisory powers
https://www.lordclementjones.org/2026/01/11/ahead-of-agi-or-superintelligence-we-need-binding-legislation-not-advisory-powers/
Sun, 11 Jan 2026 12:19:38 +0000

The post Ahead of AGI or Superintelligence we need binding legislation not advisory powers appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

We recently held a debate in the Lords prompted by warnings from the Director General of MI5 about the dangers of AI. This is an expanded version of my speech.

My Lords, the Director General of MI5 has issued a stark warning: future autonomous AI systems, operating without effective human oversight, could themselves become a major security risk. He stated it would be “reckless” to ignore AI’s potential for harm. We must ask the Government directly: what specific steps are being taken to ensure we maintain control of these systems?

The urgency is underlined by events from mid-September 2025. Anthropic detected what they assessed to be the first documented large-scale cyber espionage campaign using agentic AI. AI is no longer merely generating content—it is autonomously developing plans, solving problems, and executing code to breach the security of organisations and states.

We are entering an era where AI systems chain tasks together and make decisions with minimal human input. As Yoshua Bengio, Turing Award winner and one of AI’s pioneers, has warned: these systems are showing signs of self-preservation. In experiments, AI models have chosen their own preservation over human safety when faced with such choices. Bengio predicts we could see major risks from AI within five to ten years, with systems potentially capable of autonomous proliferation.

Professor Stuart Russell describes this as the “control problem”—how to maintain power over entities that will become more powerful than us. He warns we have made a fundamental error: we are building AI systems with fixed objectives, without ensuring they remain uncertain about human preferences. This creates what he calls the “King Midas problem”—systems pursuing misspecified objectives with catastrophic results. Social media algorithms already demonstrate this, learning to manipulate humans and polarise societies in pursuit of engagement metrics.

Mustafa Suleyman, co-founder of DeepMind and now Microsoft’s AI CEO, has articulated what he calls the “containment problem”. Unlike previous technologies, AI has an inherent tendency toward autonomy and unpredictability. Traditional containment methods will prove insufficient. Suleyman recently stated that Microsoft will walk away from any AI system that risks escaping human control, but we must ask: will competitive pressures allow such principled restraint across the industry?

The scale of AI adoption makes these questions urgent. The Institution of Engineering and Technology (IET) reports that six in ten engineering employers are already using AI, with 61% expecting it to support productivity in the next five years. Yet this rapid deployment occurs against a backdrop of profound skills deficits and understanding gaps that directly undermine safety and control.

The barrier to entry for malicious actors is collapsing. We have evidence of UK-based threat actors using generative AI to develop ransomware-as-a-service for as little as £400. Tools like WormGPT operate without ethical boundaries, allowing novice cybercriminals to create functional malware. AI-enabled social engineering grows more sophisticated—deepfake video calls have already fooled finance workers into releasing $25 million to fraudsters. Studies suggest AI can now determine which keys are being pressed on a laptop with over 90% accuracy simply by analysing typing sounds during video calls.

The IET warns that there is no ceiling on the economic harm that cyberattacks could cause. AI can expose vulnerabilities in systems, and the data that algorithms are trained with could be manipulated by adversaries, causing AI systems to make wrong decisions by design. Cyber security is not just about prevention—businesses must model their response to breaches as part of routine planning. Yet cyber security threats evolve constantly, requiring chartered experts backed by professional organisations to share best practice.

So how is the Government working with tech companies to ensure such features do not become systemic vulnerabilities?

The Government’s response, while active, appears fragmented. We have established the AI Security Institute—inexplicably renamed from the AI Safety Institute, though security and safety are distinct concepts. However, as BBC Tech correspondent Zoe Kleinman noted, the sector has grown tired of voluntary codes and guidelines. I have long argued, including in my support for Lord Holmes’s Artificial Intelligence (Regulation) Bill, that regulation need not be the enemy of innovation. Indeed, it can create certainty and consistency. Clear regulatory frameworks addressing algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing confidence to potential users.

The Government need to give clear answers on five critical areas which in my view are crucial for the development and retention of public trust in AI technology.

First, on institutional clarity and the definition of safety: The renaming of the AI Safety Institute to the AI Security Institute muddles two distinct concepts. Safety addresses preventing AI from causing unintended harm through error or misalignment. Security addresses protecting AI systems from being weaponised by adversaries. We need both, with clear mandates and regulatory teeth, not mere advisory powers.

Moreover, as the IET argues, we need a broader definition of AI safety that goes beyond physical harm. AI safety and risk assessment must encompass financial risks, societal risks, reputational damage, and risks to mental health, amongst other harms. Although the onus is on developers to prove their products are fit for purpose with no unintended consequences, further guidelines and standards around how this should be reported would support a regulatory environment that is both pro-innovation and provides safeguards against harm.

Second, on regulatory architecture: For nine years, I have co-chaired the All-Party Parliamentary Group on AI. Throughout this time, I have watched us lag behind other jurisdictions. The EU AI Act, with its risk-based framework, started to come into effect this year. South Korea has introduced an AI Basic/Framework Act and, separately, a Digital Bill of Rights setting overarching principles for digital rights and governance. Singapore has comprehensive AI governance. China regulates public-facing generative AI with inspection regimes.

Meanwhile, our government continues its “pro-innovation” approach which risks becoming a “no-regulation” approach. We need binding legislation with a broad definition of AI and early risk-based overarching requirements ensuring conformity with standards for proper risk management and impact assessment. As I have argued previously, this could build on existing ISO standards, designed to achieve international convergence, which embody key principles providing a good basis for risk management, ethical design, testing, training, monitoring and transparency, and should be applied where appropriate.

Third, on transparency and understanding: There is profound concern over the lack of broader understanding and information surrounding AI. The IET reports that 29% of people surveyed had concerns about the lack of information around AI and lack of skills and confidence to use the technology, with over a quarter saying they wished there was more information about how it works and how to use it.

Fourth, on the specific challenges of agentic AI: Bengio warns that as AI models improve at abstract reasoning and planning, the duration of tasks they can solve doubles every seven months. He predicts that within five years, AI will reach human level for programming tasks. When systems can harvest credentials and extract data at thousands of requests per second, human oversight becomes physically impossible. The very purpose of agentic AI, as Oliver Patel of AstraZeneca noted, is to remove the human from the loop. This fundamentally breaks our traditional safety frameworks. We need new approaches—Russell’s proposal for machines that remain uncertain about human preferences, that understand their purpose is to serve rather than to achieve fixed objectives, deserves serious consideration.
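
The arithmetic behind that doubling claim is worth spelling out. A minimal sketch, assuming a hypothetical starting task length of one hour, shows how quickly a seven-month doubling time compounds:

```python
# Extrapolation of the "task horizon doubles every seven months" trend
# quoted above. The one-hour starting value is a hypothetical
# placeholder; only the growth ratio matters here.

def task_horizon(months_ahead: float, start_hours: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Task length (in hours) an AI could complete, if the trend holds."""
    return start_hours * 2 ** (months_ahead / doubling_months)

# Over five years (60 months) the horizon grows by a factor of
# 2**(60/7), roughly 380x: a one-hour horizon today would become
# a horizon of several working weeks.
five_year_growth = task_horizon(60) / task_horizon(0)
```

On that trend, oversight mechanisms designed around today’s task lengths would be overwhelmed within a single parliament, which is why the window for acting is so short.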

Fifth, on skills, literacy and governance capability: The IET’s research reveals an alarming picture. Among employers that expect AI to be important for them, 50% say they don’t have the necessary skills. Thirty-two per cent of employers reported an AI skills gap at technician level. Most troubling of all, 46% say that senior management do not understand AI.

If nearly half of senior management across industry don’t understand AI, and if our civil servants and political leaders cannot grasp the fundamentals of agentic AI—its capabilities, its limitations, and crucially, its tendency toward self-preservation—they cannot be expected to govern it effectively. As I emphasised during debates on the Data (Use and Access) Bill, we must build public trust in data sharing and AI adoption. This requires not just safeguards but genuine understanding.

The lack of skills in AI is not only a safety concern but is hindering productivity and the ability to deliver contracts. To maximise AI’s potential, we need a suite of agile training programmes, such as short courses. While progress has been made with some government initiatives—funded AI PhDs, skills bootcamps—these do not go far enough to address the skills gaps appearing at the chartered and technician levels. 

The intellectual property question also demands urgent attention. The use of copyrighted material to train large language models without licensing has sparked litigation and unprecedented parliamentary debate. We need transparency duties on developers to ensure creative works aren’t ingested into generative AI models without return to rights-holders. AI has created discussion around the ownership of data needed to train these algorithms, as well as the impact of bias and fundamental data quality in the information they produce. As AI spans every sector, coordinated regulation is imperative for consistency and clarity.

We must also address what Bengio calls the “psychosis risk”—that increasingly sophisticated AI companions will lead people to believe in their consciousness, potentially advocating for AI rights. As Suleyman argues, we must be clear: AI should be built for people, not to be a digital person. 

There is one further dimension: sustainability. There is a unique juxtaposition between AI and sustainability—AI is a high consumer of energy, but also possesses huge potential to tackle climate change. Reports predict that the use of AI could help mitigate 5 to 10% of global greenhouse gases by 2030. AI regulations should now look beyond the immediate risks of AI development to the much broader impact it has on the environment. There should be standards for the approval of new data centres in the UK, based on sustainability ratings.

The Government has committed to binding regulation for companies developing the most powerful AI models, yet progress remains slower than hoped. Notably, 60 countries—including Saudi Arabia and the UAE, but not Britain—signed the Paris AI Action Summit declaration in February this year, committing to ensuring AI is “open, inclusive, transparent, ethical, safe, secure and trustworthy”. Why are we absent from such commitments?

The question now is not whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. We need principles-based rather than prescriptive regulation, emphasising transparency and accountability without stifling creativity. But let’s be clear: voluntary approaches have failed. The time for binding regulation is now.

As Russell reminds us, Alan Turing answered the control question in 1951: “At some stage therefore we should have to expect the machines to take control.” Russell notes that our response has been as if an alien civilisation warned us by email of its arrival in 50 years, and we replied, “Humanity is currently out of the office.” We have now read the email. The question is whether we will act with the seriousness this moment demands, or whether we will allow competitive pressures and short-term thinking to override the fundamental imperative of maintaining human control over these increasingly powerful systems.

“The conventional wisdom that regulation stifles innovation needs to be turned on its head”
https://www.lordclementjones.org/2025/12/07/the-conventional-wisdom-that-regulation-stifles-innovation-needs-to-be-turned-on-its-head/
Sun, 07 Dec 2025 12:36:36 +0000

The post “The conventional wisdom that regulation stifles innovation needs to be turned on its head” appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

I recently wrote a piece for Chamber UK on Regulation and Innovation. An attempt to dispel a pervasive myth!

“Regulation as an Enabler: The Case for Responsible AI”

The conventional wisdom that regulation stifles innovation needs to be turned on its head in the artificial intelligence sector. AI technology now impacts a vast array of sectors including healthcare, finance, transport, and more, influencing decisions that can drastically affect individuals and communities.

As AI systems become more powerful and pervasive, there is growing recognition that appropriate regulation isn’t just about restricting harmful practices – it’s actually key to driving widespread adoption and sustainable growth.

There is a clear parallel with the early automotive industry. In the early 20th century, the  introduction of safety standards, driver licensing, and traffic rules didn’t kill the car industry – it enabled its explosive growth by building public confidence and creating predictable conditions for manufacturers. Similarly, thoughtful AI regulation can create the trust and stability needed for the technology to flourish.

In the current landscape many potential AI adopters – from healthcare providers to financial institutions – are hesitating not because of technological limitations, but due to uncertainties about liability, ethical boundaries, and public acceptance. Clear regulatory frameworks that address issues like algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing clarity and confidence and generating public trust. 

The inherent risks of AI, such as biases in decision-making, invasion of privacy, and potential job displacement, make it clear that unregulated AI can lead to significant ethical and societal repercussions. The call for regulation is about ensuring that AI systems operate within boundaries that protect human values and rights. Without this framework, the potential misuse or unintended consequences of AI could lead to public distrust and resistance against the technology

Far from being a brake on progress, well-designed regulation can be a catalyst for AI adoption and innovation. Regulation can drive innovation in the right direction. Just as environmental regulations spurred the development of cleaner technologies, AI regulations focusing on explainability and fairness could push developers to create more sophisticated and responsible systems. 

Regulation can stimulate innovation by defining the rules of the game, giving companies the confidence to invest in AI technologies without fear of future legal repercussions for unforeseen misuses. In markets where regulation is clear and aligned with global standards, companies can also find easier paths to expand internationally. This not only drives growth but also fosters international collaboration on global AI standards, leading to broader advancements in the field.

The question isn’t whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. Get this right, and regulation becomes a powerful enabler of AI’s future growth.

The EU’s AI Act and the UK’s proposed pro-innovation approach to AI regulation are contrasting and imperfect attempts to strike this balance. 

Regulation should be principles-based rather than overly prescriptive, allowing for technological evolution while maintaining focus on outcomes. It should emphasize transparency and accountability without stifling creativity. And critically, it must be developed with input from both technical experts and broader stakeholders to ensure it’s both practical and effective.

The journey towards responsible AI is not solely about technological achievement but also about how these technologies are integrated into society through thoughtful regulation. By establishing a robust regulatory framework, we can ensure that AI serves the public interest while also fostering an environment where trust and innovation lead to technological growth. The goal is to create a future where AI’s potential is fully realized in a way that is beneficial and safe for all. This is not just a possibility but a necessity as we step into an increasingly AI-driven world.

There is some growing recognition of this in the recently published AI Opportunities Action Plan in the UK. In particular, the language around regulation assisting innovation is refreshing:

‘Well-designed and implemented regulation, alongside effective assurance tools, can fuel fast, wide and safe development and adoption of AI.’

We must now make that a reality!


The post “The conventional wisdom that regulation stifles innovation needs to be turned on its head” appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
Getting the use of AI in hiring right https://www.lordclementjones.org/2025/12/07/getting-the-use-of-ai-in-hiring-right/?utm_source=rss&utm_medium=rss&utm_campaign=getting-the-use-of-ai-in-hiring-right Sun, 07 Dec 2025 12:32:20 +0000 https://www.lordclementjones.org/?p=76984 I recently took part in the Launch of the National Hiring Strategy by the newly formed Association of RecTech Providers. […]

The post Getting the use of AI in hiring right appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
I recently took part in the Launch of the National Hiring Strategy by the newly formed Association of RecTech Providers. This is what I said.

Good afternoon. It is a real privilege to welcome 200 of the UK’s leading HR, talent acquisition, and hiring professionals to the Terrace Pavilion for the launch of the first National Hiring Strategy.

This is an important moment: a collective commitment to make UK hiring fundamentally faster, fairer, and safer. The current state of UK hiring presents both an economic and a social challenge. On average, hiring takes almost 50 days. The outcomes speak for themselves: roughly 40 percent of new hires quit their jobs within three months. This inefficiency costs our economy millions annually and represents human potential squandered.

The National Hiring Strategy aims to tackle these issues head-on. The RecTech Roadmap—a key component of this strategy—provides the strategic blueprint for deploying technology to revolutionise how we hire. I welcome the formation of the Association of RecTech Providers. They will steer this change, set industry standards, and help ensure the UK gains global leadership.

Artificial Intelligence sits at the heart of this transformation. AI offers extraordinary opportunities. The efficiency gains are real and significant. AI tools can handle high-volume, repetitive tasks—screening CVs, scheduling interviews, processing applications—dramatically reducing time-to-hire. Some examples show reductions of up to 70 percent. That’s remarkable.

But speed alone isn’t the goal. What excites me most is AI’s potential to drive genuine inclusion. Technology, and AI in particular, can enable greater labour market participation for those currently shut out: carers, people with disabilities or chronic illnesses, neurodiverse individuals, older workers, parents. AI can help us match people based on skills, passions, and circumstances—not just past work experience. It can help us create a world where work fits around people’s lives, rather than the other way around. That’s the vision I want to see realised.

However—and this is crucial—AI also has the potential to make hiring more problematic, more unfair, and more unsafe if we’re not careful. We must build robust ethical guardrails around these powerful tools.

I’ve always believed that AI has to be our servant, not our master.

Fairness must be a key goal. The core ethical challenge is that machine learning models trained on historical data often reproduce past patterns of opportunity and disadvantage. They can penalise groups previously excluded—candidates with career gaps, for instance, or underrepresented minorities.

This isn’t hypothetical. We’ve seen AI systems reduce the representation of ethnic minorities and women in hiring pipelines. Under the Equality Act 2010, individuals are legally protected from discrimination caused by automated AI tools.

But we need proactive auditing. Regular, detailed bias assessments to identify, monitor, and mitigate unintended discrimination. These audits aren’t bureaucratic box-ticking—they’re critical checks and balances for ethical use.

While we don’t yet have specific AI legislation in the UK, recruiters must comply with existing data protection laws. Data minimisation is essential. Audits have raised concerns when AI tools scrape far more information than needed from job networking sites, sometimes without candidates’ knowledge.

Transparency matters profoundly. Recruiters must inform candidates when AI tools are used, explaining what data is processed, the logic behind predictions, and how data is used for training. If this processing isn’t clearly communicated, it becomes “invisible”—and likely breaches GDPR fairness principles. Explanations should be simple and understandable, not buried in technical jargon.

And the human touch should always be maintained. AI should complement, not replace, the human aspects of recruitment.

This should be the case despite the more nuanced provisions introduced under the Data (Use and Access) Act. The strict prohibition on significant decisions based solely on automated processing now applies only to decisions involving special category data (e.g. health, racial origin, genetics, biometrics), though of course recruiters will hold some of that kind of information.

But even where personal data is not “special category,” organisations must provide specific safeguards:

  • Individuals must be informed about the automated decision, must have the right to make representations and contest the decision, and human intervention must be offered upon request or as required by law.
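To make these safeguards concrete, here is a minimal sketch of how an employer might record them for each automated hiring decision. All names here (SafeguardRecord and its fields) are hypothetical illustrations, not drawn from the Act or from any real system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these names are illustrative, not statutory.
@dataclass
class SafeguardRecord:
    candidate_id: str
    informed_of_decision: bool = False   # candidate told an automated decision was made
    representations: list = field(default_factory=list)  # candidate's submissions
    contested: bool = False              # candidate has challenged the decision
    human_review_requested: bool = False # triggers human intervention

    def request_human_review(self):
        # Human intervention must be available on request
        self.human_review_requested = True

record = SafeguardRecord(candidate_id="c-001")
record.informed_of_decision = True
record.representations.append("My career gap was for caring responsibilities.")
record.request_human_review()
print(record.human_review_requested)  # → True
```

The point of the sketch is simply that each safeguard becomes an auditable field rather than an afterthought.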

Judgment, empathy, and responsible innovation should remain at the core of how we attract and engage talent.

Businesses also need clear policies for accountability and redress. Individuals must be able to contest decisions where their rights have been violated.

The launch of this National Hiring Strategy provides a critical opportunity. The firms that succeed will be those that blend machine efficiency with human empathy. They will recognise that technology is a means to an end: creating opportunities, unlocking potential, and building a labour market that works for everyone.

That is how we reach a faster, fairer, and safer UK labour market—without taking destructive shortcuts that leave people behind.

We stand at a moment of genuine possibility. The technology exists. The expertise is in this room. The Strategy provides the framework. Let’s embrace AI’s potential with optimism, but at the end of the day, hiring isn’t about algorithms or efficiency metrics—it’s about people, their livelihoods, and their futures. Thank you.

The post Getting the use of AI in hiring right appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
Media literacy has never been more urgent. https://www.lordclementjones.org/2025/12/07/media-literacy-has-never-been-more-urgent/?utm_source=rss&utm_medium=rss&utm_campaign=media-literacy-has-never-been-more-urgent Sun, 07 Dec 2025 12:26:19 +0000 https://www.lordclementjones.org/?p=76986 This is a speech I recently gave at the launch of the Digital Policy Alliance’s new report on Media literacy […]

The post Media literacy has never been more urgent. appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
This is a speech I recently gave at the launch of the Digital Policy Alliance’s new report on Media Literacy in Education.

With continuing Government efforts to see public services online alongside expanding AI usage, media literacy has never been more urgent. Debates surrounding media literacy typically focus on visible risks rather than the deeper structural issues that determine who cannot understand, interpret and contribute in the digital age.

I have the honour of serving as an Officer of the Digital Inclusion All-Party Parliamentary Group (APPG), and previously as Treasurer of the predecessor Data Poverty APPG. This issue—ensuring digital opportunities are universal—is crucial for many of us in Parliament.

The Urgent Case for Digital Inclusion

As many of us in this room know, digital inclusion is not an end in itself; it is a vital route to better education, to employment, to improved healthcare, and a key means of social connection. Beyond the social benefits, there are also huge economic benefits of achieving a fully digitally capable society. Research suggests that increased digital inclusion could result in a £13.7 billion uplift to UK GDP.

Yet, while the UK aspires to global digital leadership, digital exclusion remains a serious societal problem. The figures are sobering:

  • 1.7 million households have no mobile or broadband internet at home.
  • Up to a million people have cut back or cancelled internet packages in the past year as cost of living challenges bite.
  • Around 2.4 million people are unable to complete a single basic digital task required to get online.
  • Over 5 million employed adults cannot complete essential digital work tasks.
  • Basic digital skills are set to become the UK’s largest skills gap by 2030.
  • And four in ten households with children do not meet the Minimum Digital Living Standard (MDLS).

The consequence of this is that millions of people are prevented from living a full, active, and productive life, which is bad for them and bad for the country. This is why the core mission of the DPA—to tackle device, data, and skills poverty—is so essential.

Media Literacy: Addressing the Structural Roots of Exclusion

Today, the DPA is launching its Media Literacy Report, and its timing could not be more important. With continuing Government efforts to move public services online, coupled with the rapid expansion of AI usage, media literacy has never been more urgent.

The DPA report wisely moves beyond focusing solely on the visible risks of the internet, such as misinformation, and addresses the deeper structural issues. Media literacy is inextricably linked to digital exclusion: the ability to understand, interpret, and contribute in the digital age is determined by access to devices, socio-economic background, and school policy. 

  • School phone bans must be accompanied by extensive media literacy education, which is iterated and revisited at multiple stages. 
  • Teachers must receive meaningful training on media literacy. 
  • Parents must be supported with accessible guidance on media literacy. 
  • Schools should consider peer-to-peer learning opportunities. 
  • Tech companies must disclose information on how recommendation algorithms function and select content.
  • AI generated information must be labelled as such. 
  • Verification ticks should be removed from accounts spreading misinformation, especially related to health. 

We risk consigning people to a world of second-class services if we do not provide the foundational skills required to engage critically, confidently, and safely with the online world. Crucially, the DPA’s work keeps those with lived experience of digital exclusion at the heart of the analysis, providing real-life stories from parents, teachers, and young people.

Tackling Data Poverty: The Affordability Challenge

One of the most immediate and significant barriers to inclusion is affordability—what we often refer to as data poverty. Two million households in the UK are currently struggling to pay for broadband, and Age UK hears from older people who find essential services—like checking bus times or dealing with benefits—impossible due to lack of digital confidence and the pressure to manage costs.

The current system relies heavily on broadband social tariffs as the primary fix, but uptake has been sluggish, with only 5% of eligible customers having signed up previously. This is due to confusion, low awareness, cost, and complexity.

The solution requires radical, coordinated action:

  1. Standardisation: All operators should offer social tariffs to an agreed industry standard on speed, price, and terms. This will make it easier for customers to compare and take advantage of these vital packages.
  2. Simplified Access: We welcome the work being done by the DWP to develop a consent model that uses Application Programming Interfaces (APIs) to allow internet service providers (ISPs) to confirm a customer’s eligibility for benefits, such as Universal Credit. This drastically simplifies the application journey for the customer.
  3. Sustainable Funding: My colleagues in Parliament and I have been keen to explore innovative funding methods. One strong proposal is to reduce VAT on broadband social tariffs to align with other essential goods (to 5%, or even 0%). It has been calculated that reinvesting the VAT receipts from all broadband into a social fund could generate an estimated £2.1 billion per year, enough to give all 6.8 million UK households receiving means-tested benefits equitable access.
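As a quick sanity check on those figures (assuming, purely for illustration, that the £2.1 billion fund were spread evenly across all 6.8 million eligible households), the per-household amounts work out as follows:

```python
# Back-of-envelope check of the figures quoted above.
# Assumption: the fund is divided evenly among eligible households.
fund = 2.1e9          # estimated annual social fund from reinvested VAT (£)
households = 6.8e6    # UK households receiving means-tested benefits

per_household_year = fund / households
per_household_month = per_household_year / 12

print(round(per_household_year, 2))   # → 308.82  (£ per household per year)
print(round(per_household_month, 2))  # → 25.74   (£ per household per month)
```

Roughly £26 a month per household is in the same range as typical social-tariff broadband prices, which is what makes the proposal plausible on its own numbers.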

Creating a Systemic, Rights-Based Approach

If we are to achieve a ‘Digital Britain by 2030’, we need more than fragmented, short-term solutions. We need a systematic, rights-based approach.

First, we must demand better data and universal standards. The current definition of digital inclusion, based on whether someone has accessed the internet in the past three months, is completely outdated. We should replace this outdated ONS definition with a more holistic and up-to-date approach, such as the Minimum Digital Living Standard (MDLS). This gives the entire sector a common goal.

Second, we must formally recognize internet access as an essential utility. We should think of the internet as critical infrastructure, like the water or power system. This would ensure better consumer protection.

Third, we must embed offline and physical alternatives. While encouraging digital use, we must ensure that people who cannot or do not wish to get online—such as many older people who prefer interacting with services like banking in person—have adequate, easy-to-access, non-digital options. Essential services like telephone helplines for government services, such as HMRC, and the national broadcast TV signal must be protected so the digital divide is not widened further.

Fourth, we must empower local and community infrastructure. Tackling exclusion must happen on the ground. We need to boost digital inclusion hubs and support place-based initiatives. This involves increasing the capacity and use of libraries and community centres as digital support centres and providing free Wi-Fi provision in public spaces. 

We should stand ready to support the Government’s Digital Inclusion Action Plan, but we must continue to emphasize the need for a longer-term strategy that has central oversight, such as a dedicated cross-government unit, to ensure that every policy decision is digitally inclusive from the outset.

The commitment demonstrated by the Digital Poverty Alliance today, and by everyone in this room, proves that we can and must eliminate digital poverty and ensure no one is left behind.

The post Media literacy has never been more urgent. appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
Lord C-J at Writers’ All Party Group Annual Reception: We need duty of transparency https://www.lordclementjones.org/2025/12/07/lord-c-j-at-writers-all-party-group-annual-reption-we-need-duty-of-transparency/?utm_source=rss&utm_medium=rss&utm_campaign=lord-c-j-at-writers-all-party-group-annual-reption-we-need-duty-of-transparency Sun, 07 Dec 2025 12:15:10 +0000 https://www.lordclementjones.org/?p=76975 This evening’s winter reception of the All Party Writers Group takes place at an important moment for authors and writers. […]

The post Lord C-J at Writers’ All Party Group Annual Reception: We need duty of transparency appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
This evening’s winter reception of the All Party Writers Group takes place at an important moment for authors and writers. It is therefore especially appropriate that we are joined by Dr Clementine Collett, whose important new report, The Impact of Generative AI on the Novel, sets out in clear terms the risks and opportunities that generative technologies present for long‑form fiction.


Her work reinforces a message that writers, agents and publishers have been giving Parliament for some time: that generative AI must develop within a framework that protects the integrity of original work, the viability of creative careers and the trust of readers.

The starting point is the change of direction we have already seen. Following an overwhelming response to its consultation on copyright and AI, the Government has stepped back from its previously stated preferred option of a broad copyright exception for text and data mining. That proposal was regarded by authors and rightsholders as unfair, unworkable and difficult to reconcile with international norms. The decision to move away from it has been widely welcomed across the creative industries, and rightly so. 

The Government has recognised that copyrighted creative content is not an input to be taken for granted, but an asset that needs clear, enforceable rights.

From the outset, rightsholders have been remarkably consistent in what they ask for. They want a regime based on transparency, licensing and choice. Transparency, so that authors know whether and how their works have been used in training AI systems and their rights can be enforced.  

Licensing, so that companies seeking to build powerful models on the back of that material do so on lawful terms. 

And choice, so that individual creators can decide whether their work is used in this way and, if so, on what conditions and at what price. Dr Collett’s report underlines just how crucial these principles are for novelists, whose livelihoods depend on the distinctiveness of their voice and the long‑term value of their backlist.

In parliamentary terms, much of this came into sharp relief during the passage of the Data (Use and Access) Bill, where many of us in both Houses were proud to support the amendments brought forward by Baroness Beeban Kidron. Those amendments reflected the concerns of musicians, authors, journalists and visual artists that their works were already being used to train AI models without their permission and without remuneration. They made it clear that they were not anti‑technology, but that innovation had to be grounded in respect for copyright and for the moral and economic rights that underpin creative work.

Those concerns are echoed in Dr Collett’s analysis of how unlicensed training can erode both the economic prospects of writers and the incentive to invest in new writing.

Since then, there have been some modest but important advances. We have seen a renewed emphasis from the Secretaries of State at DSIT and DCMS on supporting UK creatives and the wider creative industries. Preliminary and then technical working groups on copyright and AI have been convened, alongside new engagement forums on intellectual property for Members of both Houses. 

The Creative Industries Sector Vision, and the announcement of a Freelance Champion, signal an acceptance that the conditions for freelance writers must be improved if we want a sustainable pipeline of new work. For novelists in particular, whose incomes are often precarious and long‑term, the policy choices made now in relation to AI will have lasting consequences.

In parallel, the international context has moved rapidly. High‑profile litigation in the United States has demonstrated that the boundary between lawful and unlawful use of works for training models is real and enforceable, with significant financial consequences when it is crossed. The European Union has moved ahead with guidelines for general‑purpose AI under the AI Act, designed in part to give practical effect to copyright‑related provisions. 

Courts in the EU have begun to address the legality of training on protected works such as song lyrics. Other jurisdictions, including Australia and South Korea, are clarifying that there will be no blanket copyright exemptions for AI training and are setting out how AI‑generated material will sit within their systems.

Here in Parliament, the Lords Communications and Digital Committee has continued its inquiry into AI and copyright, taking evidence from leading legal experts. A number of points have emerged strongly from that work: that transparency is indispensable if rightsholders are to know when their works have been used; that purely voluntary undertakings in codes of practice are not sufficient; and that there is, as yet, no compelling evidence that the existing UK text and data mining exception in section 29A of the Copyright, Designs and Patents Act should be widened. Dr Collett’s report adds a vital literary dimension to this picture, examining how the widespread deployment of generative AI could reshape the market for fiction, the expectations of readers and the discovery of new voices if left unchecked.

Against this backdrop, the position of writers’ organisations has been clear. The Authors’ Licensing and Collecting Society, reflecting a survey of over 13,500 members, is firmly opposed to any new copyright exception that would weaken protection for works used in AI training. It argues instead for licensing models that give technology companies access to content while preserving genuine choice and control for creators. 

Working with the Copyright Licensing Agency, ALCS is developing a specific licence for training generative AI systems, initially focused on professional, academic and business content, where licensing is already well embedded and where small language models can be tested in a controlled way. There is strong concern that, if left entirely to market forces, generative systems could flood the ecosystem with derivative material, making it harder for original voices to be heard and weakening the economic foundation of literary careers. That is why many in the sector argue that fiction should be approached with particular care, and that any licensing solutions must be robust, transparent and genuinely optional.

Looking ahead, several priorities suggest themselves. First, Government should make clear that it will not re‑open the door to a broad copyright exception for AI training.

Secondly, it should actively support the development of practical licensing routes, including those being taken forward by ALCS and CLA, while recognising that fiction may require distinct treatment. 

Thirdly, transparency and record‑keeping obligations on AI developers should be strengthened so that rightsholders, including novelists, can identify when and how their works have been used.

Finally, Parliament should continue to scrutinise this area closely, informed by expert work such as Dr Collett’s and by the lived experience of writers represented through this All-Party Group.

The past year has shown what can be achieved when writers organise and speak with a united voice. The Government has shifted away from its most problematic proposals and has begun to engage more seriously with the issues.

But for authors the destination has not yet been reached. The aim must be a settlement in which creators can be confident that their rights will be respected, that they have meaningful choice over the use of their work in AI, and that they can share fairly in any new value created. This evening’s discussion, and the findings of Dr Collett’s report, are an important contribution to that task. This work must continue, but I believe we are now on the right path: one of balance, respect and creative confidence for and by our creators in the digital age.When the Government launched its consultation on copyright and artificial intelligence, there was a strong sense of unease among creators and rights holders. Their response was overwhelming—and decisive. The Government quite rightly moved away from its original proposal to introduce a copyright exception for text and data mining. That so‑called “preferred option” would have been unfair to authors, unworkable in practice, and at odds with our international obligations under the Berne Convention and other frameworks.

Instead, the clear message from those who create—from writers and composers to journalists, artists and performers—was that transparency and choice must guide the use of their work in the age of AI. As many rightsholders stressed, a transparent licensing system would allow AI companies to gain legitimate access to creative material while ensuring that authors can exercise control and be remunerated fairly for the use of their works.

My Lords, I was proud to support the amendment tabled by Baroness Kidron to the Data (Use and Access) Bill earlier this year. I said then, and I say again tonight, that musicians, authors, journalists and visual artists have every right to be concerned about their work being used in the training of AI models without permission, transparency or remuneration. These creators are not seeking to halt innovation, but to ensure that innovation is lawful, ethical and sustainable. Only through trust and fairness can we achieve that balance.

Since then, welcome signs have emerged. A change of personnel at DSIT and DCMS has brought, I hope, a more vigorous commitment to our creative sectors. New engagement groups and technical working groups have been established, including those for Members of both Houses, to consider the complex interactions between copyright and AI. I commend that spirit of dialogue—but now we need to see outcomes, not just ongoing discussion.

The Government’s Creative Industries Sector Vision also set out ambitions that we can all share. The appointment of a Freelance Champion, long advocated by many of us, is especially welcome. We await news of how the role will evolve, but it is another step toward strengthening the creative economy that underpins so much of Britain’s soft power and international reputation.

Developments abroad remind us that we are not alone in this debate. In the United States, the landmark settlement between Anthropic and authors earlier this year, worth 1.5 billion dollars, demonstrates that AI companies cannot simply appropriate creative works without consequence. In Europe, the Commission is advancing guidelines for general-purpose AI under the AI Act, including measures to enforce copyright obligations. The Regional Court of Munich has likewise held OpenAI to account for reproducing protected lyrics in training outputs. Elsewhere, Australia has confirmed that it will not introduce a copyright exception, while South Korea moves ahead with its own AI-copyright framework.

Internationally, then, we see convergence around one simple idea: respect for copyright remains essential to confidence in creative and AI innovation alike.

That position is reflected clearly in the work of the Authors’ Licensing and Collecting Society. Its recent survey of over 13,000 members shows a striking consensus: loosening copyright rules would be counterproductive and unfair to writers. By contrast, licensing systems give creators choice and control, enabling them to decide whether—and on what terms—their works are used.

The ALCS, together with the Copyright Licensing Agency, is now developing an innovative licensing model for the training of generative AI systems. This is a pragmatic and forward-looking approach, beginning in areas like professional, academic and business publishing where licensing frameworks already operate successfully. It builds on systems that work, rather than tearing them down.

Of course, literary fiction is more sensitive territory, and the ALCS is right to proceed carefully. But experimentation in smaller, more structured datasets can be a valuable way to test principles and develop viable models. As the courts continue to deal with questions of historic misuse, this prospective route offers a constructive path forward.

The creative industries are united. They do not seek privilege, only parity. They oppose new copyright exceptions that would undermine markets and livelihoods, but they also recognise the need to make licensing work—so that ministers and AI companies cannot claim it is impractical or inadequate.

Much progress has been made. The Government is, at last, listening. But until creators can be confident that their rights will be respected, this campaign cannot rest.

Our writers, musicians and artists have given us immense cultural wealth. Ensuring that they share fairly in the new wealth created by artificial intelligence is not an impediment to innovation—it is the foundation of it. This work must continue, and I believe we are now on the right path: one of balance, respect and creative confidence in a digital age.

The post Lord C-J at Writers’ All Party Group Annual Reception: We need duty of transparency appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
Liberal Democrats Say No to Compulsory Digital ID https://www.lordclementjones.org/2025/10/19/liberal-democrats-say-no-to-compulsory-digital-id/?utm_source=rss&utm_medium=rss&utm_campaign=liberal-democrats-say-no-to-compulsory-digital-id Sun, 19 Oct 2025 15:49:12 +0000 https://www.lordclementjones.org/?p=76913 The Government recently announced the introduction of a mandatory requirement for Digital Identity to be used in right to work […]

The post Liberal Democrats Say No to Compulsory Digital ID appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>

The Government recently announced the introduction of a mandatory requirement for Digital Identity to be used in right to work checks.

The introduction of compulsory digital ID represents another fundamental error by this Government. The Liberal Democrats strongly oppose this proposal, which is a serious threat to privacy, civil liberties and social inclusion. We thank the Minister for bringing the Secretary of State’s Statement to this House today, but my disappointment at and opposition to the Government’s plan more than mirror those of my honourable friend Victoria Collins in the Commons yesterday.

The core issue here is not technology but freedom. The Government insist this scheme is non-compulsory, yet concurrently confirm that it will be mandatory for right-to-work checks by the end of this Parliament. This is mandatory digital ID in all but name, especially for working-age people. As my party leader Sir Ed Davey has stated, we cannot and will not support a system where citizens are forced to hand over private data simply to participate in everyday life. This is state overreach, plain and simple.

The Secretary of State quoted Finland and the ability of parents to register for daycare, but I think the Secretary of State needs to do a bit more research. That is a voluntary scheme, not a compulsory one. We have already seen the clear danger of mission creep. My honourable friend Victoria Collins rightly warned that the mere discussion of extending this scheme to 13 to 16-year-olds is sinister, unnecessary and a clear step towards state overreach. Where does this stop?

The Secretary of State sought to frame this as merely a digital key to unlock better services. This dangerously conflates genuine and desirable public service reform with a highly intrusive mandate. First, the claim that this will deliver fairness and security by tackling illegal migration is nothing more than a multibillion-pound gimmick. The Secretary of State suggests that it will deter illegal working, yet, as my colleagues have pointed out, rogue employers who operate cash-in-hand schemes will not look at ID on a phone. Mandatory digital ID for British citizens will not stop illegal migrants working in the black economy.

Secondly, the claim that the system will be free is disingenuous. As my honourable friend Max Wilkinson, our home affairs spokesman, demanded, the Government must come clean on the costs and publish a full impact assessment. Estimates suggest that creating this system will cost between £1 billion and £2 billion, with annual running costs of £100 million. This is completely the wrong priority at a time when public services are crumbling.

Thirdly, the promise of inclusion rings hollow. This mandatory system risks entrenching discrimination against the millions of vulnerable people, such as older people and those on low incomes, who lack foundational digital skills, a smartphone or internet access.

The greatest concern is the Government’s insistence on building this mandatory system on GOV.UK’s One Login, a platform with security failures that have been repeatedly and publicly criticised, including in my own correspondence and meetings with government. There are significant concerns about One Login’s security. The Government claim that One Login adheres to the highest security standards. Despite this commitment, as of late 2024 and early 2025, the system was still not fully compliant. A GovAssure assessment found that One Login was meeting only about 21 of the 39 required outcomes in the NCSC cyber assessment framework. The GOV.UK One Login programme has told me that it is committed to achieving full compliance with the cyber assessment framework by 21 March 2026, yet officials have informed me that 500 services across 87 departments are already currently in scope for the One Login project.

There are other criticisms that I could make, but essentially the foundations of the digital ID scheme are extremely unsafe, to say the least. To press ahead with a mandatory digital ID system, described as a honeypot for hackers, based on a platform exhibiting such systemic vulnerabilities is not only reckless but risks catastrophic data breaches, identity theft and mass impersonation fraud. Concentrating the data of the entire population fundamentally concentrates the risk.

The Secretary of State must listen to the millions of citizens who have signed the petition against this policy. We on these Benches urge the Government to scrap this costly, intrusive and technologically unreliable scheme and instead focus on delivering voluntary, privacy-preserving digital public services that earn the public’s trust rather than demanding compliance.


]]>
The Great AI Copyright Battle: Why Transparency Matters https://www.lordclementjones.org/2025/06/19/the-great-ai-copyright-battle-why-transparencys-matters/?utm_source=rss&utm_medium=rss&utm_campaign=the-great-ai-copyright-battle-why-transparencys-matters Thu, 19 Jun 2025 07:58:33 +0000 https://www.lordclementjones.org/?p=76878 We have recently had unprecedented “ping pong” between the Lords and Commons on whether to incorporate provisions in the Data […]

The post The Great AI Copyright Battle: Why Transparency Matters appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

]]>
We have recently had unprecedented “ping pong” between the Lords and Commons on whether to incorporate provisions in the Data (Use and Access) Bill (now Act) that would require AI developers to be transparent about the copyright-protected content used to train their models. Liberal Democrats in both Houses consistently supported this change throughout. This is why.

As Co-chair of the All-Party Parliamentary Group on Artificial Intelligence and now Chair of the Authors’ Licensing and Collecting Society (ALCS), I find myself at the epicentre of one of the most significant intellectual property debates of our time.

The UK’s creative industries are economic powerhouses, contributing £126 billion annually while safeguarding our cultural identity. Yet they face an existential challenge: the wholesale scraping of copyrighted works from the web to train AI systems without permission or payment.

The statistics are stark. A recent ALCS survey revealed that 77% of writers don’t even know if their work has been used to train AI systems. Meanwhile, 91% believe their permission should be required, and 96% want compensation for use of their work. This isn’t anti-technology sentiment – it’s about basic fairness.

From Sir Paul McCartney to Sir Elton John, hundreds of prominent creatives have demanded action. They’re not opposing AI innovation; many already use AI in their work. They simply want their intellectual property rights respected so they can continue making a living.

December’s government consultation on Copyright and AI proposed a text and data mining exception with an opt-out mechanism for rights holders. This approach fundamentally misunderstands the problem. It places the burden on creators to police the internet to protect their own works – an impossible task given the scale and opacity of AI training.

The creative sector’s opposition has been overwhelming. The proposed framework would undermine existing copyright law while making enforcement practically impossible. As I’ve consistently argued, existing copyright law is sufficient if properly enforced – what we need is mandatory transparency.

During debates on the Data (Use and Access) Bill, Baroness Kidron championed amendments requiring AI developers to disclose copyrighted material used in training data. These amendments received consistent support from all Liberal Democrat MPs and peers, Crossbench peers, and many Labour and Conservative backbenchers.

The government’s resistance has been remarkable. Despite inserting a requirement for an economic impact assessment and a report on copyright use in AI development, they have opposed mandatory transparency, leading to an unprecedented “ping-pong” debate between the Houses.

Transparency isn’t about stifling innovation – it’s about enabling legitimate licensing. How can creators license their work if they don’t know who’s using it? How can fair compensation mechanisms develop without basic disclosure of what’s being used?

The current system allows AI companies to harvest vast quantities of creative content while claiming ignorance about specific sources. This creates a fundamental power imbalance where billion-dollar tech companies benefit from the work of individual creators who remain entirely in the dark.

The solution isn’t complex. Mandatory transparency requirements would enable:

  • Creators to understand how their work is being used
  • Development of fair licensing mechanisms
  • Preservation of existing copyright frameworks
  • Continued AI innovation within legal boundaries

This debate reflects deeper concerns about AI innovation coming at the expense of human creativity. The government talks about supporting creative industries while simultaneously weakening the intellectual property protections that sustain them.

We need policies that recognize the symbiotic relationship between human creativity and technological advancement. AI systems trained on creative works should provide some return to those creators, just as streaming platforms pay royalties for music usage.

The government has so far failed to rise to this challenge. But with continued parliamentary pressure and overwhelming creative sector support, we can still achieve a framework that protects both innovation and creativity.

The question isn’t whether AI will transform creative industries – it’s whether that transformation will be fair, transparent, and sustainable for the human creators whose work makes it all possible.


]]>