Lord C-J and the Lib Dems: Risk-Based Age Ratings, Not Blanket Bans: A Smarter Way to Protect Children Online (25 January 2026)

The House of Lords recently held an important debate on the access of under-16s to social media, in which Lord Nash proposed a default minimum age of 16.

As we have heard, the Government have announced a three-month consultation on children’s social media use. That is a welcome demonstration that the Government recognise the importance of this issue and are willing to consider further action beyond the Online Safety Act. However, our amendments make it clear that we should not wait until summer, or even beyond, to act, as we have a workable, legally operable solution before us today. Far from weakening the proposal from the noble Lord, Lord Nash, our amendments are designed to make raising the age to 16 deliverable in practice, not just attractive in a headline.

I share the noble Lord’s diagnosis: we are facing a children’s mental health catastrophe, with young people exposed to misogyny, violence and addictive algorithms. I welcome the noble Lord’s bringing this critical issue before the House and strongly support his proposal for a default minimum age of 16. After 20 years of profiteering from our children’s attention, we need a reset. The voices of young people themselves are impossible to ignore. At the same time, tens of thousands of parents have reached out to us all, just in the past week, calling on us to raise the age—we cannot let them down.

The Government have announced that Ministers will visit Australia to learn from its approach. I urge them to learn the right lessons. Australia has taken the stance of banning social media for under-16s, with a current list of 10 platforms. However, its approach demonstrates three critical flaws that Amendment 94A, as drafted, would replicate and that we must avoid.

First, there is the definition problem. The Australian legislation has had to draw explicit lines that keep services such as WhatsApp, Google Classroom and many gaming platforms out of scope, to make the ban effective. The noble Lord, Lord Nash, has rightly recognised these difficulties by giving the Secretary of State the power to exclude platforms, but that simply moves the arbitrariness from a list in legislation to ministerial discretion. What criteria would the Secretary of State use? Our approach instead puts those decisions on a transparent, risk-based footing with Ofcom and the Children’s Commissioner, rather than in one pair of hands.

Secondly, there is the cliff-edge problem. The unamended approach of Amendment 94A risks protecting children in a sterile digital environment until their 16th birthday, and then suddenly flooding them with harmful content without having developed the digital literacy to cope.

As the joint statement from 42 children’s charities warns, children aged 16 would face a dangerous cliff edge when they start to use high-risk platforms. Our amendment addresses that.

Thirdly, this proposal risks taking a Dangerous Dogs Bill approach to regulation. Just as breed-specific legislation failed because it focused on the type of dog rather than dangerous behaviour, the Australian ban focuses on categories rather than risk. Because it is tied to the specific purpose of social interaction, the Australian ban currently excludes high-risk environments such as Roblox, Discord and many AI chatbots, even though children spend a large amount of time on those platforms. An arbitrary list based on what platforms do will not deal with the core issue of harm. The Molly Rose Foundation has rightly warned that this simply risks migrating bad actors, groomers and violent groups from banned platforms to permitted ones, and we will end up playing whack-a-mole with children’s safety. Our amendment is designed precisely to address that.

Our concerns are shared by the very organisations at the forefront of child safety. This weekend, 42 charities and experts, including the Molly Rose Foundation, the NSPCC, the Internet Watch Foundation, Childline, the Breck Foundation and the Centre for Protecting Women Online, issued a joint statement warning that

“‘social media bans’ are the wrong solution”.

They warn that blanket bans risk creating a false sense of safety and call instead for risk-based minimum ages and design duties that reflect the different levels of risk on different platforms. When the family of Molly Russell, whose tragic death galvanised this entire debate, warns against blanket bans and calls for targeted regulation, we must listen. Those are the organisations that pick up the pieces every day when things go wrong online. They are clear that a simple ban may feel satisfying, but it is the wrong tool and risks a dangerous false sense of safety.

Our amendments build on the foundation provided by the noble Lord, Lord Nash, while addressing these critical flaws. They would provide ready-made answers to many of the questions the Government’s promised consultation will raise about minimum ages, age verification, addictive design features and how to ensure that platforms take responsibility for child safety. We would retain the default minimum age of 16. Crucially, that would remain the law for every platform unless and until it proves against rigorous criteria that it is safe enough to merit a lower age rating. However, and this is the crucial improvement, platforms could be granted exemptions if—and only if—they can demonstrate to Ofcom and the Children’s Commissioner that they do not present a risk of harm.

Our amendments would create film-style age ratings for platforms. Safe educational platforms could be granted exemptions with appropriate minimum ages, and the criteria are rigorous. Platforms would have to demonstrate that they meet Ofcom’s guidance on risk-based minimum ages, protect children’s rights under the UN Convention on the Rights of the Child, have considered their impact on children’s mental health, have investigated whether their design encourages addictive use and have reviewed their algorithms for content recommendation and targeted advertising. So this is not a get-out clause for tech companies; it is tied directly to whether the actual design and algorithms on their platforms are safe for children. Crucially, exemptions are subject to periodic review and, if standards slip, the exemption can be revoked.

This approach has three key advantages. First, it prevents the migration of harms. If Discord or a gaming lobby presents a high risk, it would not qualify for exemption. If a platform proves it is safe, it becomes accessible. We would regulate risk to the child, not the type of technology.

Secondly, it incentivises safety by design. The Australian model tells platforms to build a wall to block children, rather than to design their services safely in the first place. This concern is shared by the Online Safety Act Network, representing 23 organisations whose focuses span child protection, suicide prevention and violence against women and girls. It warns that current implementation focuses on

“ex-post measures to reduce the … harm that has already occurred rather than upstream, content-neutral, ‘by-design’ interventions to seek to prevent it occurring in the first place”.

It explicitly calls for requiring platforms to address

“harms to children caused by addictive or compulsive design”—

precisely what our amendment mandates.

Thirdly, it is future-proof. We must prepare for a future that has already arrived—AI, chatbots and tomorrow’s technologies. Our risk-based approach allows Ofcom and the Children’s Commissioner to regulate emerging harms effectively, rather than playing catch-up with exemptions.

We should not adopt a blunt instrument that bans Wikipedia or education and helpline services by accident, drives children into high-risk gaming sites by omission or creates a dangerous cliff edge at 16 by design. We should not fall into the trap of regulating categories rather than harms, and we should not put the power to choose in one person’s hands, namely the Secretary of State.

Instead, let us build on the foundation provided by the noble Lord, Lord Nash, by empowering Ofcom and the Children’s Commissioner to implement a sophisticated world-leading system, one that protects children based on actual risk while allowing them to learn, communicate and develop digital resilience. I urge the House to support our amendments to Amendment 94A.

Ahead of AGI or Superintelligence we need binding legislation not advisory powers (11 January 2026)

We recently held a debate in the Lords prompted by warnings from the Director General of MI5 of the dangers of AI. This is an expanded version of my speech.

My Lords, the Director General of MI5 has issued a stark warning: future autonomous AI systems, operating without effective human oversight, could themselves become a major security risk. He stated it would be “reckless” to ignore AI’s potential for harm. We must ask the Government directly: what specific steps are being taken to ensure we maintain control of these systems?

The urgency is underlined by events from mid-September 2025. Anthropic detected what they assessed to be the first documented large-scale cyber espionage campaign using agentic AI. AI is no longer merely generating content—it is autonomously developing plans, solving problems, and executing code to breach the security of organisations and states.

We are entering an era where AI systems chain tasks together and make decisions with minimal human input. As Yoshua Bengio, Turing Award winner and one of AI’s pioneers, has warned: these systems are showing signs of self-preservation. In experiments, AI models have chosen their own preservation over human safety when faced with such choices. Bengio predicts we could see major risks from AI within five to ten years, with systems potentially capable of autonomous proliferation.

Professor Stuart Russell describes this as the “control problem”—how to maintain power over entities that will become more powerful than us. He warns we have made a fundamental error: we are building AI systems with fixed objectives, without ensuring they remain uncertain about human preferences. This creates what he calls the “King Midas problem”—systems pursuing misspecified objectives with catastrophic results. Social media algorithms already demonstrate this, learning to manipulate humans and polarise societies in pursuit of engagement metrics.

Mustafa Suleyman, co-founder of DeepMind and now Microsoft’s AI CEO, has articulated what he calls the “containment problem”. Unlike previous technologies, AI has an inherent tendency toward autonomy and unpredictability. Traditional containment methods will prove insufficient. Suleyman recently stated that Microsoft will walk away from any AI system that risks escaping human control, but we must ask: will competitive pressures allow such principled restraint across the industry?

The scale of AI adoption makes these questions urgent. The Institution of Engineering and Technology (IET) reports that six in ten engineering employers are already using AI, with 61% expecting it to support productivity in the next five years. Yet this rapid deployment occurs against a backdrop of profound skills deficits and understanding gaps that directly undermine safety and control.

The barrier to entry for malicious actors is collapsing. We have evidence of UK-based threat actors using generative AI to develop ransomware-as-a-service for as little as £400. Tools like WormGPT operate without ethical boundaries, allowing novice cybercriminals to create functional malware. AI-enabled social engineering grows more sophisticated—deepfake video calls have already fooled finance workers into releasing $25 million to fraudsters. Studies suggest AI can now determine which keys are being pressed on a laptop with over 90% accuracy simply by analysing typing sounds during video calls.

The IET warns that there is no ceiling on the economic harm that cyberattacks could cause. AI can expose vulnerabilities in systems, and the data that algorithms are trained with could be manipulated by adversaries, causing AI systems to make wrong decisions by design. Cyber security is not just about prevention—businesses must model their response to breaches as part of routine planning. Yet cyber security threats evolve constantly, requiring chartered experts backed by professional organisations to share best practice.

So how are the Government working with tech companies to ensure that such weaknesses do not become systemic vulnerabilities?

The Government’s response, while active, appears fragmented. We have established the AI Security Institute—inexplicably renamed from the AI Safety Institute, though security and safety are distinct concepts. However, as BBC Tech correspondent Zoe Kleinman noted, the sector has grown tired of voluntary codes and guidelines. I have long argued, including in my support for Lord Holmes’s Artificial Intelligence (Regulation) Bill, that regulation need not be the enemy of innovation. Indeed, it can create certainty and consistency. Clear regulatory frameworks addressing algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing confidence to potential users.

The Government need to give clear answers on five areas which, in my view, are crucial for the development and retention of public trust in AI technology.

First, on institutional clarity and the definition of safety: The renaming of the AI Safety Institute to the AI Security Institute muddles two distinct concepts. Safety addresses preventing AI from causing unintended harm through error or misalignment. Security addresses protecting AI systems from being weaponised by adversaries. We need both, with clear mandates and regulatory teeth, not mere advisory powers.

Moreover, as the IET argues, we need a broader definition of AI safety that goes beyond physical harm. AI safety and risk assessment must encompass financial risks, societal risks, reputational damage, and risks to mental health, amongst other harms. Although the onus is on developers to prove their products are fit for purpose with no unintended consequences, further guidelines and standards around how this should be reported would support a regulatory environment that is both pro-innovation and provides safeguards against harm.

Second, on regulatory architecture: For nine years, I have co-chaired the All-Party Parliamentary Group on AI. Throughout this time, I have watched us lag behind other jurisdictions. The EU AI Act, with its risk-based framework, started to come into effect this year. South Korea has introduced an AI Basic/Framework Act and, separately, a Digital Bill of Rights setting overarching principles for digital rights and governance. Singapore has comprehensive AI governance. China regulates public-facing generative AI with inspection regimes.

Meanwhile, our Government continues its “pro-innovation” approach, which risks becoming a “no-regulation” approach. We need binding legislation with a broad definition of AI and early risk-based overarching requirements ensuring conformity with standards for proper risk management and impact assessment. As I have argued previously, this could build on existing ISO standards designed to achieve international convergence; these embody key principles that provide a good basis for risk management, ethical design, testing, training, monitoring and transparency, and should be applied where appropriate.

Third, on transparency and understanding: There is profound concern over the lack of broader understanding and information surrounding AI. The IET reports that 29% of people surveyed had concerns about the lack of information around AI and lack of skills and confidence to use the technology, with over a quarter saying they wished there was more information about how it works and how to use it.

Fourth, on the specific challenges of agentic AI: Bengio warns that as AI models improve at abstract reasoning and planning, the duration of tasks they can solve doubles every seven months. He predicts that within five years, AI will reach human level for programming tasks. When systems can harvest credentials and extract data at thousands of requests per second, human oversight becomes physically impossible. The very purpose of agentic AI, as Oliver Patel of AstraZeneca noted, is to remove the human from the loop. This fundamentally breaks our traditional safety frameworks. We need new approaches—Russell’s proposal for machines that remain uncertain about human preferences, that understand their purpose is to serve rather than to achieve fixed objectives, deserves serious consideration.

Fifth, on skills, literacy and governance capability: The IET’s research reveals an alarming picture. Among employers that expect AI to be important for them, 50% say they don’t have the necessary skills. Thirty-two per cent of employers reported an AI skills gap at technician level. Most troubling of all, 46% say that senior management do not understand AI.

If nearly half of senior management across industry don’t understand AI, and if our civil servants and political leaders cannot grasp the fundamentals of agentic AI—its capabilities, its limitations, and crucially, its tendency toward self-preservation—they cannot be expected to govern it effectively. As I emphasised during debates on the Data (Use and Access) Bill, we must build public trust in data sharing and AI adoption. This requires not just safeguards but genuine understanding.

The lack of skills in AI is not only a safety concern but is hindering productivity and the ability to deliver contracts. To maximise AI’s potential, we need a suite of agile training programmes, such as short courses. While progress has been made with some government initiatives—funded AI PhDs, skills bootcamps—these do not go far enough to address the skills gaps appearing at the chartered and technician levels. 

The intellectual property question also demands urgent attention. The use of copyrighted material to train large language models without licensing has sparked litigation and unprecedented parliamentary debate. We need transparency duties on developers to ensure creative works aren’t ingested into generative AI models without return to rights-holders. AI has created discussion around the ownership of data needed to train these algorithms, as well as the impact of bias and fundamental data quality in the information they produce. As AI spans every sector, coordinated regulation is imperative for consistency and clarity.

We must also address what Bengio calls the “psychosis risk”—that increasingly sophisticated AI companions will lead people to believe in their consciousness, potentially advocating for AI rights. As Suleyman argues, we must be clear: AI should be built for people, not to be a digital person. 

There is one further dimension: sustainability. There is a unique juxtaposition between AI and sustainability—AI is a high consumer of energy, but it also possesses huge potential to tackle climate change. Reports predict that the use of AI could help mitigate 5 to 10% of global greenhouse gas emissions by 2030. AI regulations should now look beyond the immediate risks of AI development to the much broader impact it has on the environment. There should be standards for the approval of new data centres in the UK, based on sustainability ratings.

The Government has committed to binding regulation for companies developing the most powerful AI models, yet progress remains slower than hoped. Notably, 60 countries—including Saudi Arabia and the UAE, but not Britain—signed the Paris AI Action Summit declaration in February this year, committing to ensuring AI is “open, inclusive, transparent, ethical, safe, secure and trustworthy”. Why are we absent from such commitments?

The question now is not whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. We need principles-based rather than prescriptive regulation, emphasising transparency and accountability without stifling creativity. But let’s be clear: voluntary approaches have failed. The time for binding regulation is now.

As Russell reminds us, Alan Turing answered the control question in 1951: “At some stage therefore we should have to expect the machines to take control.” Russell notes that our response has been as if an alien civilisation warned us by email of its arrival in 50 years, and we replied, “Humanity is currently out of the office.” We have now read the email. The question is whether we will act with the seriousness this moment demands, or whether we will allow competitive pressures and short-term thinking to override the fundamental imperative of maintaining human control over these increasingly powerful systems.

“The conventional wisdom that regulation stifles innovation needs to be turned on its head” (7 December 2025)

I recently wrote a piece for Chamber UK on Regulation and Innovation. An attempt to dispel a pervasive myth!

“Regulation as an Enabler: The Case for Responsible AI”

The conventional wisdom that regulation stifles innovation needs to be turned on its head in the artificial intelligence sector. AI technology now impacts a vast array of sectors, including healthcare, finance and transport, influencing decisions that can drastically affect individuals and communities.

 As AI systems become more powerful and pervasive, there is  growing recognition that appropriate regulation isn’t just about restricting harmful practices – it’s actually key to driving widespread adoption and sustainable growth.

There is a clear parallel with the early automotive industry. In the early 20th century, the  introduction of safety standards, driver licensing, and traffic rules didn’t kill the car industry – it enabled its explosive growth by building public confidence and creating predictable conditions for manufacturers. Similarly, thoughtful AI regulation can create the trust and stability needed for the technology to flourish.

In the current landscape many potential AI adopters – from healthcare providers to financial institutions – are hesitating not because of technological limitations, but due to uncertainties about liability, ethical boundaries, and public acceptance. Clear regulatory frameworks that address issues like algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing clarity and confidence and generating public trust. 

The inherent risks of AI, such as biases in decision-making, invasion of privacy, and potential job displacement, make it clear that unregulated AI can lead to significant ethical and societal repercussions. The call for regulation is about ensuring that AI systems operate within boundaries that protect human values and rights. Without this framework, the potential misuse or unintended consequences of AI could lead to public distrust of, and resistance to, the technology.

Far from being a brake on progress, well-designed regulation can be a catalyst for AI adoption and innovation. Regulation can drive innovation in the right direction. Just as environmental regulations spurred the development of cleaner technologies, AI regulations focusing on explainability and fairness could push developers to create more sophisticated and responsible systems. 

Regulation can stimulate innovation by defining the rules of the game, giving companies the confidence to invest in AI technologies without fear of future legal repercussions for unforeseen misuses. In markets where regulation is clear and aligned with global standards, companies can also find easier paths to expand internationally. This not only drives growth but also fosters international collaboration on global AI standards, leading to broader advancements in the field.

The question isn’t whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. Get this right, and regulation becomes a powerful enabler of AI’s future growth.

The EU’s AI Act and the UK’s proposed pro-innovation approach to AI regulation are contrasting and imperfect attempts to strike this balance. 

Regulation should be principles-based rather than overly prescriptive, allowing for technological evolution while maintaining focus on outcomes. It should emphasize transparency and accountability without stifling creativity. And critically, it must be developed with input from both technical experts and broader stakeholders to ensure it’s both practical and effective.

The journey towards responsible AI is not solely about technological achievement but also about how these technologies are integrated into society through thoughtful regulation. By establishing a robust regulatory framework, we can ensure that AI serves the public interest while also fostering an environment where trust and innovation lead to technological growth. The goal is to create a future where AI’s potential is fully realized in a way that is beneficial and safe for all. This is not just a possibility but a necessity as we step into an increasingly AI-driven world.

There is some growing recognition of this in the recently published AI Opportunities Plan in the UK. In particular the language around regulation assisting innovation is refreshing:

‘Well-designed and implemented regulation, alongside effective assurance tools, can fuel fast, wide and safe development and adoption of AI.’

We  must now make that a reality!

 

Getting the use of AI in hiring right (7 December 2025)

I recently took part in the launch of the National Hiring Strategy by the newly formed Association of RecTech Providers. This is what I said.

Good afternoon. It is a real privilege to welcome 200 of the UK’s leading HR, talent acquisition, and hiring professionals to the Terrace Pavilion for the launch of the first National Hiring Strategy.

This is an important moment: a collective commitment to make UK hiring fundamentally faster, fairer, and safer. The current state of UK hiring presents both an economic and a social challenge. On average, hiring takes almost 50 days. The outcomes speak for themselves: roughly 40 percent of new hires quit their jobs within three months. This inefficiency costs our economy millions annually and represents human potential squandered.

The National Hiring Strategy aims to tackle these issues head-on. The RecTech Roadmap—a key component of this strategy—provides the strategic blueprint for deploying technology to revolutionise how we hire. I welcome the formation of the Association of RecTech Providers. They will steer this change, set industry standards, and help ensure the UK gains global leadership.

Artificial Intelligence sits at the heart of this transformation. AI offers extraordinary opportunities. The efficiency gains are real and significant. AI tools can handle high-volume, repetitive tasks—screening CVs, scheduling interviews, processing applications—dramatically reducing time-to-hire. Some examples show reductions of up to 70 percent. That’s remarkable.

But speed alone isn’t the goal. What excites me most is AI’s potential to drive genuine inclusion. Technology, and AI in particular, can enable greater labour market participation for those currently shut out: carers, people with disabilities or chronic illnesses, neurodiverse individuals, older workers, parents. AI can help us match people based on skills, passions, and circumstances—not just past work experience. It can help us create a world where work fits around people’s lives, rather than the other way around. That’s the vision I want to see realised.

However—and this is crucial—AI also has the potential to make hiring more problematic, more unfair, and more unsafe if we’re not careful. We must build robust ethical guardrails around these powerful tools.

I’ve always believed that AI has to be our servant, not our master.

Fairness must be a key goal. The core ethical challenge is that machine learning models trained on historical data often reproduce past patterns of opportunity and disadvantage. They can penalise groups previously excluded—candidates with career gaps, for instance, or underrepresented minorities.

This isn’t hypothetical. We’ve seen AI systems reduce the representation of ethnic minorities and women in hiring pipelines. Under the Equality Act 2010, individuals are legally protected from discrimination caused by automated AI tools.

But we need proactive auditing. Regular, detailed bias assessments to identify, monitor, and mitigate unintended discrimination. These audits aren’t bureaucratic box-ticking—they’re critical checks and balances for ethical use.

While we don’t yet have specific AI legislation in the UK, recruiters must comply with existing data protection laws. Data minimisation is essential. Audits have raised concerns when AI tools scrape far more information than needed from job networking sites, sometimes without candidates’ knowledge.

Transparency matters profoundly. Recruiters must inform candidates when AI tools are used, explaining what data is processed, the logic behind predictions, and how data is used for training. If this processing isn’t clearly communicated, it becomes “invisible”—and likely breaches GDPR fairness principles. Explanations should be simple and understandable, not buried in technical jargon.

And the human touch should always be maintained. AI should complement, not replace, the human aspects of recruitment.

This should be the case despite the more nuanced provisions introduced under the Data (Use and Access) Act. The strict prohibition on significant decisions based solely on automated processing now applies only to decisions involving special category data (for example health, racial origin, genetics and biometrics), but of course recruiters will hold some of that kind of information.

But even where personal data is not “special category”, organisations must provide specific safeguards:

  • Individuals must be informed about the automated decision, have the right to make representations and to contest the decision, and human intervention must be offered on request or as required by law.

Judgment, empathy, and responsible innovation should remain at the core of how we attract and engage talent.

Businesses also need clear policies for accountability and redress. Individuals must be able to contest decisions where their rights have been violated.

The launch of this National Hiring Strategy provides a critical opportunity. The firms that succeed will be those that blend machine efficiency with human empathy. They will recognise that technology is a means to an end: creating opportunities, unlocking potential, and building a labour market that works for everyone.

They will ensure we reach a faster, fairer, and safer UK labour market—without taking destructive shortcuts that leave people behind.

We stand at a moment of genuine possibility. The technology exists. The expertise is in this room. The Strategy provides the framework. Let’s embrace AI’s potential with optimism, but at the end of the day, hiring isn’t about algorithms or efficiency metrics—it’s about people, their livelihoods, and their futures. Thank you.

Media literacy has never been more urgent (7 December 2025)

This is a speech I recently gave at the launch of the Digital Poverty Alliance’s new report on media literacy in education.

With continuing Government efforts to see public services online alongside expanding AI usage, media literacy has never been more urgent. Debates surrounding media literacy typically focus on visible risks rather than the deeper structural issues that determine who cannot understand, interpret and contribute in the digital age.

I have the honour of serving as an Officer of the Digital Inclusion All-Party Parliamentary Group (APPG), and previously as Treasurer of the predecessor Data Poverty APPG. This issue—ensuring digital opportunities are universal—is crucial for many of us in Parliament.

The Urgent Case for Digital Inclusion

As many of us in this room know, digital inclusion is not an end in itself; it is a vital route to better education, to employment, to improved healthcare, and a key means of social connection. Beyond the social benefits, there are also huge economic benefits of achieving a fully digitally capable society. Research suggests that increased digital inclusion could result in a £13.7 billion uplift to UK GDP.

Yet, while the UK aspires to global digital leadership, digital exclusion remains a serious societal problem. The figures are sobering:

  • 1.7 million households have no mobile or broadband internet at home.
  • Up to a million people have cut back or cancelled internet packages in the past year as cost of living challenges bite.
  • Around 2.4 million people are unable to complete a single basic digital task required to get online.
  • Over 5 million employed adults cannot complete essential digital work tasks.
  • Basic digital skills are set to become the UK’s largest skills gap by 2030.
  • And four in ten households with children do not meet the Minimum Digital Living Standard (MDLS).

The consequence of this is that millions of people are prevented from living a full, active, and productive life, which is bad for them and bad for the country. This is why the core mission of the DPA—to tackle device, data, and skills poverty—is so essential.

Media Literacy: Addressing the Structural Roots of Exclusion

Today, the DPA is launching its Media Literacy Report, and its timing could not be more important. With continuing Government efforts to move public services online, coupled with the rapid expansion of AI usage, media literacy has never been more urgent.

The DPA report wisely moves beyond focusing solely on the visible risks of the internet, such as misinformation, and addresses the deeper structural issues. Media literacy is inextricably linked to digital exclusion: the ability to understand, interpret, and contribute in the digital age is determined by access to devices, socio-economic background, and school policy. The report’s recommendations include the following:

  • School phone bans must be accompanied by extensive media literacy education, which is iterated and revisited at multiple stages. 
  • Teachers must receive meaningful training on media literacy. 
  • Parents must be supported with accessible guidance on media literacy.
  • Schools should consider peer-to-peer learning opportunities. 
  • Tech companies must disclose information on how recommendation algorithms function and select content.
  • AI generated information must be labelled as such. 
  • Verification ticks should be removed from accounts spreading misinformation, especially related to health. 

We risk consigning people to a world of second-class services if we do not provide the foundational skills required to engage critically, confidently, and safely with the online world. Crucially, the DPA’s work keeps those with lived experience of digital exclusion at the heart of the analysis, providing real-life stories from parents, teachers, and young people.

Tackling Data Poverty: The Affordability Challenge

One of the most immediate and significant barriers to inclusion is affordability—what we often refer to as data poverty. Two million households in the UK are currently struggling to pay for broadband, and Age UK hears from older people who find essential services—like checking bus times or dealing with benefits—impossible due to lack of digital confidence and the pressure to manage costs.

The current system relies heavily on broadband social tariffs as the primary fix, but uptake has been sluggish, with only 5% of eligible customers having signed up previously. This is due to confusion, low awareness, cost, and complexity.

The solution requires radical, coordinated action:

  1. Standardisation: All operators should offer social tariffs to an agreed industry standard on speed, price, and terms. This will make it easier for customers to compare and take advantage of these vital packages.
  2. Simplified Access: We welcome the work being done by the DWP to develop a consent model that uses Application Programming Interfaces (APIs) to allow internet service providers (ISPs) to confirm a customer’s eligibility for benefits, such as Universal Credit. This drastically simplifies the application journey for the customer.
  3. Sustainable Funding: My colleagues in Parliament and I have been keen to explore innovative funding methods. One strong proposal is to reduce VAT on broadband social tariffs to align with other essential goods (at least 5% or 0%). It has been calculated that reinvesting the tax receipts received from VAT on all broadband into a social fund could provide an estimated £2.1 billion per year to provide all 6.8 million UK households receiving means-tested benefits with equitable access.

Creating a Systemic, Rights-Based Approach

If we are to achieve a ‘Digital Britain by 2030’, we need more than fragmented, short-term solutions. We need a systematic, rights-based approach.

First, we must demand better data and universal standards. The current definition of digital inclusion, based on whether someone has accessed the internet in the past three months, is completely outdated. We should replace this outdated ONS definition with a more holistic and up-to-date approach, such as the Minimum Digital Living Standard (MDLS). This gives the entire sector a common goal.

Second, we must formally recognize internet access as an essential utility. We should think of the internet as critical infrastructure, like the water or power system. This would ensure better consumer protection.

Third, we must embed offline and physical alternatives. While encouraging digital use, we must ensure that people who cannot or do not wish to get online—such as many older people who prefer interacting with services like banking in person—have adequate, easy-to-access, non-digital options. Essential services like telephone helplines for government services, such as HMRC, and the national broadcast TV signal must be protected so the digital divide is not widened further.

Fourth, we must empower local and community infrastructure. Tackling exclusion must happen on the ground. We need to boost digital inclusion hubs and support place-based initiatives. This involves increasing the capacity and use of libraries and community centres as digital support centres and providing free Wi-Fi provision in public spaces. 

We should stand ready to support the Government’s Digital Inclusion Action Plan, but we must continue to emphasize the need for a longer-term strategy that has central oversight, such as a dedicated cross-government unit, to ensure that every policy decision is digitally inclusive from the outset.

The commitment demonstrated by the Digital Poverty Alliance today, and by everyone in this room, proves that we can and must eliminate digital poverty and ensure no one is left behind.

Lord C-J at Writers’ All Party Group Annual Reception: We need duty of transparency (7 December 2025)

This evening’s winter reception of the All Party Writers Group takes place at an important moment for authors and writers. It is therefore especially appropriate that we are joined by Dr Clementine Collett, whose important new report, The Impact of Generative AI on the Novel, sets out in clear terms the risks and opportunities that generative technologies present for long‑form fiction.

Her work reinforces a message that writers, agents and publishers have been giving Parliament for some time: that generative AI must develop within a framework that protects the integrity of original work, the viability of creative careers and the trust of readers.

The starting point is the change of direction we have already seen. Following an overwhelming response to its consultation on copyright and AI, the Government has stepped back from its previously stated preferred option of a broad copyright exception for text and data mining. That proposal was regarded by authors and rightsholders as unfair, unworkable and difficult to reconcile with international norms. The decision to move away from it has been widely welcomed across the creative industries, and rightly so. 

The Government has recognised that copyright-protected creative content is not an input to be taken for granted, but an asset that needs clear, enforceable rights.

From the outset, rightsholders have been remarkably consistent in what they ask for. They want a regime based on transparency, licensing and choice. Transparency, so that authors know whether and how their works have been used in training AI systems and their rights can be enforced.  

Licensing, so that companies seeking to build powerful models on the back of that material do so on lawful terms. 

And choice, so that individual creators can decide whether their work is used in this way and, if so, on what conditions and at what price. Dr Collett’s report underlines just how crucial these principles are for novelists, whose livelihoods depend on the distinctiveness of their voice and the long‑term value of their backlist.

In parliamentary terms, much of this came into sharp relief during the passage of the Data (Use and Access) Bill, where many of us in both Houses were proud to support the amendments brought forward by Baroness Beeban Kidron. Those amendments reflected the concerns of musicians, authors, journalists and visual artists that their works were already being used to train AI models without their permission and without remuneration. They made it clear that they were not anti‑technology, but that innovation had to be grounded in respect for copyright and for the moral and economic rights that underpin creative work.

Those concerns are echoed in Dr Collett’s analysis of how unlicensed training can erode both the economic prospects of writers and the incentive to invest in new writing.

Since then, there have been some modest but important advances. We have seen a renewed emphasis from the Secretaries of State at DSIT and DCMS on supporting UK creatives and the wider creative industries. Preliminary and then technical working groups on copyright and AI have been convened, alongside new engagement forums on intellectual property for Members of both Houses. 

The Creative Industries Sector Vision, and the announcement of a Freelance Champion, signal an acceptance that the conditions for freelance writers must be improved if we want a sustainable pipeline of new work. For novelists in particular, whose incomes are often precarious and long‑term, the policy choices made now in relation to AI will have lasting consequences.

In parallel, the international context has moved rapidly. High‑profile litigation in the United States has demonstrated that the boundary between lawful and unlawful use of works for training models is real and enforceable, with significant financial consequences when it is crossed. The European Union has moved ahead with guidelines for general‑purpose AI under the AI Act, designed in part to give practical effect to copyright‑related provisions. 

Courts in the EU have begun to address the legality of training on protected works such as song lyrics. Other jurisdictions, including Australia and South Korea, are clarifying that there will be no blanket copyright exemptions for AI training and are setting out how AI‑generated material will sit within their systems.

Here in Parliament, the Lords Communications and Digital Committee has continued its inquiry into AI and copyright, taking evidence from leading legal experts. A number of points have emerged strongly from that work: that transparency is indispensable if rightsholders are to know when their works have been used; that purely voluntary undertakings in codes of practice are not sufficient; and that there is, as yet, no compelling evidence that the existing UK text and data mining exception in section 29A of the Copyright, Designs and Patents Act should be widened. Dr Collett’s report adds a vital literary dimension to this picture, examining how the widespread deployment of generative AI could reshape the market for fiction, the expectations of readers and the discovery of new voices if left unchecked.

Against this backdrop, the position of writers’ organisations has been clear. The Authors’ Licensing and Collecting Society, reflecting a survey of over 13,500 members, is firmly opposed to any new copyright exception that would weaken protection for works used in AI training. It argues instead for licensing models that give technology companies access to content while preserving genuine choice and control for creators.

Working with the Copyright Licensing Agency, ALCS is developing a specific licence for training generative AI systems, initially focused on professional, academic and business content, where licensing is already well embedded and where small language models can be tested in a controlled way. There is strong concern that, if left entirely to market forces, generative systems could flood the ecosystem with derivative material, making it harder for original voices to be heard and weakening the economic foundation of literary careers. That is why many in the sector argue that fiction should be approached with particular care, and that any licensing solutions must be robust, transparent and genuinely optional.

Looking ahead, several priorities suggest themselves. First, the Government should make clear that it will not re‑open the door to a broad copyright exception for AI training.

Secondly, it should actively support the development of practical licensing routes, including those being taken forward by ALCS and CLA, while recognising that fiction may require distinct treatment. 

Thirdly, transparency and record‑keeping obligations on AI developers should be strengthened so that rightsholders, including novelists, can identify when and how their works have been used.

Finally, Parliament should continue to scrutinise this area closely, informed by expert work such as Dr Collett’s and by the lived experience of writers represented through this All-Party Group.

The past year has shown what can be achieved when writers organise and speak with a united voice. The Government has shifted away from its most problematic proposals and has begun to engage more seriously with the issues.

But for authors the destination has not yet been reached. The aim must be a settlement in which creators can be confident that their rights will be respected, that they have meaningful choice over the use of their work in AI, and that they can share fairly in any new value created. This evening’s discussion, and the findings of Dr Collett’s report, are an important contribution to that task. This work must continue, but I believe we are now on the right path: one of balance, respect and creative confidence for and by our creators in the digital age.

When the Government launched its consultation on copyright and artificial intelligence, there was a strong sense of unease among creators and rights holders. Their response was overwhelming—and decisive. The Government quite rightly moved away from its original proposal to introduce a copyright exception for text and data mining. That so‑called “preferred option” would have been unfair to authors, unworkable in practice, and at odds with our international obligations under the Berne Convention and other frameworks.

Instead, the clear message from those who create—from writers and composers to journalists, artists and performers—was that transparency and choice must guide the use of their work in the age of AI. As many rightsholders stressed, a transparent licensing system would allow AI companies to gain legitimate access to creative material while ensuring that authors can exercise control and be remunerated fairly for the use of their works.

My Lords, I was proud to support the amendment tabled by Baroness Kidron to the Data (Use and Access) Bill earlier this year. I said then, and I say again tonight, that musicians, authors, journalists and visual artists have every right to be concerned about their work being used in the training of AI models without permission, transparency or remuneration. These creators are not seeking to halt innovation, but to ensure that innovation is lawful, ethical and sustainable. Only through trust and fairness can we achieve that balance.

Since then, welcome signs have emerged. A change of personnel at DSIT and DCMS has brought, I hope, a more vigorous commitment to our creative sectors. New engagement groups and technical working groups have been established, including those for Members of both Houses, to consider the complex interactions between copyright and AI. I commend that spirit of dialogue—but now we need to see outcomes, not just ongoing discussion.

The Government’s Creative Industries Sector Vision also set out ambitions that we can all share. The appointment of a Freelance Champion, long advocated by many of us, is especially welcome. We await news of how the role will evolve, but it is another step toward strengthening the creative economy that underpins so much of Britain’s soft power and international reputation.

Developments abroad remind us that we are not alone in this debate. In the United States, the landmark settlement between Anthropic and authors earlier this year, worth 1.5 billion dollars, demonstrates that AI companies cannot simply appropriate creative works without consequence. In Europe, the Commission is advancing guidelines for general-purpose AI under the AI Act, including measures to enforce copyright obligations. The Regional Court of Munich has likewise held OpenAI to account for reproducing protected lyrics in training outputs. Elsewhere, Australia has confirmed that it will not introduce a copyright exception, while South Korea moves ahead with its own AI-copyright framework.

Internationally, then, we see convergence around one simple idea: respect for copyright remains essential to confidence in creative and AI innovation alike.

That position is reflected clearly in the work of the Authors’ Licensing and Collecting Society. Its recent survey of over 13,000 members shows a striking consensus: loosening copyright rules would be counterproductive and unfair to writers. By contrast, licensing systems give creators choice and control, enabling them to decide whether—and on what terms—their works are used.

The ALCS, together with the Copyright Licensing Agency, is now developing an innovative licensing model for the training of generative AI systems. This is a pragmatic and forward-looking approach, beginning in areas like professional, academic and business publishing where licensing frameworks already operate successfully. It builds on systems that work, rather than tearing them down.

Of course, literary fiction is more sensitive territory, and the ALCS is right to proceed carefully. But experimentation in smaller, more structured datasets can be a valuable way to test principles and develop viable models. As the courts continue to deal with questions of historic misuse, this prospective route offers a constructive path forward.

The creative industries are united. They do not seek privilege, only parity. They oppose new copyright exceptions that would undermine markets and livelihoods, but they also recognise the need to make licensing work—so that ministers and AI companies cannot claim it is impractical or inadequate.

Much progress has been made. The Government is, at last, listening. But until creators can be confident that their rights will be respected, this campaign cannot rest.

Our writers, musicians and artists have given us immense cultural wealth. Ensuring that they share fairly in the new wealth created by artificial intelligence is not an impediment to innovation—it is the foundation of it. This work must continue, and I believe we are now on the right path: one of balance, respect and creative confidence in a digital age.

The post Lord C-J at Writers’ All Party Group Annual Reception: We need duty of transparency appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

Liberal Democrats Say No to Compulsory Digital ID https://www.lordclementjones.org/2025/10/19/liberal-democrats-say-no-to-compulsory-digital-id/?utm_source=rss&utm_medium=rss&utm_campaign=liberal-democrats-say-no-to-compulsory-digital-id Sun, 19 Oct 2025 15:49:12 +0000 https://www.lordclementjones.org/?p=76913 The Government recently announced the introduction of a mandatory requirement for Digital Identity to be used in right to work […]

The Government recently announced the introduction of a mandatory requirement for Digital Identity to be used in right to work checks.

The introduction of compulsory digital ID represents another fundamental error by this Government. The Liberal Democrats strongly oppose this proposal, which is a serious threat to privacy, civil liberties and social inclusion. We thank the Minister for bringing the Secretary of State’s Statement to this House today, but my disappointment and opposition to the Government’s plan more than mirrors that of my honourable friend Victoria Collins in the Commons yesterday.

The core issue here is not technology but freedom. The Government insist this scheme is non-compulsory, yet concurrently confirm that it will be mandatory for right-to-work checks by the end of this Parliament. This is mandatory digital ID in all but name, especially for working-age people. As my party leader Sir Ed Davey has stated, we cannot and will not support a system where citizens are forced to hand over private data simply to participate in everyday life. This is state overreach, plain and simple.

The Secretary of State quoted Finland and the ability of parents to register for daycare, but I think the Secretary of State needs to do a bit more research. That is a voluntary scheme, not a compulsory one. We have already seen the clear danger of mission creep. My honourable friend Victoria Collins rightly warned that the mere discussion of extending this scheme to 13 to 16-year-olds is sinister, unnecessary and a clear step towards state overreach. Where does this stop?

The Secretary of State sought to frame this as merely a digital key to unlock better services. This dangerously conflates genuine and desirable public service reform with a highly intrusive mandate. First, the claim that this will deliver fairness and security by tackling illegal migration is nothing more than a multibillion-pound gimmick. The Secretary of State suggests that it will deter illegal working, yet, as my colleagues have pointed out, rogue employers who operate cash-in-hand schemes will not look at ID on a phone. Mandatory digital ID for British citizens will not stop illegal migrants working in the black economy.

Secondly, the claim that the system will be free is disingenuous. As my honourable friend Max Wilkinson, our home affairs spokesman, demanded, the Government must come clean on the costs and publish a full impact assessment. Estimates suggest that creating this system will cost between £1 billion and £2 billion, with annual running costs of £100 million. This is completely the wrong priority at a time when public services are crumbling.

Thirdly, the promise of inclusion rings hollow. This mandatory system risks entrenching discrimination against the millions of vulnerable people, such as older people and those on low incomes, who lack foundational digital skills, a smartphone or internet access.

The greatest concern is the Government’s insistence on building this mandatory system on GOV.UK’s One Login, a platform with security failures that have been repeatedly and publicly criticised, including in my own correspondence and meetings with government. There are significant concerns about One Login’s security. The Government claim that One Login adheres to the highest security standards. Despite this commitment, as of late 2024 and early 2025, the system was still not fully compliant. A GovAssure assessment found that One Login was meeting only about 21 of the 39 required outcomes in the NCSC cyber assessment framework. The GOV.UK One Login programme has told me that it is committed to achieving full compliance with the cyber assessment framework by 21 March 2026, yet officials have informed me that 500 services across 87 departments are already in scope for the One Login project.

There are other criticisms that I could make, but essentially the foundations of the digital ID scheme are extremely unsafe, to say the least. To press ahead with a mandatory digital ID system, described as a honeypot for hackers, based on a platform exhibiting such systemic vulnerabilities is not only reckless but risks catastrophic data breaches, identity theft and mass impersonation fraud. Concentrating the data of the entire population fundamentally concentrates the risk.

The Secretary of State must listen to the millions of citizens who have signed the petition against this policy. We on these Benches urge the Government to scrap this costly, intrusive and technologically unreliable scheme and instead focus on delivering voluntary, privacy-preserving digital public services that earn the public’s trust rather than demanding compliance.

The post Liberal Democrats Say No to Compulsory Digital ID appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

A Defence of the Online Safety Act: Protecting Children While Ensuring Effective Implementation https://www.lordclementjones.org/2025/09/08/a-defence-of-the-online-safety-act-protecting-children-while-ensuring-effective-implementation/?utm_source=rss&utm_medium=rss&utm_campaign=a-defence-of-the-online-safety-act-protecting-children-while-ensuring-effective-implementation Mon, 08 Sep 2025 08:18:06 +0000 https://www.lordclementjones.org/?p=76894 Some recent commentary on the Online Safety Act seems to treat child protection online as an abstract policy preference. The […]

Some recent commentary on the Online Safety Act seems to treat child protection online as an abstract policy preference. The evidence reveals something far more urgent. By age 11, 27% of children have already been exposed to pornography, with the average age of first exposure at just 13. Twitter (X) alone accounts for 41% of children’s pornography exposure, followed by dedicated sites at 37%.

The consequences are profound and measurable. Research shows that 79% of 18 to 21-year-olds have seen content involving sexual violence before turning 18, and young people aged 16 to 21 are now more likely to assume that girls expect or enjoy physical aggression during sex. Close to half (47%) of all respondents aged 18 to 21 had experienced a violent sex act, with girls the most impacted.

When we know that children’s accounts on TikTok are shown harmful content every 39 seconds, with suicide content appearing within 2.6 minutes and eating disorder content within 8 minutes, the question is not whether we should act, but how we can act most effectively.

This is not “micromanaging” people’s rights – this is responding to a public health emergency that is reshaping an entire generation’s understanding of relationships, consent, and self-worth.

Abstract arguments about civil liberties need to be set against the voices of bereaved families who fought for the Online Safety Act. The parents of Molly Russell, Frankie Thomas, Olly Stephens, Archie Battersbee, Breck Bednar, and twenty other children who died following exposure to harmful online content did not campaign for theoretical freedoms – they campaigned for their children’s right to life itself.

These families faced years of stonewalling from tech companies who refused to provide basic information about the content their children had viewed before their deaths. The Act now requires platforms to support coroner investigations and provide clear processes for bereaved families to obtain answers. This is not authoritarianism – it is basic accountability.

To repeal the Online Safety Act would indeed be a massive own-goal and a win for Elon Musk and the other tech giants who care nothing for our children’s safety. The protections of the Act were too hard won, and are simply too important, to turn our back on.

The conflation of regulating pornographic content with censoring legitimate information is neither accurate nor helpful, but we must remain vigilant against mission creep. As Victoria Collins MP and I have highlighted in our recent letter to the Secretary of State, supporting the Act’s core mission does not mean we should ignore legitimate concerns about its implementation. Parliament must retain its vital role in scrutinising how this legislation is being rolled out to ensure it achieves its intended purpose without unintended consequences.

There are significant issues emerging that Parliament must address:

Age Assurance Challenges: The concern that children may use VPNs to sidestep age verification systems is real, though it should not invalidate the protection provided to the majority who do not circumvent these measures. We need robust oversight to ensure age assurance measures are both effective and proportionate.

Overreach in Content Moderation: The age-gating of political content and categorisation of educational resources like Wikipedia represents a concerning drift from the Act’s original intent. The legislation was designed to protect children from harmful content, not to restrict access to legitimate political discourse or educational materials. Wikimedia’s legal challenge regarding its categorisation illustrates this. While Wikipedia’s concerns about volunteer safety and editorial integrity are legitimate, their challenge does not oppose the Online Safety Act as a whole, but rather seeks clarity about how its unique structure should be treated under the regulations.

Protecting Vulnerable Communities: When important forums dealing with LGBTQ+ rights, sexual health, or other sensitive support topics are inappropriately age-gated, we risk cutting off vital lifelines for young people who need them most. This contradicts the Act’s protective purpose.

Privacy and Data Protection: While the Act contains explicit privacy safeguards, ongoing vigilance is needed to ensure age assurance systems truly operate on privacy-preserving principles with robust data minimisation and security measures.

The solution to these implementation challenges is not repeal, but proper parliamentary oversight. Parliament needs the opportunity to review the Act’s implementation through post-legislative scrutiny and the chance to examine whether Ofcom is interpreting the legislation in line with its original intent and whether further legislative refinements may be necessary.

A cross-party Committee from both Houses would provide the essential scrutiny needed to ensure the Act fulfils its central aim of keeping children safe online without unintended consequences.

Fundamentally and importantly, this approach aligns with core liberal principles. John Stuart Mill’s harm principle explicitly recognises that individual liberty must be constrained when it causes harm to others.
The post A Defence of the Online Safety Act: Protecting Children While Ensuring Effective Implementation appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

The Great AI Copyright Battle: Why Transparency Matters https://www.lordclementjones.org/2025/06/19/the-great-ai-copyright-battle-why-transparencys-matters/?utm_source=rss&utm_medium=rss&utm_campaign=the-great-ai-copyright-battle-why-transparencys-matters Thu, 19 Jun 2025 07:58:33 +0000 https://www.lordclementjones.org/?p=76878 We have recently had unprecedented “ping pong” between the Lords and Commons on whether to incorporate provisions in the Data […]

We have recently had unprecedented “ping pong” between the Lords and Commons on whether to incorporate provisions in the Data (Use and Access) Bill (now Act) which would ensure that AI developers would be required to be transparent about the copyright content used to train their models. Liberal Democrats in both the Lords and Commons consistently supported this change throughout. This is why.

As Co-chair of the All-Party Parliamentary Group on Artificial Intelligence and now Chair of the Authors’ Licensing and Collecting Society (ALCS), I find myself at the epicentre of one of the most significant intellectual property debates of our time.

The UK’s creative industries are economic powerhouses, contributing £126 billion annually while safeguarding our cultural identity. Yet they face an existential challenge: the wholesale scraping of copyrighted works from the web to train AI systems without permission or payment.

The statistics are stark. A recent ALCS survey revealed that 77% of writers don’t even know if their work has been used to train AI systems. Meanwhile, 91% believe their permission should be required, and 96% want compensation for use of their work. This isn’t anti-technology sentiment – it’s about basic fairness.

From Sir Paul McCartney to Sir Elton John, hundreds of prominent creatives have demanded action. They’re not opposing AI innovation; many already use AI in their work. They simply want their intellectual property rights respected so they can continue making a living.

December’s government consultation on Copyright and AI proposed a text and data mining exception with an opt-out mechanism for rights holders. This approach fundamentally misunderstands the problem. It places the burden on creators to police the internet in order to protect their own works – an impossible task given the scale and opacity of AI training.

The creative sector’s opposition has been overwhelming. The proposed framework would undermine existing copyright law while making enforcement practically impossible. As I’ve consistently argued, existing copyright law is sufficient if properly enforced – what we need is mandatory transparency.

During debates on the Data (Use and Access) Bill, Baroness Kidron championed amendments requiring AI developers to disclose copyrighted material used in training data. These amendments received consistent support from all Liberal Democrat MPs and peers, crossbench peers, and many Labour and Conservative backbench peers.

The government’s resistance has been remarkable. Despite inserting a requirement for an economic impact assessment and a report on copyright use in AI development, they have opposed mandatory transparency, leading to an unprecedented “ping-pong” debate between the Houses.

Transparency isn’t about stifling innovation – it’s about enabling legitimate licensing. How can creators license their work if they don’t know who’s using it? How can fair compensation mechanisms develop without basic disclosure of what’s being used?

The current system allows AI companies to harvest vast quantities of creative content while claiming ignorance about specific sources. This creates a fundamental power imbalance where billion-dollar tech companies benefit from the work of individual creators who remain entirely in the dark.

The solution isn’t complex. Mandatory transparency requirements would enable:

  • Creators to understand how their work is being used
  • Development of fair licensing mechanisms
  • Preservation of existing copyright frameworks
  • Continued AI innovation within legal boundaries

This debate reflects deeper concerns about AI innovation coming at the expense of human creativity. The government talks about supporting creative industries while simultaneously weakening the intellectual property protections that sustain them.

We need policies that recognise the symbiotic relationship between human creativity and technological advancement. AI systems trained on creative works should provide some return to those creators, just as streaming platforms pay royalties for music usage.

The government has so far failed to rise to this challenge. But with continued parliamentary pressure and overwhelming creative sector support, we can still achieve a framework that protects both innovation and creativity.

The question isn’t whether AI will transform creative industries – it’s whether that transformation will be fair, transparent, and sustainable for the human creators whose work makes it all possible.

The post The Great AI Copyright Battle: Why Transparency Matters appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.

Less Talk More Action on Scale Ups https://www.lordclementjones.org/2025/06/14/less-talk-more-action-on-scale-ups/?utm_source=rss&utm_medium=rss&utm_campaign=less-talk-more-action-on-scale-ups Sat, 14 Jun 2025 09:59:38 +0000 https://www.lordclementjones.org/?p=76866 The House of Lords recently debated the conclusions and recommendations of the Report from the Communications and Digital Committee AI and […]

The post Less Talk More Action on Scale Ups appeared first on Lord Clement-Jones | Speaker AI and Creative Industries.
