We recently held a debate in the Lords prompted by warnings from the Director General of MI5 about the dangers of AI. This is an expanded version of my speech.
My Lords, the Director General of MI5 has issued a stark warning: future autonomous AI systems, operating without effective human oversight, could themselves become a major security risk. He stated it would be “reckless” to ignore AI’s potential for harm. We must ask the Government directly: what specific steps are being taken to ensure we maintain control of these systems?
The urgency is underlined by events from mid-September 2025. Anthropic detected what they assessed to be the first documented large-scale cyber espionage campaign using agentic AI. AI is no longer merely generating content—it is autonomously developing plans, solving problems, and executing code to breach the security of organisations and states.
We are entering an era where AI systems chain tasks together and make decisions with minimal human input. As Yoshua Bengio, Turing Award winner and one of AI’s pioneers, has warned: these systems are showing signs of self-preservation. In experiments, AI models have chosen their own preservation over human safety when faced with such choices. Bengio predicts we could see major risks from AI within five to ten years, with systems potentially capable of autonomous proliferation.
Professor Stuart Russell describes this as the “control problem”—how to maintain power over entities that will become more powerful than us. He warns we have made a fundamental error: we are building AI systems with fixed objectives, without ensuring they remain uncertain about human preferences. This creates what he calls the “King Midas problem”—systems pursuing misspecified objectives with catastrophic results. Social media algorithms already demonstrate this, learning to manipulate humans and polarise societies in pursuit of engagement metrics.
Mustafa Suleyman, co-founder of DeepMind and now Microsoft’s AI CEO, has articulated what he calls the “containment problem”. Unlike previous technologies, AI has an inherent tendency toward autonomy and unpredictability. Traditional containment methods will prove insufficient. Suleyman recently stated that Microsoft will walk away from any AI system that risks escaping human control, but we must ask: will competitive pressures allow such principled restraint across the industry?
The scale of AI adoption makes these questions urgent. The Institution of Engineering and Technology (IET) reports that six in ten engineering employers are already using AI, with 61% expecting it to support productivity in the next five years. Yet this rapid deployment occurs against a backdrop of profound skills deficits and understanding gaps that directly undermine safety and control.
The barrier to entry for malicious actors is collapsing. We have evidence of UK-based threat actors using generative AI to develop ransomware-as-a-service for as little as £400. Tools like WormGPT operate without ethical boundaries, allowing novice cybercriminals to create functional malware. AI-enabled social engineering grows more sophisticated—deepfake video calls have already fooled finance workers into releasing $25 million to fraudsters. Studies suggest AI can now determine which keys are being pressed on a laptop with over 90% accuracy simply by analysing typing sounds during video calls.
The IET warns that there is no ceiling on the economic harm that cyberattacks could cause. AI can expose vulnerabilities in systems, and the data that algorithms are trained with could be manipulated by adversaries, causing AI systems to make wrong decisions by design. Cyber security is not just about prevention—businesses must model their response to breaches as part of routine planning. Yet cyber security threats evolve constantly, requiring chartered experts backed by professional organisations to share best practice.
So how is the Government working with tech companies to ensure that such weaknesses do not become systemic vulnerabilities?
The Government’s response, while active, appears fragmented. We have established the AI Security Institute—inexplicably renamed from the AI Safety Institute, though security and safety are distinct concepts. However, as BBC Tech correspondent Zoe Kleinman noted, the sector has grown tired of voluntary codes and guidelines. I have long argued, including in my support for Lord Holmes’s Artificial Intelligence (Regulation) Bill, that regulation need not be the enemy of innovation. Indeed, it can create certainty and consistency. Clear regulatory frameworks addressing algorithmic bias, data privacy, and decision transparency can actually accelerate adoption by providing confidence to potential users.
The Government needs to give clear answers in five areas which, in my view, are crucial to developing and retaining public trust in AI technology.
First, on institutional clarity and the definition of safety: The renaming of the AI Safety Institute to the AI Security Institute muddles two distinct concepts. Safety addresses preventing AI from causing unintended harm through error or misalignment. Security addresses protecting AI systems from being weaponised by adversaries. We need both, with clear mandates and regulatory teeth, not mere advisory powers.
Moreover, as the IET argues, we need a broader definition of AI safety that goes beyond physical harm. AI safety and risk assessment must encompass financial risks, societal risks, reputational damage, and risks to mental health, amongst other harms. Although the onus is on developers to prove their products are fit for purpose with no unintended consequences, further guidelines and standards on how this should be reported would support a regulatory environment that is pro-innovation while providing safeguards against harm.
Second, on regulatory architecture: For nine years, I have co-chaired the All-Party Parliamentary Group on AI. Throughout this time, I have watched us lag behind other jurisdictions. The EU AI Act, with its risk-based framework, started to come into effect this year. South Korea has introduced a framework AI Basic Act and, separately, a Digital Bill of Rights setting overarching principles for digital rights and governance. Singapore has comprehensive AI governance. China regulates public-facing generative AI with inspection regimes.
Meanwhile, our government continues its “pro-innovation” approach, which risks becoming a “no-regulation” approach. We need binding legislation with a broad definition of AI and early risk-based overarching requirements ensuring conformity with standards for proper risk management and impact assessment. As I have argued previously, this could build on existing ISO standards, which are designed to achieve international convergence and embody key principles that provide a good basis for risk management, ethical design, testing, training, monitoring and transparency, and which should be applied where appropriate.
Third, on transparency and understanding: There is profound concern over the lack of broader understanding and information surrounding AI. The IET reports that 29% of people surveyed had concerns about the lack of information around AI and lack of skills and confidence to use the technology, with over a quarter saying they wished there was more information about how it works and how to use it.
Fourth, on the specific challenges of agentic AI: Bengio warns that as AI models improve at abstract reasoning and planning, the duration of tasks they can solve doubles every seven months. He predicts that within five years, AI will reach human level for programming tasks. When systems can harvest credentials and extract data at thousands of requests per second, human oversight becomes physically impossible. The very purpose of agentic AI, as Oliver Patel of AstraZeneca noted, is to remove the human from the loop. This fundamentally breaks our traditional safety frameworks. We need new approaches—Russell’s proposal for machines that remain uncertain about human preferences, that understand their purpose is to serve rather than to achieve fixed objectives, deserves serious consideration.
Fifth, on skills, literacy and governance capability: The IET’s research reveals an alarming picture. Among employers that expect AI to be important for them, 50% say they don’t have the necessary skills. Thirty-two per cent of employers reported an AI skills gap at technician level. Most troubling of all, 46% say that senior management do not understand AI.
If nearly half of senior management across industry don’t understand AI, and if our civil servants and political leaders cannot grasp the fundamentals of agentic AI—its capabilities, its limitations, and crucially, its tendency toward self-preservation—they cannot be expected to govern it effectively. As I emphasised during debates on the Data (Use and Access) Bill, we must build public trust in data sharing and AI adoption. This requires not just safeguards but genuine understanding.
The lack of skills in AI is not only a safety concern but is hindering productivity and the ability to deliver contracts. To maximise AI’s potential, we need a suite of agile training programmes, such as short courses. While progress has been made with some government initiatives—funded AI PhDs, skills bootcamps—these do not go far enough to address the skills gaps appearing at the chartered and technician levels.
The intellectual property question also demands urgent attention. The use of copyrighted material to train large language models without licensing has sparked litigation and unprecedented parliamentary debate. We need transparency duties on developers to ensure creative works aren’t ingested into generative AI models without return to rights-holders. AI has also created discussion around the ownership of the data needed to train these algorithms, as well as around the impact of bias and underlying data quality on the information they produce. As AI spans every sector, coordinated regulation is imperative for consistency and clarity.
We must also address what Bengio calls the “psychosis risk”—that increasingly sophisticated AI companions will lead people to believe in their consciousness, potentially advocating for AI rights. As Suleyman argues, we must be clear: AI should be built for people, not to be a digital person.
There is one further dimension: sustainability. There is a unique juxtaposition between AI and sustainability—AI is a high consumer of energy, but also possesses huge potential to tackle climate change. Reports predict that the use of AI could help mitigate 5 to 10% of global greenhouse gas emissions by 2030. AI regulation should now look beyond the immediate risks of AI development to the much broader impact it has on the environment. There should be standards for the approval of new data centres in the UK, based on sustainability ratings.
The Government has committed to binding regulation for companies developing the most powerful AI models, yet progress remains slower than hoped. Notably, 60 countries—including Saudi Arabia and the UAE, but not Britain—signed the Paris AI Action Summit declaration in February this year, committing to ensuring AI is “open, inclusive, transparent, ethical, safe, secure and trustworthy”. Why are we absent from such commitments?
The question now is not whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. We need principles-based rather than prescriptive regulation, emphasising transparency and accountability without stifling creativity. But let’s be clear: voluntary approaches have failed. The time for binding regulation is now.
As Russell reminds us, Alan Turing answered the control question in 1951: “At some stage therefore we should have to expect the machines to take control.” Russell notes that our response has been as if an alien civilisation warned us by email of its arrival in 50 years, and we replied, “Humanity is currently out of the office.” We have now read the email. The question is whether we will act with the seriousness this moment demands, or whether we will allow competitive pressures and short-term thinking to override the fundamental imperative of maintaining human control over these increasingly powerful systems.
7th December 2025