Last year I helped to host a gathering of the Athens Roundtable in London. This is what I said at the conclusion of the morning session.

Colleagues, it has been an immense privilege to share this day with you at the Athens Roundtable. We have ranged from the most technical questions about frontier models to the most human concerns about our children, our democracies, and our shared security.

The unifying theme has been clear: the stakes of AI are now so great that “business as usual” in governance—incremental, fragmented, optional—simply will not do.

Fragmented governance, shared risks

What today’s discussions have exposed is a global governance landscape that is pulling apart at the very moment it needs to come together. The EU has moved ahead with a prescriptive, rights‑based AI Act, setting detailed obligations for high‑risk systems and outright bans for some uses. The United States, despite important executive action, still leans heavily on a market‑driven, innovation‑first model, with federal legislation uncertain and a patchwork of state rules emerging. The UK, for its part, has chosen a so‑called “pro‑innovation” and principles‑based approach, relying on existing regulators and voluntary guidance at a time when many of those regulators lack clear AI powers.

The result is an increasingly unstable environment for everyone who actually has to build, buy, and deploy AI systems across borders. Developers face competing definitions of risk and accountability, while users confront inconsistent protections and redress. And as trust in institutions erodes and geopolitical tensions rise, this fragmentation feeds the perception that no one is truly in charge of the most powerful technology of our age.

One of the most striking points of consensus today has been that standards now exist. We are no longer in the conceptual phase. ISO 42001 and related management‑system standards, NIST’s AI Risk Management Framework, OECD principles, and sectoral technical norms give us a workable toolkit for risk and impact assessment, testing, audit, and lifecycle governance. These shared standards are the bridge between high‑level ethics and real‑world accountability; without them, principles remain decorative rather than operational.

But the hard truth is that simply “encouraging” their use is no longer enough. Voluntary uptake has been patchy; shadow AI and unmanaged deployments are proliferating; and the systems with the greatest potential for societal harm are least likely to be governed by optional frameworks alone. If we truly believe that some AI‑driven outcomes are unacceptable—large‑scale societal destabilisation, systemic discrimination, catastrophic security failures—then we have to be honest that purely voluntary regimes will not prevent them.

Mandating safeguards where it matters most

So the call to action from this Roundtable should be unambiguous. For high‑risk systems—especially those used in public services, critical infrastructure, law enforcement, financial stability, and the systems that shape the digital lives of children—adoption of established AI safety and ethics standards must become mandatory, not aspirational. Governments and regulators should require documented risk and impact assessments, robust testing and audit, monitoring and incident reporting, and clear human accountability before deployment, with proportionate sanctions when those duties are ignored.

The same logic must increasingly apply to powerful foundation and open‑source models whose capabilities can be repurposed at scale. Left entirely to voluntary self‑governance, we risk a race to the bottom in which the most capable models are also the least constrained, and where once‑released weights cannot be recalled even when serious vulnerabilities are discovered. Mandated safeguards for powerful models—responsible release practices, security testing, traceability, and obligations on major deployers—are essential if we are to ensure the genie does not escape the bottle without any meaningful accountability.

A bolder, more coordinated politics of AI

None of this means stifling innovation; on the contrary, clear and interoperable rules are what give responsible innovation room to flourish. Business needs clarity, certainty, and consistency if it is to invest with confidence, and public trust will only follow if people can see that their rights are protected and that someone is answerable when things go wrong. That is why today’s conversation about unacceptable risks and serious incident preparedness must now translate into concrete steps: aligning regulatory approaches across jurisdictions, mandating core standards where the stakes are highest, and building the institutional capacity to detect, investigate, and learn from AI incidents before they cascade into crises.

The message from this Athens Roundtable should therefore be a challenge as much as a comfort: policymakers must be bolder. It is no longer sufficient to pilot principles, convene summits, and extol the virtues of standards while leaving their adoption to chance. If we want AI that strengthens democracy rather than eroding it, protects our children rather than profiling them, and supports a fair global economy rather than deepening divides, then we must move from shared concerns to joint, enforceable action.

Let this be a moment when we collectively raise our sights and our standards. Thank you.

ABOUT LORD CLEMENT-JONES

MEMBER HOUSE OF LORDS

Tim Clement-Jones CBE is a former Chair of the House of Lords Artificial Intelligence Select Committee and Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence. He is a Liberal Democrat Peer and the party’s spokesman for Science, Innovation and Technology in the House of Lords. Tim is Chair of the Board of the Authors’ Licensing and Collecting Society (ALCS) and a champion of the creative industries. He is President of Ambitious about Autism, the national autism education charity, and former Chair of the Council of Queen Mary University of London.
