I recently gave a talk to the Engineers’ Association of my alma mater, Trinity College Cambridge. This is what I said:

Video here: https://www.youtube.com/watch?v=2Wnf97_Zu5E


You may ask how and why I have been sucked into the world of AI. Well, eight years ago I set up a cross-party group in the UK Parliament because I thought parliamentarians didn’t know enough about it, and then, on the basis that in the kingdom of the blind the one-eyed man is king, I was asked to chair the House of Lords Special Inquiry Select Committee on AI, with the remit “to consider the economic, ethical and social implications of advances in artificial intelligence”. The Committee produced its report, “AI in the UK: Ready, Willing and Able?”, in April 2018. It took a close look at government policy towards AI and its ambitions in the very early days of policy-making, when the UK was definitely ahead of the pack.

Since then I have been lucky enough to act as an adviser to the Council of Europe’s working party on AI (CAHAI) and to the One AI Group of OECD AI Experts, and to help establish the OECD Global Parliamentary Network on AI, which tracks developments in AI and the policy responses to them, which come thick and fast.

Artificial intelligence presents opportunities in a whole variety of sectors. I am an enthusiast for the technology, and the opportunities for AI are incredibly varied; I recently wrote an upbeat piece on the way that AI is already transforming healthcare.

Many people find it unhelpful to have such a variety of different types of machine learning, algorithms, neural networks and deep learning all labelled AI. But the expression has been used since John McCarthy coined it in 1956, and I think we are stuck with it!

Nowadays barely a day goes by without some reference to AI in the news media, particularly to some aspect of large language models. We saw the excitement over ChatGPT from OpenAI and text-to-image applications such as DALL-E, and now we have GPT-4 from OpenAI, Llama from Meta, Claude from Anthropic, Gemini from Google, Stable Diffusion from Stability AI, Copilot from Microsoft, Cohere, Midjourney: a whole ecosystem of LLMs and generative models of various kinds.

Increasingly the benefits are seen not just in terms of efficiency and speed in analysis, pattern detection and prediction but, with generative AI, much more in terms of what AI can add creatively to human endeavour and how it can augment what we do.

But things can go wrong. This isn’t just any old technology. The degree of autonomy, its very versatility, its ability to create convincing fakes, the lack of human intervention and the black-box nature of some systems make it different from other tech. The challenge is to ensure that AI is our servant, not our master, especially before the advent of AGI.

Failure to tackle issues such as bias and discrimination, deepfakery and disinformation, and lack of transparency will lead to a loss of public and consumer trust, reputational damage and an inability to deploy new technology. Public trust and trustworthy AI are fundamental to continued advances in the technology.

It is clear that AI, even in its narrow form, will and should have a profound impact on and implications for corporate governance, in terms of the need to ensure responsible and ethical AI adoption. The AI Safety Summit at Bletchley Park (where, incidentally, my parents met) ended with a voluntary corporate pledge.

This means that a more values-driven approach to the adoption of new technology needs to be taken. Engagement from the board, through governance, right through to policy implementation is crucial. This is not a matter that can simply be delegated to the CTO or CIO.

In particular, it means assessing the ethics of adopting AI and the ethical standards to be applied corporately. It may involve the establishment of an ethics advisory committee. It certainly involves clear board accountability.

We have a pretty good common set of principles, from the OECD and the G20, which are generally regarded as the gold standard and which, if adopted, can help us ensure:

  • Quality of training data
  • Freedom from bias
  • The impact on individual civil and human rights
  • Accuracy and robustness
  • Transparency and explainability, which of course include the need for open communication where these technologies are deployed.

And now we have the G7 Principles for Organizations Developing Advanced AI Systems to back those up.

Generally, in business and in the tech research and development world, I think there is an appetite for the adoption of common standards incorporating ethical principles for:

  • Risk management
  • Impact assessment
  • Testing
  • AI audit
  • Continuous monitoring

And I am optimistic that common standards can be achieved internationally in all these areas. The OECD is doing a great deal internationally to scope the opportunity and enable convergence. Our own AI Standards Hub, run by the Alan Turing Institute, is heavily involved, as are NIST in the US and the EU’s standards bodies CEN and CENELEC.

Agreement on the actual regulation of AI, however, in terms of which elements of governance and application of standards should be mandatory, is much more difficult.

In the UK there are already some elements of a legal framework in place. Even without specific legislation, AI deployment in the UK will interface with existing legislation and regulation, in particular relating to:

  • Personal data under UK GDPR
  • Discrimination and unfair treatment under the Human Rights Act and Equality Act
  • Product safety and public safety legislation
  • And various sector-specific regulatory regimes requiring oversight and control by persons undertaking regulated functions: the FCA for financial services, for example, and in future Ofcom for social media.

But when it comes to legislation and regulation that is specific to AI, over transparency, explainability and liability for example, that is where some of the difficulties and disagreements start emerging, especially given the UK’s approach in its recent White Paper and the government’s response to the consultation.

Rather than regulating in the face of clear current evidence of risk from the many uses and forms of AI, the government says it is all too early to think about tackling the clear risks in front of us: more research is needed, and we are expected to wait until we have a complete understanding and experience of the risks involved. Effectively, in my view, we are being treated as guinea pigs to see what happens, whilst the government talks about the existential risks of AGI instead.

And we shouldn’t just focus on existential long-term risk, or indeed on risk from frontier AI; predictive AI is important too, in terms of automated decision-making, the risk of bias and the lack of transparency.

The government says it wishes its regulation to be innovation-friendly and context-specific, but, sticking to its piecemeal, context-specific approach, it is suggesting neither immediate regulation nor any new powers for sector regulators.

But regulation is not necessarily the enemy of innovation; it can in fact be a stimulus, and the key to gaining and retaining public trust in digital technology and its adoption, so that we can realise the benefits and minimise the risks.

The recent response to the AI White Paper has demonstrated the gulf between the government’s rhetoric about being world-leading in safe AI and its actual approach.

In my view we need a broad definition of AI and early, risk-based, overarching horizontal legislation across the sectors, ensuring conformity with standards for a proper risk-management framework and impact assessment when AI systems are developed and adopted.

Depending on the extent of the risk and impact assessed, further regulatory requirements would arise. Where a system is assessed as high-risk, there would be additional requirements to adopt standards of testing, transparency and independent audit.

What else is on my wish list? As regards its own use of AI and automated decision-making systems, the government needs firmly to embed the Algorithmic Transparency Recording Standard, alongside risk assessment, together with a public register of AI systems in use in government.

It also needs to beef up the Data Protection Bill in terms of the rights of data subjects in relation to automated decision-making, rather than water them down, and to retain and extend the Data Protection Impact Assessment and the Data Protection Officer role for use in AI regulation.

I also hope the government will take strong note of the House of Lords report on the use of copyrighted works by LLMs. The government has adopted its usual approach of relying on voluntary arrangements, but it is clear that this simply is not going to work. It needs to act decisively to make sure that these works are not ingested into LLM training without any return to rightsholders.

Luckily, others such as the EU, and even the US, contrary to many forecasts, are grasping the nettle. The EU’s AI Act is an attempt to grapple with the here-and-now risks in a constructive way, and in the US the White House Executive Order and bipartisan Congressional proposals show a much more proactive approach.

But the more our regulation diverges from that of other jurisdictions, the more difficult it gets for UK developers and for those who want to develop AI systems internationally.

International harmonization, interoperability or convergence, call it what you like, is in my view essential if we are to see developers able to commercialize their products on a global basis, assured that they are adhering to common ethical standards of regulation. This means working with the BSI, ISO, OECD and others towards the convergence of international standards. There are already several existing standards, such as ISO 42001 and 42006 and NIST’s AI Risk Management Framework, which can form the basis for this.

What I have suggested would, I believe, help provide the certainty, convergence and consistency we need to develop and invest more readily in responsible AI innovation in the UK. That, in my view, is the way to get real traction!