Recently the House of Lords belatedly debated the follow-up report to the original House of Lords AI Committee report, No Room for Complacency. This is how I introduced it:

My Lords, the Liaison Committee report No Room for Complacency was published in December 2020, as a follow-up to our AI Select Committee report, AI in the UK: Ready, Willing and Able?, published in April 2018. Throughout both inquiries and right up until today, the pace of development here and abroad in AI technology, and the discussion of AI governance and regulation, has been extremely fast moving. Today, just as then, I know that I am attempting to hit a moving target. Just take, for instance, the announcement a couple of weeks ago about the new Gato—the multipurpose AI which can perform 604 functions—or, perhaps less optimistically, the Clearview fine. Both have relevance to what we have to say today.

First, however, I say a big thank you to the then Liaison Committee for the new procedure which allowed our follow-up report, to the current Lord Speaker, Lord McFall, in particular, and to those members of our original committee who took part. I give special thanks to the Liaison Committee team of Philippa Tudor, Michael Collon, Lucy Molloy and Heather Fuller, and to Luke Hussey and Hannah Murdoch from our original committee team, who more than helped bring the band, and our messages, back together.

So what were the main conclusions of our follow-up report? What was the government response, and where are we now? I shall tackle this under five main headings. The first is trust and understanding. The adoption of AI has made huge strides since we started our first report, but the trust issue still looms large. Nearly all our witnesses in the follow-up inquiry said that engagement continued to be essential across business and society, in particular to ensure that there is greater understanding of how data is used in AI, and that government must lead the way. We said that the development of data trusts must speed up. They were the brainchild of the Hall-Pesenti report back in 2017 as a mechanism for giving assurance about the use and sharing of personal data, but we now needed to focus on developing the legal and ethical frameworks. The Government acknowledged that the AI Council's road map took the same view and pointed to the ODI work and the national data strategy. However, there has been too little recent progress on data trusts. The ODI has done some good work, together with the Ada Lovelace Institute, but this needs taking forward as a matter of urgency, particularly guidance on the legal structures. If anything, the proposals in Data: A New Direction, presaging a new data reform Bill in the autumn, which propose watering down data protection, are a backward step.

More needs to be done generally on digital understanding. The digital literacy strategy needs to be much broader than digital media, and a strong digital competition framework has yet to be put in place. Public trust has not been helped by confusion and poor communication about the use of data during the pandemic, and initiatives such as the Government’s single identifier project, together with automated decision-making and live facial recognition, are a real cause for concern that we are approaching an all-seeing state.

My second heading is ethics and regulation. One of the main areas of focus of our committee throughout has been the need to develop an appropriate ethical framework for the development and application of AI, and we were early advocates for international agreement on the principles to be adopted. Back in 2018, the committee took the view that blanket regulation would be inappropriate, and we recommended an approach to identify gaps in the regulatory framework where existing regulation might not be adequate. We also placed emphasis on the importance of regulators having the necessary expertise.

In our follow-up report, we took the view that it was now high time to move on to agreement on the mechanisms on how to instil what are now commonly accepted ethical principles—I pay tribute to the right reverend Prelate for coming up with the idea in the first place—and to establish national standards for AI development and AI use and application. We referred to the work that was being undertaken by the EU and the Council of Europe, with their risk-based approaches, and also made recommendations focused on development of expertise and better understanding of risk of AI systems by regulators. We highlighted an important advisory role for the Centre for Data Ethics and Innovation and urged that it be placed on a statutory footing.

We welcomed the formation of the Digital Regulation Cooperation Forum. It is clear that all the regulators involved—I apologise for the initials in advance: the ICO, CMA, Ofcom and the FCA—have made great strides in building a centre of excellence in AI and algorithm audit and making this public. However, despite the publication of the National AI Strategy and its commitment to trustworthy AI, we still await the Government's proposals on AI governance in the forthcoming White Paper.

It seems that the debate within government about whether to have a horizontal or vertical sectoral framework for regulation still continues. However, it seems clear to me, particularly for accountability and transparency, that some horizontality across government, business and society is needed to embed the OECD principles. At the very least, we need to be mindful that the extraterritoriality of the EU AI Act means a level of regulatory conformity will be required and that there is a strong need for standards of impact, as well as risk assessment, audit and monitoring, to be enshrined in regulation to ensure, as techUK urges, that we consider the entire AI lifecycle.

We need to consider particularly what regulation is appropriate for those applications which are genuinely high risk and high impact. I hope that, through the recently created AI standards hub, the Alan Turing Institute will take this forward at pace. All this has been emphasised by the debate on the deployment of live facial recognition technology, the use of biometrics in policing and schools, and the use of AI in criminal justice, recently examined by our own Justice and Home Affairs Committee.

My third heading is government co-ordination and strategy. Throughout our reports we have stressed the need for co-ordination between a very wide range of bodies, including the Office for Artificial Intelligence, the AI Council, the CDEI and the Alan Turing Institute. On our follow-up inquiry, we still believed that more should be done to ensure that this was effective, so we recommended a Cabinet committee which would commission and approve a five-year national AI strategy, as did the AI road map.

In response, the Government did not agree to create a committee but they did commit to the publication of a cross-government national AI strategy. I pay tribute to the Office for AI, in particular its outgoing director Sana Khareghani, for its work on this. The objectives of the strategy are absolutely spot on, and I look forward to seeing the national AI strategy action plan, which it seems will show how cross-government engagement is fostered. However, the report on AI and public standards by the Committee on Standards in Public Life—I am delighted that the noble Lord, Lord Evans, will speak today—made clear the deficiencies in common standards in the public sector.

Subsequently, we now have an ethics, transparency and accountability framework for automated decision-making in the public sector, and more recently the CDDO-CDEI public sector algorithmic transparency standard, but there appears to be no central and local government compliance mechanism and little transparency in the form of a public register, and the Home Office still appears to be a law unto itself. We have AI procurement guidelines based on the World Economic Forum model but nothing relevant to them in the Procurement Bill, which is being debated as we speak. I believe we still need a government mechanism for co-ordination and compliance at the highest level.

The fourth heading is impact on jobs and skills. Opinions differ over the potential impact of AI but, whatever the chosen prognosis, we said there was little evidence that the Government had taken a really strategic view about this issue and the pressing need for digital upskilling and reskilling. Although the Government agreed that this was critical and cited a number of initiatives, I am not convinced that the pace, scale and ambition of government action really matches the challenge facing many people working in the UK.

The Skills and Post-16 Education Act, with its introduction of a lifelong loan entitlement, is a step in the right direction, and I welcome the renewed emphasis on further education and the new institutes of technology. The Government refer to AI apprenticeships, but apprenticeship levy reform is long overdue. The work of local digital skills partnerships and digital boot camps is welcome, but they are greatly under-resourced and only a patchwork. The recent Youth Unemployment Select Committee report Skills for Every Young Person noted the severe lack of digital skills and the need to embed digital education in the curriculum, as did the AI road map. Alongside this, we shared the priority of the AI Council road map for more diversity and inclusion in the AI workforce and wanted to see more progress.

At the less rarefied end, although there are many useful initiatives on foot, not least from techUK and Global Tech Advocates, it is imperative that the Government move much more swiftly and strategically. The All-Party Parliamentary Group on Diversity and Inclusion in STEM recommended in a recent report a STEM diversity decade of action. As mentioned earlier, broader digital literacy is crucial too. We need to learn how to live and work alongside AI.

The fifth heading is the UK as a world leader. It was clear to us that the UK needs to remain attractive to international research talent, and we welcomed the Global Partnership on AI initiative. The Government in response cited the new fast-track visa, but there are still strong concerns about the availability of research visas for entry to university research programmes. The failure to agree, and the resulting lack of access to, EU Horizon research funding could have a huge impact on our ability to punch our weight internationally.

How the national AI strategy is delivered in terms of increased R&D and innovation funding will be highly significant. Of course, who knows what ARIA may deliver? In my view, key weaknesses remain in the commercialisation and translation of AI R&D. The recent debate on the Science and Technology Committee’s report on catapults reminded us that this aspect is still a work in progress.

Recent Cambridge round tables have confirmed to me that we have a strong R&D base and a growing number of potentially successful spin-outs from universities, with the help of their dedicated investment funds, but when it comes to broader venture capital culture and investment in the later rounds of funding, we are not yet on a par with Silicon Valley in terms of risk appetite. For AI investment, we should now consider something akin to the dedicated film tax credit which has been so successful to date.

Finally, we had, and have, the vexed question of lethal autonomous weapon systems, which we raised in the original Select Committee report and in the follow-up, particularly in the light of the announcement at the time of the creation of the autonomy development centre in the MoD. Professor Stuart Russell, who has long campaigned on this subject, cogently raised the limitations of these weapons in his second Reith Lecture. In both our reports we said that one of the big disappointments was the lack of definition of "autonomous weapons". That position subsequently changed, and we were told in the Government's response to the follow-up report that NATO had agreed a definition of "autonomous" and "automated", but there is still no comprehensive definition of lethal autonomous weapon systems, despite evidence that they have clearly already been deployed in theatres such as Libya, and the UK has firmly set its face against LAWS limitation in international fora such as the CCW.

For a short report, our follow-up report covered a great deal of ground, which I have tried to cover at some speed today. AI lies at the intersection of computer science, moral philosophy, industrial education and regulatory policy, which makes how we approach the risks and opportunities inherent in this technology vital and difficult. The Government are engaged in a great deal of activity. The question, as ever, is whether it is focused enough and whether the objectives, such as achieving trustworthy AI and digital upskilling, are going to be achieved through the actions taken so far. The evidence of success is clearly mixed. Certainly there is still no room for complacency. I very much look forward to hearing the debate today and to what the Minister has to say in response. I beg to move.