The House of Commons Science and Technology Committee has launched an inquiry into the governance of artificial intelligence (AI).

This is what they said on launching it:

In July, the UK Government set out its emerging thinking on how it would regulate the use of AI. It is expected to publish proposals in a White Paper later this year, which the Committee would examine in its inquiry.

Used to spot patterns in large datasets, make predictions, and automate processes, AI’s role in the UK economy and society is growing. However, there are concerns around its use. MPs will examine the potential impacts of biased algorithms in the public and private sectors. A lack of transparency on how AI is applied and how automated decisions can be challenged will also be investigated.

In the inquiry, MPs will explore how risks posed to the public by the improper use of AI should be addressed, and how the Government can ensure AI is used in an ethical and responsible way. The Committee seeks evidence on the current governance of AI, whether the Government’s proposed approach is the right one, and how their plans compare with other countries.

Rt Hon Greg Clark MP, Chair of Science and Technology Committee, said:

“AI is already transforming almost every area of research and business. It has extraordinary potential but there are concerns about how the existing regulatory system is suited to a world of AI.

With machines making more and more decisions that impact people’s lives, it is crucial we have effective regulation in place. In our inquiry we look forward to examining the Government’s proposals in detail.”

These are the key questions they are asking:

  • How effective is current governance of AI in the UK?
  • What are the current strengths and weaknesses of current arrangements, including for research?
  • What measures could make the use of AI more transparent and explainable to the public?
  • How should decisions involving AI be reviewed and scrutinised in both public and private sectors?
  • Are current options for challenging the use of AI adequate and, if not, how can they be improved?
  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?
  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?
  • Is more legislation or better guidance required?
  • What lessons, if any, can the UK learn from other countries on AI governance?

This is the written evidence submitted to the Committee by myself and Coran Darling, a Trainee Solicitor and member of the global tech and life sciences sectors at DLA Piper.

Introduction

I, alongside Stephen Metcalfe MP, co-founded the All Party Parliamentary Group on Artificial Intelligence (“APPG”) in late 2016. The APPG is dedicated to informing parliamentarians of contextual developments and creating a community of interest around future policy regarding AI, its adoption, use, and regulation.

I was fortunate to then be asked to chair the House of Lords Special Inquiry Select Committee on AI, with the remit “to consider the economic, ethical, and social implications of advances in artificial intelligence”. As part of our work, the Select Committee produced its first report, “AI in the UK: Ready, Willing and Able?”, in April 2018. The report looked closely at the landscape of government policy towards AI and its ambitions for future development. This included, for example, the future plans contained in the Hall/Pesenti Review of October 2017, and those set out by former Prime Minister Theresa May in her speech to the World Economic Forum at Davos, including her aim for the UK to “lead the world in deciding how AI can be deployed in a safe and ethical manner.”

Since then, as well as continuing to co-chair the APPG, I have maintained a close interest in the development of UK policy in AI, chaired a follow-up to the Select Committee’s report, “AI in the UK: No Room for Complacency”, acted as an adviser to the Council of Europe’s working party on AI (“CAHAI”) and helped establish the OECD Global Parliamentary Network on AI.

Lord Clement-Jones

25th November 2022

Background

The Hall Pesenti Review (“Review”) was an independent review commissioned in March 2017 tasked with reporting on the potential impact of AI on the UK economy. While it did not tackle the question of ethics or regulation of AI, the Review made several key recommendations designed to set a clear course for UK AI strategy including that:

  • Data Trusts should be developed to provide proven and trusted frameworks to facilitate the sharing of data between organisations holding data and organisations looking to use data to develop AI;
  • the Alan Turing Institute should become the national institute for AI and data science with the creation of an International Turing AI fellowship programme for AI in the UK; and
  • a UK AI Council should be established to help coordinate and grow AI in the UK.

The Government’s subsequent “Industrial Strategy: building a Britain fit for the future”, published in November 2017 (“Industrial Strategy”), identified putting the UK “at the forefront of the artificial intelligence and data revolution” as one of four ‘Grand Challenges’ key to Britain’s future. At the same time, the Industrial Strategy recognised that ethics would be key to the successful adoption of AI in the UK. This led to the establishment of the Centre for Data Ethics and Innovation (“CDEI”) in late 2018, with the remit to “make sure that data and AI deliver the best possible outcomes for society, in support of their ethical and innovative use”. In early 2018, the Government went on to produce a £950m ‘AI Sector Deal’, which incorporated nearly all the recommendations of the Review and established a new Government Office for AI to coordinate their implementation.

Building on the work of the Review and the Industrial Strategy, the Select Committee’s original inquiry concluded that the UK was in a strong position to be among the world leaders in the development of AI. Our recommendations were designed to support the Government and the UK in realising the potential of AI for our society and our economy, and to protect against potential future threats and risks. We concluded that the UK had a unique opportunity to forge a distinctive role for itself as a pioneer in ethical AI. We did, however, emphasise that, if poorly handled, public confidence in AI could be significantly undermined.

In anticipation of the OECD AI Principles, which were adopted in 2019, the Select Committee proposed five principles that could form the basis of a cross-sector AI code, and which could be adopted both nationally and internationally.

We did not at that point recommend a new regulatory body for AI-specific regulation, but instead noted that such a framework of principles could underpin regulation, should it prove to be necessary, in the future and that existing regulators would be best placed to regulate AI in their respective sectors. The Government in its response accepted the need to retain and develop public trust through an ethical approach both nationally and internationally.

In December 2020, in the Select Committee’s follow-up report “AI in the UK: No Room for Complacency”, we examined the progress made by the Government since our earlier work. After interviews with government ministers, regulators, and other key players, the new report made several key recommendations. In particular, that:

  • greater public understanding was essential for the wider adoption of AI and active steps should be taken by the Government to explain to the general public the use of their personal data by AI;
  • the development of policy and mechanisms to safeguard the use of data, such as data trusts, needed to pick up pace, otherwise it risked being left behind by technological developments;
  • the time had come for the Government to move from deciding what the ethics are to how to instil them in the development and deployment of AI systems. We called for the CDEI to establish and publish national standards for the ethical development and deployment of AI;
  • users and policymakers needed to develop a better understanding of risk and how it can be assessed and mitigated, in terms of the context in which it is applied; and
  • that coordination between the various bodies involved in the development of AI, including the various regulators, was essential. The Government therefore needed to better coordinate its AI policy and the use of data and technology by national and local government. 

Despite the passage of time since the Industrial Strategy, the current governance of AI remains incomplete and unsatisfactory in several respects.

With respect to the use of data for training and inputs, such as for decision making and prediction, the UK General Data Protection Regulation (“GDPR”) and the Data Protection Act 2018 are important forms of governance. The Government’s “Data: A New Direction” consultation has, however, led to a new Data Protection Bill (“DP Bill”) which, while still in development, proposes major post-Brexit changes to the GDPR. These include significant amendments, such as no longer requiring firms to have a designated Data Protection Officer. The proposed DP Bill also waters down several provisions relating to data protection impact assessments. This holds the potential to create a divergence from the established data protection position in the UK, is likely to put at risk the EU adequacy decision of June 2021, and creates uncertainty for those wishing to use data for training and processing. The Government’s apparent intention to amend Article 22 of the GDPR, which gives citizens the right not to be subject to automated decision-making, creates further uncertainty and runs the risk of a lower level of governance over decisions made by AI systems.

A further area currently lacking a satisfactory approach is bias in decision making, arising from the improper use of datasets when training algorithms. While the Government’s own gap analysis is likely to show that equalities legislation covers discriminatory decisions made by AI trained on biased data, further consideration is needed of whether specific legal obligations on the use of AI should be introduced to actively mitigate that risk, rather than simply prohibiting a discriminatory outcome.

It is also the case that in many other areas of data and AI, there is no proper current governance in terms of binding legal duties that ensure that key internationally accepted ethical principles, such as those set out in the OECD AI Principles, are observed. These include:

  • Inclusive growth, sustainable development and well-being;
  • Human-centred values and fairness;
  • Transparency and explainability;
  • Robustness, security and safety; and
  • Accountability.

Despite the overall acceptance that the UK would need to develop policy or regulation in order to remain ahead of the curve, the UK’s National AI Strategy, published in September 2021, contained no discussion of ethics or regulation. Instead, an AI governance White Paper was promised at some point in 2022.

The subsequent publication of an AI policy paper and AI Action Plan in July 2022 did, however, indicate that the Government was committed to developing “a pro-innovation national position on governing and regulating AI.” It is expected that this will be used to develop the AI governance White Paper.

The Government’s stated approach is as follows:

“Establishing clear, innovation-friendly and flexible approaches to regulating AI will be core to achieving our ambition to unleash growth and innovation while safeguarding our fundamental values and keeping people safe and secure […] drive business confidence, promote investment, boost public trust and ultimately drive productivity across the economy.”

To facilitate its ‘pro-innovation’ approach, the Government has proposed several early cross-sectoral and overarching principles which build on the OECD AI Principles. These principles will, it seems, be interpreted and implemented by regulators within the context of the environment they oversee and will therefore be flexible to interpretation.

In terms of classifying AI within this ‘pro-innovation’ approach, rather than working to a clear definition of AI and determining what falls within scope, as the EU has chosen with its proposed AI Act, the UK has elected instead to set out the core principles of AI, allowing regulators to develop their own sector-specific definitions to match the evolving nature of the technology.

In my view, however, without a broad definition, an overarching duty to carry out risk and impact assessments, and subsequent regular audits to assess whether an AI system conforms to AI principles, the governance of AI systems will be deficient, if only because not every sector is regulated to the extent likely to be required. For example, except for certain specific products such as driverless cars, there is at present no accountability or liability regime for the operation of AI systems.

This is the case for the public sector as well as the private sector. While the Government has recognised the need for guidance for public sector organisations in the procurement and use of AI, there remains no central or local government compliance mechanism to put this into practice. There are therefore insufficient measures of transparency, such as a public register of the use of automated decision making, requiring oversight and assessment of the decisions being carried out by AI in public organisations. Furthermore, despite the efforts of parliamentarians, and of organisations such as the Ada Lovelace Institute, there is no material recognition by the Government that explicit legislation and/or regulation of intrusive AI technology, such as live facial recognition, is needed to prevent the arrival of the surveillance state.

In light of the National AI Strategy’s recognition of the need to gain public trust, and to encourage the wider use of trustworthy AI, the Government’s current proposals for a context-specific approach are inadequate. Above all, it must be clear that regulation is not necessarily the enemy of innovation. In fact, it can be the stimulus for, and the key to, gaining and retaining public trust in digital technology and its adoption. The Government’s approach could and should take the form of an overarching regulatory regime designed to ensure public transparency in the use of AI technologies and the recourse available across sectors for unethical use.

As currently proposed, an approach adopting divergent regulatory requirements across sectors would risk creating barriers for developers and adopters, who would have to navigate the regulatory obligations of multiple sectors. Where an AI system spans sectors, for example finance and telecoms, an organisation could have to understand and comply simultaneously with different regimes administered by the FCA, the Prudential Regulation Authority, and Ofcom.

For these reasons, a much more horizontal, cross-sectoral approach than the Government is proposing is needed for the development and adoption of AI systems. This should set out clear common duties to assess risk and impact and to adhere to common standards. Depending on the extent of the risk and impact assessed, further legal duties would arise.

The question “What lessons, if any, can the UK learn from other countries on AI governance?” should, in my view, extend wider and ask not just about the lessons but about the degree of harmonisation needed to ensure the most beneficial context for UK AI development, adoption, and assurance of ethical AI standards.

In its recent AI policy paper, the Government makes the surprising admission that a context-driven approach may lead to less uniformity between regulators and may cause confusion and apprehension for stakeholders, who will potentially need to consider multiple regimes, as well as extra-territorial obligations such as those of the proposed EU AI Act.

International harmonisation is, in my view, essential if we wish to see developers and suppliers able to commercialise their products globally, assured that they are adhering to common standards of regulation without lengthy verification on entry into each jurisdiction in which they operate.

This could come in the form of a national version of the EU’s approach, with regulation that harmonises the landscape across sectors and industries, or in the form of international agreement on the standards of risk and impact assessment to be adopted. Work on common standards (i.e. the tools that would be deployed if regulation were put in place) is bearing fruit, and may also assist organisations in ensuring conformity without navigating every subsector or jurisdiction with which they interact.

Most recently, we have seen the launch of the interactive AI Standards Hub by the Alan Turing Institute, with the support of the British Standards Institution and the National Physical Laboratory, which will provide users across industry, academia, and regulation with practical tools and educational materials to use and shape AI technical standards effectively. This in turn could lead to agreement on ISO standards with the EU and the US, where NIST is actively engaged in developing similar protocols.

Having a harmonised approach would help provide the certainty businesses would need to develop and invest in the UK more readily.

When it comes to dealing with our nearest trading partner, it may be favourable to go one step further. When the White Paper does emerge, I believe it is important that it recognises that a considerable degree of practical convergence between the UK and the EU is required, and that a risk-based form of horizontal, rather than purely sectoral, regulation is needed.

The Government is engaged in a great deal of activity. The question, therefore, is whether it is fast or focused enough, and whether its objectives (such as achieving trustworthy AI and harmonised international standards) will be achieved through the actions taken so far. As it stands, this does not look to be the case.


Lord Clement-Jones,

Coran Darling