How the OECD’s AI system classification work added to a year of progress in AI governance

Despite the COVID pandemic, we can look back on 2020 as a year of positive achievement in progress towards understanding what is needed in the governance and regulation of AI.

Lord C-J OECD Blog Jan 2021


Lord Tim Clement-Jones

Lord, House of Lords

January 6, 2021 — 7 min read

AI in 2020

It has never been clearer, particularly after this year of COVID and our ever greater reliance on digital technology, that we need to retain public trust in the adoption of AI.

To do that we need, whilst realizing the opportunities, to mitigate the risks involved in the application of AI. This brings with it the need for a clear standard of accountability.

A year of operationalizing AI ethical principles

2019 was the year of the formulation of high-level ethical principles for AI by the OECD, EU and G20. These are very comprehensive and provide the basis for a common set of international standards, but it has become clear that voluntary ethical guidelines are not enough to guarantee ethical AI.

There comes a point where the risks attendant on non-compliance with ethical principles are so high that policy makers need to understand when certain forms of AI development and adoption require enhanced governance and/or regulation. The key factor in 2020 has been the work done at international level in the Council of Europe, OECD and EU towards operationalizing these principles in a risk-based approach to regulation.

And these strands of work have been very complementary. The Council of Europe’s Ad Hoc Committee on AI (CAHAI) has drawn up a Feasibility Study for the regulation of AI which advocates a risk-based approach to regulation, as does last year’s EU White Paper on AI.

As the EU White Paper said: “As a matter of principle, the new regulatory framework for AI should be effective to achieve its objectives while not being excessively prescriptive so that it could create a disproportionate burden, especially for SMEs. To strike this balance, the Commission is of the view that it should follow a risk-based approach.”

The White Paper goes on to say:

“A risk-based approach is important to help ensure that the regulatory intervention is proportionate. However, it requires clear criteria to differentiate between the different AI applications, in particular in relation to the question whether or not they are ‘high-risk’. The determination of what is a high-risk AI application should be clear and easily understandable and applicable for all parties concerned.”

The Feasibility Study develops this further, discussing the nature of the risks, particularly to fundamental rights, democracy and the rule of law.

As the Study says: “These risks, however, depend on the application context, technology and stakeholders involved. To counter any stifling of socially beneficial AI innovation, and to ensure that the benefits of this technology can be reaped fully while adequately tackling its risks, the CAHAI recommends that a future Council of Europe legal framework on AI should pursue a risk-based approach targeting the specific application context. This means not only that the risks posed by AI systems should be assessed and reviewed on a systematic and regular basis, but also that any mitigating measures …should be specifically tailored to these risks.”


Governance must match the level of risk

Nonetheless, it is a complex matter to assess the nature of AI applications and their contexts, and to carry the consequent risks forward into models of governance and regulation. If we aspire to a risk-based regulatory and governance approach, we need to be able to calibrate the risk, which will in turn determine the necessary level of control.

Given this kind of calibration, there is a clear governance hierarchy to follow, depending on the rising risk involved. Where the risk is lower, actors can adopt a flexible approach such as a voluntary ethical code without a hard compliance mechanism. Where the risk is higher, they will need to institute enhanced corporate governance using business guidelines and standards, with clear disclosure and compliance mechanisms.

Then we have government best practice, such as the AI procurement guidelines developed by the World Economic Forum and adopted by the UK government. Finally, and as some would say as a last resort, comes comprehensive regulation enforceable by law, such as that being adopted for autonomous vehicles.

In regulating, developers need to be able to take full advantage of regulatory sandboxes, which permit the testing of a new technology without the threat of regulatory enforcement but with strict oversight and individual formal and informal guidance from the regulator.

There are any number of questions which arise in considering this governance hierarchy, but above all, we must ask ourselves if we have the necessary tools for risk assessment and a clear understanding of the necessary escalation in compliance mechanisms to match.

As has been well illustrated during the COVID pandemic, the language of risk is fraught with misunderstanding. When it comes to AI technologies, we need to assess risks such as the likely impact and probability of harm, the importance and sensitivity of the data used, the application within a particular sector, the risk of non-compliance and whether a human in the loop mitigates risk to any degree.
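To make this concrete, here is a toy sketch in Python of how such factors might be calibrated into the governance hierarchy described above. Every factor scale, weight and threshold is a hypothetical placeholder for discussion, not a methodology drawn from the OECD, CAHAI or EU work:

```python
# Illustrative only: a toy calibration of the risk factors named above
# (impact and probability of harm, data sensitivity, sector, human in the
# loop) onto the governance hierarchy sketched earlier. Every weight and
# threshold is a hypothetical placeholder, not an OECD, CAHAI or EU method.

from dataclasses import dataclass


@dataclass
class RiskFactors:
    impact_of_harm: float       # 0 (negligible) to 1 (severe)
    probability_of_harm: float  # 0 to 1
    data_sensitivity: float     # 0 (open data) to 1 (special-category data)
    sector_criticality: float   # 0 to 1 (e.g. health or policing near 1)
    human_in_the_loop: bool     # is there meaningful human review?


def governance_tier(f: RiskFactors) -> str:
    """Map a composite risk score onto an escalating governance tier."""
    score = (0.4 * f.impact_of_harm * f.probability_of_harm
             + 0.3 * f.data_sensitivity
             + 0.3 * f.sector_criticality)
    if f.human_in_the_loop:
        score *= 0.8  # assume human review mitigates, but does not remove, risk
    if score < 0.2:
        return "voluntary ethical code"
    if score < 0.5:
        return "enhanced corporate governance with disclosure and compliance"
    if score < 0.75:
        return "government best practice, e.g. procurement guidelines"
    return "comprehensive regulation enforceable by law"


# A fully automated re-offending prediction tool scores into the top tier.
print(governance_tier(RiskFactors(0.9, 0.6, 0.9, 1.0, False)))
# -> comprehensive regulation enforceable by law
```

The value of even a toy model like this is that it makes the mapping from risk factors to governance tier explicit and contestable; the hard policy work lies in agreeing the factors, weights and thresholds.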

AI systems classification framework at the OECD

The detailed and authoritative classification work carried out by the OECD Network of Experts Working Group on the Classification of AI systems comes at a crucial and timely point.

The preliminary classification framework of AI systems comprises four key pillars:

  1. Context: This refers to who is deploying the AI system and in what environment. It includes considerations such as the business sector, the breadth of deployment, the system maturity, the stakeholders impacted and the overall purpose, such as for profit or not for profit.
  2. Data and Input: This refers to the provenance of the data the system uses: where and by whom it has been collected, the way it evolves and is updated, its scale and structure, whether it is public, private or personal, and its quality.
  3. AI Model: This refers to the underlying particularities that make up the AI system. Is it, for instance, a neural network or a linear model? Supervised or unsupervised? A discriminative or generative model, probabilistic or non-probabilistic? How does it acquire its capabilities: from rules or machine learning? How far does the AI system conform to ethical design principles such as explainability and fairness?
  4. Task and Output: This examines what the AI system actually does. What are the outputs that make up the results of its work? Does it forecast, personalize, recognize or detect events, for example?

Within the Context heading, the framework includes consideration of the benefits and risks to individuals in terms of impact on human rights and wellbeing, and effects on infrastructure and how critical sectors function. To fit with the CAHAI and EU risk-based approach and be of maximum utility, however, this should really be an overarching consideration after all the other elements have been assessed.
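To make the four pillars concrete, here is a minimal sketch, assuming Python, of how a policy team might record a classification as a data structure. The field names paraphrase the pillars described above and are illustrative; this is not the OECD’s official schema:

```python
# A minimal sketch of the four-pillar classification as a data structure.
# Field names paraphrase the pillars described above; illustrative only.

from dataclasses import dataclass, field


@dataclass
class Context:
    sector: str                   # e.g. "health", "finance"
    breadth_of_deployment: str    # e.g. "pilot", "national"
    system_maturity: str
    stakeholders_impacted: list[str] = field(default_factory=list)
    purpose: str = "for-profit"   # or "not-for-profit"


@dataclass
class DataAndInput:
    provenance: str               # where and by whom the data was collected
    dynamic: bool                 # does the data evolve and get updated?
    scale: str
    structured: bool
    personal_data: bool           # public, private or personal in origin
    quality_notes: str = ""


@dataclass
class AIModel:
    model_family: str             # e.g. "neural network", "linear model"
    learning: str                 # "supervised", "unsupervised" or "rules"
    generative: bool              # discriminative vs generative
    probabilistic: bool
    explainable_by_design: bool   # conformance with ethical design principles


@dataclass
class TaskAndOutput:
    task: str                     # "forecast", "personalize", "recognize", "detect"
    outputs: list[str] = field(default_factory=list)


@dataclass
class AISystemClassification:
    context: Context
    data_and_input: DataAndInput
    model: AIModel
    task_and_output: TaskAndOutput


# Example: a hypothetical credit-scoring system
example = AISystemClassification(
    context=Context("finance", "national", "in production", ["loan applicants"]),
    data_and_input=DataAndInput("credit bureau records", dynamic=True,
                                scale="millions of records", structured=True,
                                personal_data=True),
    model=AIModel("gradient-boosted trees", "supervised", generative=False,
                  probabilistic=True, explainable_by_design=False),
    task_and_output=TaskAndOutput("forecast", ["default probability"]),
)
```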

Also see: A first look at the OECD’s Framework for the Classification of AI Systems, designed to give policymakers clarity

The fundamental risks of algorithmic decision-making

One of the key questions, of course, is whether, on the basis of this kind of classification and risk assessment, there are early candidates for regulation.

The Centre for Data Ethics and Innovation, created in the UK two years ago, recently published its AI Barometer Report, which also discusses risk and regulation and found a common core of risk across sectors.

It says: “While the top-rated risks varied from sector to sector, a number of concerns cropped up across most of the contexts we examined. This includes the risks of algorithmic bias, a lack of explainability in algorithmic decision-making, and the failure of those operating technology to seek meaningful consent from people to collect, use and share their data.”

A good example of where some of these issues have already arisen is the use of live facial recognition technology, which is becoming widespread. It is unusual for London’s Metropolitan Police Commissioner to describe a new technology as Orwellian (in reference to George Orwell’s seminal novel “1984”, which coined the phrase “Big Brother”), as she did last year when talking about live facial recognition, yet the Met is now beginning to adopt it at scale.

In addition, over the past few years we have seen a substantial increase in the adoption of algorithmic decision-making and prediction, or ADM, across central and local government in the UK. In criminal justice and policing, algorithms for prediction and decision-making are already in use.

Another high-risk AI technology which needs to be added to the candidates for regulation is the use of AI applications for recruitment processes as well as in situations impacting employees’ rights to privacy.

Future decision-making processes in financial services may be considered high risk and become candidates for regulation. This concerns areas such as credit scoring or determining insurance premiums by AI systems. 

AI risk and regulation in 2021 and beyond

The debate over hard and soft law in this area is by no means concluded. Denmark and a number of other EU member states have recently felt the need to put a stake in the ground with what is called a non-paper to the EU Commission over concerns that AI and other digital technologies may be overregulated in the EU’s plans for digital regulation.

Whether in the public or private sector, the cardinal principle must be that AI needs to be our servant not our master. Going forward, there is cause for optimism that experts, policy makers and regulators now recognize that there are varying degrees of risk in AI systems. We can classify and calibrate AI and develop the appropriate policies and solutions to ensure safety and trust. We can all as a result expect further progress in 2021.



https://oecd.ai/wonk/contributors/lord-tim-clement-jones


No Room for Complacency: Making ethical artificial intelligence a reality

OECD Blog Feb 2021

https://www.oecd-forum.org/posts/no-room-for-complacency-making-ethical-artificial-intelligence-a-reality

This article is part of a series in which OECD experts and thought leaders — from around the world and all parts of society — address the COVID-19 crisis, discussing and developing solutions now and for the future. Aiming to foster the fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge, opinions expressed do not necessarily represent the views of the OECD.



In April 2018, the House of Lords AI Select Committee I chaired produced its report AI in the UK: Ready, Willing and Able?, a special enquiry into the United Kingdom’s artificial intelligence (AI) strategy and the opportunities and risks afforded by it. It made a number of key recommendations that we have now followed up with a short supplementary report, AI in the UK: No Room for Complacency, which examines the progress made by the UK Government, drawing on interviews with government ministers, regulators and other key players in the AI field. 

Since the publication of our original report, investment in, and focus on, the United Kingdom's approach to artificial intelligence have grown significantly. In 2015, the United Kingdom saw GBP 245 million invested in AI. By 2018, this had increased to over GBP 760 million. In 2019, it was GBP 1.3 billion.


Artificial intelligence has been deployed in the United Kingdom in a range of fields—from agriculture and healthcare, to financial services, through to customer service, retail and logistics. It is being used to help tackle the COVID-19 pandemic, and is also being used to underpin facial recognition technology, deep fakes and other ethically challenging uses.

Our conclusion is that the UK Government has done well to establish a range of bodies to advise it on AI over the long term. However, we caution against complacency.

There are many bodies outside the framework of government that are to a greater or lesser extent involved in an advisory role: the AI Council, the Centre for Data Ethics and Innovation, the Ada Lovelace Institute and the Alan Turing Institute.

Co-ordination between the various bodies involved with the development of AI, including the various regulators, is essential. The UK Government needs to better co-ordinate its AI policy and the use of data and technology by national and local government.

A Cabinet Committee must be created; its first task should be to commission and approve a five-year strategy for AI. This strategy should prepare society to take advantage of AI, rather than feel it is being taken advantage of.

In our original report, we proposed a number of overarching principles providing the foundation for an ethical standard of AI for industry, government, developers and consumers. Since then, a clear consensus has emerged that ethical AI is the only sustainable way forward.

The United Kingdom is a signatory of the OECD Recommendation on AI, embodying five principles for responsible stewardship of trustworthy AI, and of the G20 non-binding principles on AI. This demonstrates the United Kingdom's commitment to collaborate on the development and use of ethical AI, but it is yet to take on a leading role.

The time has come for the UK Government to move from deciding what the ethics are, to how to instill them in the development and deployment of AI systems. We say that our government must lead the way on making ethical AI a reality. To not do so would be to waste the progress it has made to date, and to squander the opportunities AI presents for everyone in the United Kingdom.

We call for the Centre for Data Ethics and Innovation to establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks: one for the ethical development of AI, including issues of prejudice and bias; and the other for the ethical use of AI by policymakers and businesses. 

However, we have concluded that the challenges posed by the development and deployment of AI cannot necessarily be tackled by cross-cutting regulation. Understanding among users and policymakers needs to be developed through a better grasp of risk—and how it can be assessed and mitigated in terms of the context in which it is applied—so our sector-specific regulators are best placed to identify gaps in regulation.

AI will become embedded in everything we do. As regards skills, government inertia is a major concern. The COVID-19 pandemic has thrown these issues into sharp relief. As and when the COVID-19 pandemic recedes, and the UK Government addresses the economic impact of it, the nature of work will have changed and there will be a need for different jobs and skills.

This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. 

The Government needs to take steps so the digital skills of the United Kingdom are brought up to speed, as well as to ensure that people have the opportunity to reskill and retrain to be able to adapt to the evolving labour market caused by AI.

It is clear that the pace, scale and ambition of government action does not match the challenge facing many people working in the United Kingdom. It will be imperative for the Government to move much more swiftly. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.

The question at the end of the day remains whether the United Kingdom is still an attractive place to learn about and work in AI. Our ability to attract and retain top AI research talent is of paramount importance, and it would therefore be hugely unfortunate if the United Kingdom took a step back, with the result that top researchers were less willing to come here.

The UK Government must ensure that changes to the immigration rules promote—rather than obstruct—the study, research, and development of AI.


Lord Tim Clement-Jones

Former Chair of House of Lords Select Committee on AI / Co-Chair of APPG on AI, House of Lords, United Kingdom. Lord Clement-Jones was made CBE for political services in 1988 and a life peer in 1998. He is the Liberal Democrat House of Lords spokesperson for Digital. He is former Chair of the House of Lords Select Committee on AI, which sat from 2017-18; Co-Chair of the All-Party Parliamentary Group (“APPG”) on AI; a founding member of the OECD Parliamentary Group on AI; and a member of the Council of Europe’s Ad-hoc Committee on AI (“CAHAI”). He is a former member of the House of Lords Select Committees on Communications and the Built Environment, and a current member of the House of Lords Select Committee on Risk Assessment and Risk Planning. He is Deputy-Chair of the APPG on China and Vice-Chair of the APPGs on ‘The Future of Work’ and ‘Digital Regulation and Responsibility’. He is a Consultant of DLA Piper, where previous positions include London Managing Partner, Head of UK Government Affairs and Co-Chair of Global Government Relations. He is Chair of Ombudsman Services Limited, the not-for-profit, independent ombudsman providing dispute resolution for the communications, energy and parking industries. He is Chair of Council of Queen Mary University of London; Chair of the Advisory Council of the Institute for Ethical AI in Education; and Senior Fellow of the Atlantic Council’s GeoTech Center.


Why data trusts could help us better respond and rebuild from COVID-19 globally

Lord C-J April 2020

What are data trusts? What roles can data trusts play in the global response to COVID-19? What can the U.S. learn from the U.K.’s activities involving data trusts and AI? Please join the Atlantic Council’s GeoTech Center on Wednesday, April 15 at 12:30pm EDT, for a discussion with Lord Tim Clement-Jones, Dame Wendy Hall, and Dr. David Bray on the role of data trusts in the global response to and recovery from COVID-19. The discussion will cover data and AI activities occurring in the United Kingdom and what other countries can learn from these efforts.


https://www.youtube.com/watch?v=CyGYDAxyVbk
https://www.atlanticcouncil.org/event/why-data-trusts-could-help-us-better-respond-and-rebuild-from-covid19-globally/

The geopolitics of digital identity: Dr. David Bray and Lord Tim Clement-Jones

July 2020

Throughout the course of the COVID-19 pandemic, technologists have pointed out how digital identity systems could remedy some of the difficulties that we face as an open society suddenly unable to interact face-to-face. Even those who previously did not consider themselves to be “digital natives” have been forced to adopt a digital lifestyle, one in which traditional sources of identification and trust-building have become less useful.

Lord Tim Clement-Jones, a Nonresident Senior Fellow with the GeoTech Center, and Dr. David Bray, Director of the GeoTech Center, discussed the issue of digital identity in a recent event at the IdentityNorth Summit. Lord Clement-Jones pointed out how technologies for securely connecting an individual’s digital presence to their identity are not new, but have yet to be applied at a national scale, or in the universal manner that would be necessary to maximize their impact. He recognized, though, that certain applications of digital identity technology might be of concern to ordinary people; though he might be comfortable using his digital identity as part of the United Kingdom Parliament’s new system for MPs to vote, the average citizen might be concerned by their votes being tabulated digitally, or being connected to other facets of their online identity.

As a result, the experts emphasized how digital identity, in whatever forms it will take, needs to be inclusive of all individuals and experiences, regardless of, for example, their level of literacy or digital accessibility. Though analog identity systems are by no means perfect, initial pilot programs similar to the Canadian system in development will need to roll out a hybrid of both physical and digital forms of identity to protect against identity theft and misuse of digital identity systems.

Watch the video at the link below to hear more of Lord C-J’s commentary on what precautions must be taken to enable the success of digital identity in a post-COVID-19 world.

https://www.atlanticcouncil.org/insight-impact/in-the-news/the-geopolitics-of-digital-identity-dr-david-bray-and-lord-tim-clement-jones/

The UK's Role In The Future Of AI

Kathleen Walch, Contributor, COGNITIVE WORLD Contributor Group

Forbes Magazine April 2020

The UK has played an important role in the history and development of AI. Alan Turing, a British mathematician, is considered to be the father of theoretical computer science and has deep roots in AI as well.  In addition to crafting the foundations for modern computing, Turing envisioned the Turing test, which aims to determine a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. 

While the UK was heavily involved in AI development from the very first years, it also helped bring about the first AI Winter in the industry. The Lighthill report cast a deep shadow on AI’s promises and caused a sharp pullback in funding from the government, research institutions and universities. The report represented a pessimistic view of AI and was highly critical of many core aspects of research in the field.

However, with the resurgence of interest and investment in AI, the UK has likewise been making heavy investments, and as a result continues to show its strength in the field. According to a recent report by research firm Cognilytica, the United Kingdom has one of the strongest AI strategies in the world, with strong government funding for AI, strong research activity in the field, strong VC funding and AI startups, and strong enterprise activity and adoption of AI. (Disclosure: I’m a principal analyst at Cognilytica). So where is the UK heading with regards to its overall investment in and support of AI?

Parliament's Role in AI

The AI Today podcast interviewed Lord Tim Clement-Jones, Co-Chair of the All Party Parliamentary Group on Artificial Intelligence and former chair of the House of Lords Artificial Intelligence Select Committee. In 2017 the UK established an All Party Parliamentary Group on Artificial Intelligence to address ethical issues, industrial norms, regulatory options and social impact for AI in Parliament. Despite AI’s history of periods of little interest and funding, the times have changed. Lord Clement-Jones believes that AI is finally here to stay, which is why he set out to learn more about the future of AI with some of his peers. In doing this, he has ended up as a bit of an expert on the topic, and is now publicly speaking about what AI could mean for all of us.

Artificial intelligence is changing fast, and with it, we must consider what the future use of this technology might bring. Despite the fact that many people assume Silicon Valley is where the majority of development is being carried out, the reality is that AI is being developed all around the globe. In Cognilytica’s above-mentioned report, the countries leading the way include the United States, United Kingdom, France and Israel, with China, South Korea, Germany and many other countries very close behind on a range of factors. AI is being pursued by both governments and businesses alike, which means that there is some serious potential for unexpected breakthroughs, but also makes it nearly impossible to know what it might look like in the future.

Lord Tim Clement-Jones thinks that AI has the power to do some amazing things, given the broad spectrum it covers and the fact that it can be applied to many aspects of life and just about every single industry. However, just how it can and will be applied also makes it difficult to regulate. He is concerned with the ethics of this technology and how we can best go about creating and using AI ethically. The UK has designated itself as a hopeful leader in ethical AI development, but the concept of ethical and responsible AI is still relatively nascent. Lord Tim Clement-Jones stresses that this is an area where we will need some sort of global agreement in the long run.

International adoption of AI standards and ethics

Lord Tim Clement-Jones thinks it incredibly important to have a set of criteria that researchers, developers and those building AI agree to follow, so that AI design continues on an ethical footing. Implementing specific ideas can help developers to create AI that is more helpful than harmful. He focuses on the idea that AI should be beneficial, transparent, unbiased and not destructive. He believes that if we hold true to these ideals in design, we can create AI that is useful for society but does not put anyone at risk of a disadvantage.

One thing that he is particularly worried about is the notion that if people become fearful of the technology, they will ultimately stifle innovation. It is his hope that by placing an emphasis on creating ethically designed AI systems, people will feel more comfortable with it being used. In fact, some organizations such as the OECD have created a set of AI principles, adopted by member countries including the UK, to help create international guidelines for all to follow.

Some people are concerned that AI will be taking their jobs. What we’ve seen is that AI is not a job killer, but a job category killer. Lord Tim Clement-Jones believes that if we can focus on how AI can help citizens and help society there should be no real reason to fear this technology. A big point of concern with AI technology is this notion that artificial intelligence will replace the need for humans. Lord Tim Clement-Jones believes that this will only be a concern if companies put a focus on productivity over actual business transformation. There are plenty of jobs and tasks that AI can take over, particularly ones focused around busywork. 

However, that does not mean that there will necessarily be fewer jobs overall. He believes that the industry will create new and different jobs and that the world will rise to meet the occasion. In fact, we’ve seen this happen with other transformative technologies as well. If anything, his big area of concern is the potential impact on on-the-job training and learning. While it is true that technology can make some jobs and tasks more efficient, it can also cut into the time employees used to spend connecting with and learning from their more experienced peers. For example, technology and AI are helping law firms by taking on certain tasks; however, without junior lawyers performing these tasks, opportunities for them to learn are taken away. If we can meet the training and on-the-job learning needs through other means, this should not be a huge problem.

Another area of potential concern is the potential for negative outcomes due to dependencies on AI technologies. He points out that airplane pilots are now less than pleased that the cockpit is mostly an automated experience, meaning they don’t necessarily spend much time using their skills in flight. Because of that, automation has the potential to create a knowledge gap, or simply to let skilled individuals get rusty. When you consider that these employees only need their skills in the event that something goes wrong, it is easy to see how a frightening scenario might play out. If a skilled person does not regularly exercise those skills until the worst possible moment, when they suddenly become necessary, the outcome can be disastrous.

As a whole, though, Lord Tim Clement-Jones believes that the future of AI is bright. He stresses that AI can do much good for us and help us to improve the quality of our world. There are endless potential benefits to this technology. However, because of the potential for abuse and the raw power of these systems, we simply must take steps to ensure that its development remains an ethical process. For now, it seems obvious that AI is a transformative technology that will widely impact a range of industries, governments and society as a whole. As we move forward, it will take many conversations between countries and businesses alike to ensure that the future is a bright one.

https://www.forbes.com/sites/cognitiveworld/2020/04/12/the-united-kingdoms-role-in-the-future-of-ai/?sh=7a154382768d


The rise of AI marks an opportunity for radical changes in corporate governance

Lord C-J NSTech Jan 2020

There is currently a great deal of concern in Britain and the EU more widely about the implications of the adoption of artificial intelligence (AI), particularly in algorithmic decision making and prediction in the public sector, notably in policing and the criminal justice system, and in the use of live facial recognition technology in public places.

As a result there has been pressure to set out much clearer guidelines, beyond general ethical codes, for the use of these technologies by government and its agencies.

But even if we get things right in the public sector, businesses have responsibility too, both those who develop AI and those who adopt it. Even in its narrow form, AI will, and should, have a profound impact on and implications for corporate governance generally.

Trade organisations such as TechUK and specific AI organisations such as the Partnership on AI (comprised of major tech companies and NGOs) recognise that corporate responsibility and governance on AI is increasingly important.

There is a growing corpus of corporate governance work relating to AI and the ethics of its application in business. Asset managers such as Hermes and Fidelity are now adopting guidance for the companies they invest in.

The Institute of Business Ethics’ report “Corporate Ethics in a Digital Age” is a masterly briefing for boards written by Peter Montagnon, formerly chair of the IBA Investment Committee, who sadly died the week after its launch.

But he left behind a very important piece of work, together with the vital message that boards should be in control of, and accountable for, the application of AI in their business, and should have the skillsets to enable them to do so.

The Tech Faculty of the ICAEW has produced a valuable paper on New Technologies, Ethics and Accountability. The bottom line is that we need to operationalize the ethics and ingrain ethical behaviour. It sets out a number of questions which boards should be asking themselves.

It is imperative that boards have the right skill sets to fulfil their oversight role. For instance, do they understand what technology is being used in their company and how it is being used and managed, for example by HR in recruitment and assessment? Do they have strong lines of accountability for the introduction and impact of AI?

Boards need to be aware of the questions they should ask and the advice they need and from whom. They need to consider what tools they have available, such as:

  • Algorithm impact assessments / algorithm assurance
  • Risk assessment / ethical audit mechanisms / kitemarking
  • Ethics by design, metrics, standards for “training, testing and fixing”

Risk management is central to the introduction of new technology. Does a company mainstream oversight into its Audit and Risk Committee or set up an Ethics Advisory Board? It has even been suggested by Christian Voegtlin, associate professor in corporate social responsibility at Audencia Business School, that there should be a chief philosophy officer to ensure adherence to ethical standards.

Is an AI-adopting business taking full advantage of the rapidly growing concept of regulatory sandboxing? This means a regulator, such as our Financial Conduct Authority, permitting the testing of a new technology without the threat of regulatory enforcement but with strict oversight and individual formal and informal guidance from the regulator.

Some make an analogy with the application of professional medical ethics. We take these for granted, but should individual AI engineers be explicitly required to declare their adherence to a set of ethical standards along the lines of a new tech Hippocratic Oath? This could apply to AI adopters as well as developers.

More broadly and more significantly, however, AI can and should contribute positively to a purposeful form of capitalism which is not simply the pursuit of profit but where companies deploy AI in an ethical way, to achieve greater sustainability and a fairer distribution of power and wealth.

We have seen the high-level sets of AI ethics developed by bodies like the EU, the OECD, the G20 and the Partnership on AI. These are very comprehensive and provide the basis for a common set of international standards.

In the words of the title of Brent Mittelstadt’s recent Nature paper, however, “Principles alone cannot guarantee ethical AI”. We need to develop alongside them a much more socially responsible form of corporate governance.

Dr Maha Hosain Aziz in her recent book “Future World Order” talks of the need for a new social contract between tech companies and citizens. I think we need to go further however.

It is not just the tech companies where the issues identified by Rana Foroohar in “Don’t Be Evil: The Case Against Big Tech” are relevant. They also extend to: “Digital property rights, privacy laws, antitrust rules, free speech, the legality of surveillance, the implications of data for economic competitiveness and national security, the impact of the algorithmic disruption of work on labor markets, the ethics of artificial intelligence and the health and well being of users of digital technology.”

As Foroohar says, “[when] we think about how to harness the power of technology for the public good, rather than the enrichment of a few companies, we must make sure that the leaders of those companies aren’t the only ones to have a say in what the rules are.”

The Big Innovation Centre has played a leading role in the debate with its “Purposeful Company Project”, which was launched back in 2015 with an ethos that “the role of business is to fulfil human wants and needs and to pursue a purpose that has a clear benefit to society. It is through the fulfilment of their chosen purpose that value is created.”

Since then, it has produced several important reports on the need for an integrated regulatory approach to stewardship and intrinsic purpose definition, and on the changes that should be made to the Financial Reporting Council’s UK Stewardship Code.

With all the potential opportunities and disruption involved with AI, this work is now absolutely crucial to ensure that businesses don’t adopt new technologies without a strong underlying set of corporate values, so that it is not just shareholders who benefit, and the impact and distribution of benefits to employees and society at large are fully considered.

We of course can’t confine these ethical challenges to the UK. We need ethical alignment in a global world. I hope we will both adopt the international principles which have been developed and, by the same token, argue for the international adoption of the purposeful company principles we are developing in the UK.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All Party Parliamentary Group on AI.

https://tech.newstatesman.com/business/ai-corporate-governance

The government’s approach to algorithmic decision-making is broken: here’s how to fix it

Lord C-J NSTech Feb 2020

I recently initiated a debate in the House of Lords asking whether the government had fully considered the implications of decision-making and prediction by algorithm in the public sector.

Over the past few years we have seen a substantial increase in the adoption of algorithmic decision-making and prediction or ADM across central and local government. An investigation by the Guardian last year showed some 140 of 408 councils in the UK are using privately-developed algorithmic ‘risk assessment’ tools, particularly to determine eligibility for benefits and to calculate entitlements. Experian, one of the biggest providers of such services, secured £2m from British councils in 2018 alone, as the New Statesman revealed last July.

Data Justice Lab research in late 2018 showed 53 out of 96 local authorities and about a quarter of police authorities are now using algorithms for prediction, risk assessment and assistance in decision-making. In particular we have the Harm Assessment Risk Tool (HART) system used by Durham Police to predict re-offending, which was shown by Big Brother Watch to have serious flaws in the way its use of profiling data introduces bias, discrimination and dubious predictions.

There’s a lack of transparency in algorithmic decision-making across the public sector

Central government use is more opaque, but HMRC, the Ministry of Justice and the DWP are the highest spenders on digital, data and algorithmic services. A key example of ADM use in central government is the DWP’s much-criticised Universal Credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group, in its study “The Computer Says No”, shows that those accessing their online account are not being given adequate explanation as to how their entitlement is calculated.

We know that the Department for Work and Pensions has hired nearly 1,000 new IT staff in the past two years, and has increased spending to about £8m a year on a specialist “intelligent automation garage” where computer scientists are developing over 100 welfare robots, deep learning and intelligent automation for use in the welfare system. As part of this, it intends, according to the National Audit Office, to develop “a fully automated risk analysis and intelligence system on fraud and error”.

The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system a year ago and said in a statement afterwards: “Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.”

It is clear that failure to properly regulate these systems risks embedding the bias and inaccuracy inherent in systems developed in the US such as Northpointe’s COMPAS risk assessment programme in Florida or the InterRAI care assessment algorithm in Arkansas.

These issues have been highlighted by Liberty and Big Brother Watch in particular.

Even when ADM is not used on its own, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.

As the 2018 report by the AI Now Institute at New York University says: “While individual human assessors may also suffer from bias or flawed logic, the impact of their case-by-case decisions has nowhere near the magnitude or scale that a single flawed automated decision-making system can have across an entire population.”

Last March, the Committee on Standards in Public Life decided to carry out a review of AI in the public sector to understand the implications of AI for the seven Nolan principles of public life and examine if government policy is up to the task of upholding standards as AI is rolled out across our public services. The committee chair Lord Evans said on recently publishing the report:

“Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector….

“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

The report found that despite the GDPR, the Data Ethics Framework, the OECD principles and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance, and should be.

See also: Will the government’s new AI procurement guidelines actually work?

The Committee’s report presents a number of recommendations to mitigate these risks, including greater transparency by public bodies in use of algorithms, new guidance to ensure algorithmic decision-making abides by equalities law, the creation of a single coherent regulatory framework to govern this area, the formation of a body to advise existing regulators on relevant issues, and proper routes of redress for citizens who feel decisions are unfair.

Some of the current issues with algorithmic decision-making were identified as far back as our House of Lords Select Committee report “AI in the UK: Ready, Willing and Able?” in 2018. We said: “We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.”

It was clear from the evidence that our own AI Select Committee took that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a “right to an explanation” provision when an individual has been subject to fully automated decision-making. Few highly significant decisions, however, are fully automated — often, algorithms are used as decision support, for example in detecting child abuse. The law should also cover systems where AI is only part of the final decision.

A legally enforceable “right to explanation”

The Science and Technology Select Committee Report “Algorithms in Decision-Making” of May 2018, made extensive recommendations.

It urged the adoption of a legally enforceable “right to explanation” that allows citizens to find out how machine-learning programmes reach decisions affecting them – and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to publicly declare where and how they use them.

Subsequently, the Law Society, in its report last June about the use of AI in the criminal justice system, also expressed concern and recommended measures for oversight, registration and mitigation of risks in the justice system.

Last year ministers commissioned the AI Adoption Review, designed to assess the ways artificial intelligence could be deployed across Whitehall and the wider public sector. Yet, as NS Tech revealed in December, the government is now blocking the full publication of the report and has only provided a heavily redacted version. How, if at all, does the Government’s adoption strategy fit with the Guidelines for Using Artificial Intelligence in the Public Sector, published by the Government Digital Service and the Office for AI last June, and the further guidance on AI procurement in October, derived from work by the World Economic Forum?

See also: Government blocks full publication of AI review

We need much greater transparency about current deployment, plans for adoption and compliance mechanisms.

Nesta, for instance, in its report “Decision-making in the Age of the Algorithm” last year, set out a comprehensive set of principles to inform human-machine interaction in public sector use of algorithmic decision-making, which go well beyond the government guidelines.

This, as Nesta say, is designed to introduce tools in a way which:

  • Is sensitive to local context
  • Invests in building practitioner understanding
  • Respects and preserves practitioner agency.

As they also say: “The assumption underpinning this guide is that public sector bodies will be working to ensure that the tool being deployed is ethical, high quality and transparent.”

It is high time that a minister was appointed — as recommended by the Commons Science and Technology Committee — with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector. Standards should be set for design, mandatory bias testing and audit, together with a register of algorithmic systems in use and proper routes of redress. This is particularly important for systems used by the police and criminal justice system in decision-making processes.

Putting the Centre for Data Ethics and Innovation on a statutory basis

The Centre for Data Ethics and Innovation should have an important advisory role in all this; it is doing important work on algorithmic bias which will help inform government and regulators. But it now needs to be put on a statutory basis as soon as possible.

It could also consider whether, as part of a package of measures Big Brother Watch has suggested, we should:

  • Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
  • Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes (a minimal sketch of such a test follows this list).
  • Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.
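As flagged above, here is a minimal sketch of the kind of check a mandatory bias-testing requirement might build on: a demographic parity comparison of outcome rates between groups. The data, column names and the five-percentage-point threshold are all hypothetical placeholders, and a real bias audit would use several complementary metrics:

```python
# Illustrative only: a minimal demographic-parity check. All data, column
# names and the threshold below are hypothetical placeholders.

import pandas as pd


def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())


# Toy example: decisions produced by a hypothetical risk-assessment tool
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
if gap > 0.05:  # threshold a regulator might set; purely illustrative
    print(f"Warning: favourable-outcome rates differ by {gap:.0%} between groups")
```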

If we do not act soon we will find ourselves in the same position as the Netherlands where there was a recent decision that an algorithmic risk assessment tool (“SyRI”) used to detect welfare fraud breached article 8 of the ECHR. The Legal Education Foundation has looked at similar algorithmic ‘risk assessment’ tools used by some local authorities in the UK for certain welfare benefit claims and has concluded that there is a very real possibility that the current use of governmental automated decision-making is breaching the existing equality law framework in the UK, and is “hidden” from sight due to the way in which the technology is being deployed.

There is a problem with double standards here too. Government behaviour is in stark contrast with the approach of the ICO’s draft guidance “Explaining decisions made with AI”, which highlights the need to comply with equalities legislation and administrative law.

Last March, when I asked an oral question on the subject of ADM, the relevant minister agreed that it had to be looked at “fairly urgently”. It is currently difficult to discern any urgency, or even who is taking responsibility for decisions in this area. We need at the very least to find out urgently where the accountability lies and press for comprehensive action without further delay.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All Party Parliamentary Group on AI

https://tech.newstatesman.com/guest-opinion/algorithmic-decision-making

Don’t trade away our valuable national data assets

Lord C-J NSTech October 2020

Data is at the heart of the global digital economy, and the tech giants hold vast quantities of it.

The Centre for European Policy Studies think tank recently estimated that 92 per cent of the western world’s data is now held in the US. The Cisco Global Cloud Index estimates that by 2021, 94 per cent of what are called workloads and compute instances will be processed by cloud platforms, whilst only 6 per cent will be processed by traditional data centres. This will potentially lead to vast concentrations of data being held by a very few cloud vendors (predominantly AWS, Microsoft, Google and Alibaba).

NHS data in particular is a precious commodity, especially given the many transactions between technology, telecoms and pharma companies concerning it. In a recent report, the professional services firm EY estimated that NHS data could be worth around £10bn a year in the benefit delivered.

The Department for Health and Social Care is preparing to publish its National Health and Care Data Strategy this Autumn, in which it is expected to prioritise the “Safe, effective and ethical use of data-driven technologies, such as artificial intelligence, to deliver fairer health outcomes”. Health professionals have strongly argued that free trade deals risk compromising the safe storage and processing of NHS data.

We must ensure that it is the NHS, rather than the US tech giants and drug companies, that reap the benefit of all this data. Last year, it was revealed that pharma companies Merck, Bristol Myers Squibb and Eli Lilly paid the government for licences costing up to £330,000 each, in return for anonymised health data.

Harnessing the value of healthcare data must be allied with ensuring that adequate protections are put in place in trade agreements if that value isn’t to be given or traded away.

There is also the need for data adequacy to ensure that personal data transfers to third countries outside the EU are protected, in line with the principles of the GDPR. In July, in the case of Schrems II, the European Court of Justice ruled that the Privacy Shield framework, which allowed data transfers between the US, the UK and the EU, was invalid. That has been compounded by the ECJ judgement this month in the case brought by Privacy International.

The European Court of Justice’s recent invalidation of the EU/US Privacy Shield also cast doubt on the effectiveness of Standard Contractual Clauses (SCCs) as a legal framework to ensure an adequate level of data protection in third countries – with the European Data Protection Board recommending that the determination of adequacy be risk-assessed on a case-by-case basis by data controllers.

Given that the majority of US cloud providers are subject to US surveillance law, few transfers based on the SCCs are expected to pass the test. This will present a challenge for the UK government, given the huge amounts of data it is storing with US companies.

There is a danger, however, that the UK will fall behind Europe and the rest of the world unless it takes back control of its data and begins to invest in its own cloud capabilities.

There is a common assumption that apart from any data adequacy issues, data stored in the UK is subject only to UK law. This is not the case. In practice, data may be resident in the UK, but it is still subject to US law. In March 2018, the US government enacted the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which allows law enforcement agencies to demand access to data stored on servers hosted by US-based tech firms, such as Amazon Web Services, Microsoft and Google, regardless of the data’s physical location and without issuing a request for mutual legal assistance.

NHSX for example has a cloud contract with AWS. AWS’s own terms and conditions do not commit to keeping data in the region selected by government officials if AWS is required by law to move the data elsewhere in the world.

Key (and sensitive) aspects of government data, such as security and access roles, rules, usage policies and permissions may also be transferred to the US without Amazon having to seek advance permission. Similarly, AWS has the right to access customer data and provide support services from anywhere in the world.

The Government Digital Service team, which sets government digital policy, gives no guidance on where government data should be hosted: it simply states that all data categorised as “Official” (the vast majority of government data, including law enforcement, biometric and patient data) is suitable for public cloud, and instructs its own staff simply to “use AWS”. The costs of AWS services vary widely depending on the region selected, and the UK is one of the most expensive “regions”. Regions are selected by technical staff, rather than procurement or security teams.
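For illustration, region selection in AWS is an explicit choice made in code or configuration by engineers. A hypothetical sketch using boto3, the AWS SDK for Python (the bucket name is invented, and running this requires AWS credentials):

```python
# Illustrative only: pinning data to the AWS London region with boto3.
# Nothing in the default tooling forces a UK region; it must be chosen.

import boto3

# Every client is bound to a region; eu-west-2 is AWS's London region.
s3 = boto3.client("s3", region_name="eu-west-2")

# Buckets are created in the stated region, not wherever is cheapest.
s3.create_bucket(
    Bucket="example-departmental-data",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
)
```

The point is that hosting location is only as deliberate as the policy behind it; absent guidance, it defaults to an engineering convenience rather than a procurement or security decision.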

So the procurement of data processing and storage services must be considered as carefully as the way Government uses data. A breakdown in public trust in the Government’s ability to secure data, due to hacks, foreign government interventions or breaches of data protection regulation, would deprive us of the full benefits of using cloud services and stifle UK investment and innovation in data handling.

It follows that if we are to obtain the maximum public benefit from our data, we need to hold government to account to ensure that it isn’t simply handing contracts to suppliers, such as AWS, who are subject to the CLOUD Act. Specifically, we need to ensure genuine sovereignty of NHS data, and that it is monetised in a safe way focused on benefitting the NHS and our citizens.

With a new National Data Strategy in the offing there is now the opportunity for the government to maximise the opportunities afforded through the collection of data and position the UK as leader in data capability and data protection. We can do this and restore credibility and trust through:

  • Guaranteeing greater transparency of how patient data is handled, where it is stored and with whom and what it is being used for
  • Appropriate and sufficient regulation that strikes the right balance between credibility, trust, ethics and innovation
  • Ensuring service providers that handle patient data operate within a tight ethical framework
  • Ensuring that the UK’s data protection regulation isn’t watered down as a consequence of Brexit
  • Making the UK the safest place in the world to process and store data

In delivering this last objective there is a real opportunity for government to lead by example – not just for the UK, but for the rest of the world – by developing its own sovereign data capability. A UK national cloud built on robust technical, ethical, jurisdictional and regulatory standards would be inclusive and multi-vendor by nature, and could be available to government and industry alike.

A UK cloud could create a huge national capability by enabling collaboration through data and intelligence sharing. It would underpin new data-driven industries in the UK, bolster national security, grow the economy and strengthen the exchequer.

As a demonstration of what can be done, in October 2019 the German government announced Gaia-X, following warnings from German lawmakers and industry leaders that Germany was too dependent on foreign-owned digital infrastructure. The initiative aims to restore sovereignty to German data and address growing alarm over the reliance of industry, governments and police forces on US cloud providers. Gaia-X has growing support in Europe, and EU member states have made a joint declaration on cloud – effectively the development of an EU cloud capability.

Retention of control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator. That is why, as part of the new trade legislation being put in place, clear safeguards are needed to ensure that in trade deals our publicly held data is safe from exploitation, except as determined by our own government’s democratically taken decisions.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All-Party Parliamentary Group on AI.

https://tech.newstatesman.com/cloud/dont-trade-away-our-valuable-national-data-assets

AI technology urgently needs proper regulation beyond a voluntary ethics code

Lord C-J House Magazine February 2020

We already have the most comprehensive CCTV coverage in the Western world; add artificial intelligence-driven live facial recognition and you have all the makings of a surveillance state, writes Lord Clement-Jones.

In recent months live facial recognition technology has been much in the news.

Despite having been described as ‘potentially Orwellian’ by the Metropolitan Police Commissioner, and ‘deeply concerning’ by the Information Commissioner, the Met has now announced its widespread adoption.

The Ada Lovelace Institute, in its report Beyond Face Value, raised similar concerns.

The Information Commissioner has been consistent in her call for a statutory code of practice to be in place before facial recognition technology can be safely deployed by police forces, saying: “Never before have we seen technologies with the potential for such widespread invasiveness... The absence of a statutory code that speaks to the challenges posed by LFR will increase the likelihood of legal failures and undermine public confidence.”

I and my fellow Liberal Democrats share these concerns. We already have the most comprehensive CCTV coverage in the Western world. Add to that artificial intelligence-driven live facial recognition and you have all the makings of a surveillance state.

The University of Essex, in its independent report last year, demonstrated the inaccuracy of the technology being used by the Met. Analysis of six trials found that 80 per cent of the people the technology flagged as “wanted” were in fact innocent.
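To see how a system can produce so many false matches even when its underlying accuracy sounds high, consider a back-of-the-envelope calculation (the figures below are invented for illustration and are not taken from the Essex report):

```python
# Hypothetical illustration of the base-rate problem behind such figures.
# Assumptions (invented): 100,000 faces scanned, 10 of them genuinely on the
# watch list, and a system that is right 99% of the time in both directions.
scanned = 100_000
wanted = 10
innocent = scanned - wanted

true_matches = 0.99 * wanted      # wanted people correctly flagged (~10)
false_matches = 0.01 * innocent   # innocent people wrongly flagged (~1,000)

share_innocent = false_matches / (true_matches + false_matches)
print(f"{share_innocent:.0%} of flagged matches are innocent people")  # ~99%
```

Because almost everyone walking past the camera is innocent, even a small per-face error rate swamps the handful of genuine matches – which is why headline accuracy claims say little about how the technology performs on a real street.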

Even the Home Office’s own Biometrics and Forensics Ethics Group has questioned the accuracy of live facial recognition technology and noted its potential for biased outputs and biased decision-making on the part of system operators.

As a result, the Science and Technology Select Committee last year recommended an immediate moratorium on its use until concerns over the technology’s effectiveness and potential for bias have been fully resolved.

To make matters worse, in answer to a recent parliamentary question Baroness Williams of Trafford outlined the types of people who can be included on a watch list for this technology: persons wanted on warrants, individuals who are unlawfully at large, persons suspected of having committed crimes, persons who may be in need of protection, individuals whose presence at an event causes particular concern, and vulnerable persons.

It is chilling not only that this technology is in place and being used, but that the government has already arbitrarily decided on whom it is legitimate to use it.

A moratorium is therefore a vital first step. We need to put a stop to this unregulated invasion of our privacy and have a careful review.

I have now tabled a private member’s bill which first legislates for a moratorium and then institutes a review of the use of the technology, with minimum terms of reference covering:

  • the equality and human rights implications of the use of automated facial recognition technology
  • the data protection implications of the use of that technology
  • the quality and accuracy of the technology
  • the adequacy of the regulatory framework governing how data is or would be processed and shared between entities involved in the use of facial recognition
  • recommendations for addressing issues identified by the review

At that point we can debate if or when its use is appropriate, and whether and how to regulate it. This might mean an absolute restriction, or permitting certain uses where regulation ensures privacy safeguards are in place, together with full impact assessments and audits.

The Lords AI Select Committee I chaired recommended the adoption of a set of ethical principles for the development of AI applications, believing that in the main voluntary compliance was the way forward. But certain technologies need proper regulation now, beyond a voluntary ethics code. This is one such example, and it is urgent.

Lord Clement-Jones is a Liberal Democrat Member of the House of Lords and Liberal Democrat Lords Spokesperson for Digital.

https://www.politicshome.com/thehouse/article/ai-technology-urgently-needs-proper-regulation-beyond-a-voluntary-ethics-code


No room for government complacency on artificial intelligence, says new Lords report

Friday 18 December 2020

The Government needs to better coordinate its artificial intelligence (AI) policy and the use of data and technology by national and local government.

  • The increase in reliance on technology caused by the COVID-19 pandemic has highlighted the opportunities and risks associated with the use of technology, and in particular data. The Government must take active steps to explain to the general public how AI uses their personal data.
  • The Government must take immediate steps to appoint a Chief Data Officer, whose responsibilities should include acting as a champion for the opportunities presented by AI in the public service, and ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.
  • A problem remains with the general digital skills base in the UK: around 10 per cent of UK adults were non-internet users in 2018. The Government should take steps to ensure that the UK’s digital skills are brought up to speed, and that people have the opportunity to reskill and retrain to adapt to the labour market changes brought about by AI.
  • AI will become embedded in everything we do. It will not necessarily make huge numbers of people redundant, but when the COVID-19 pandemic recedes and the Government has to address its economic impact, the nature of work will change and there will be a need for different jobs and skills. This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. In particular, the AI Council should identify the industries most at risk and the skills gaps in those industries. A specific national training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
  • The Centre for Data Ethics and Innovation (CDEI) should establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses.
  • For its part, the Information Commissioner’s Office (ICO) must develop a training course for use by regulators to give their staff a grounding in the ethical and appropriate use of public data and AI systems, and their opportunities and risks. Such training should be prepared with input from the CDEI, the Government’s Office for AI and the Alan Turing Institute.
  • The Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with that of its international partners: doing so must be a first priority for the Centre once it is established.
  • The UK remains an attractive place to learn, develop and deploy AI. The Government must ensure that changes to the immigration rules promote, rather than obstruct, the study, research and development of AI.

There is also now a clear consensus that ethical AI is the only sustainable way forward. The time has come for the Government to move from deciding what the ethics are, to how to instil them in the development and deployment of AI systems.

These are the main conclusions of the House of Lords Liaison Committee’s report, AI in the UK: No Room for Complacency, published today, 18 December.

This report examines the progress made by the Government in the implementation of the recommendations made by the Select Committee on Artificial Intelligence in its 2018 report AI in the UK: ready, willing and able?

Lord Clement-Jones, who was Chair of the Select Committee on Artificial Intelligence, said:

“The Government has done well to establish a range of bodies to advise it on AI over the long term. However, we caution against complacency. There must be more and better coordination, and it must start at the top.

“A Cabinet Committee must be created whose first task should be to commission and approve a five-year strategy for AI. The strategy should prepare society to take advantage of AI rather than be taken advantage of by it.

“The Government must lead the way on making ethical AI a reality. To not do so would be to waste the progress it has made to date, and to squander the opportunities AI presents for everyone in the UK.”

Other findings and conclusions are set out in the full report:

https://www.parliament.uk/business/lords/media-centre/house-of-lords-media-notices/2020/december-2020/no-room-for-government-complacency-on-artificial-intelligence-says-new-lords-report/