Lord Clement-Jones, NS Tech, February 2020

I recently initiated a debate in the House of Lords asking whether the government had fully considered the implications of decision-making and prediction by algorithm in the public sector.

Over the past few years we have seen a substantial increase in the adoption of algorithmic decision-making and prediction (ADM) across central and local government. An investigation by the Guardian last year showed that some 140 of 408 councils in the UK are using privately developed algorithmic ‘risk assessment’ tools, particularly to determine eligibility for benefits and to calculate entitlements. Experian, one of the biggest providers of such services, secured £2m from British councils in 2018 alone, as the New Statesman revealed last July.

Data Justice Lab research in late 2018 showed that 53 out of 96 local authorities and about a quarter of police authorities are now using algorithms for prediction, risk assessment and assistance in decision-making. A notable example is the Harm Assessment Risk Tool (HART) used by Durham Police to predict re-offending, which Big Brother Watch has shown to have serious flaws in the way its use of profiling data introduces bias, discrimination and dubious predictions.

There’s a lack of transparency in algorithmic decision-making across the public sector

Central government use is more opaque, but HMRC, the Ministry of Justice and the DWP are the highest spenders on digital, data and algorithmic services. A key example of ADM use in central government is the DWP’s much-criticised Universal Credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group, in its study “The Computer Says No”, shows that those accessing their online accounts are not given an adequate explanation of how their entitlement is calculated.

We know that the Department for Work and Pensions has hired nearly 1,000 new IT staff in the past two years, and has increased spending to about £8m a year on a specialist “intelligent automation garage” where computer scientists are developing over 100 welfare robots, deep learning and intelligent automation for use in the welfare system. As part of this, it intends, according to the National Audit Office, to develop “a fully automated risk analysis and intelligence system on fraud and error”.

The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system a year ago and said in a statement afterwards: “Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.”

It is clear that failure to properly regulate these systems risks embedding the bias and inaccuracy inherent in systems developed in the US such as Northpointe’s COMPAS risk assessment programme in Florida or the InterRAI care assessment algorithm in Arkansas.

These issues have been highlighted by Liberty and Big Brother Watch in particular.

Even where ADM is not the sole basis for a decision, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.

As the 2018 report by the AI Now Institute at New York University says: “While individual human assessors may also suffer from bias or flawed logic, the impact of their case-by-case decisions has nowhere near the magnitude or scale that a single flawed automated decision-making system can have across an entire population.”

Last March, the Committee on Standards in Public Life decided to carry out a review of AI in the public sector to understand the implications of AI for the seven Nolan principles of public life, and to examine whether government policy is up to the task of upholding standards as AI is rolled out across our public services. The committee’s chair, Lord Evans, said on the recent publication of its report:

“Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector….

“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

The report found that, despite the GDPR, the Data Ethics Framework, the OECD principles and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance and should be.


The Committee’s report presents a number of recommendations to mitigate these risks, including greater transparency by public bodies in use of algorithms, new guidance to ensure algorithmic decision-making abides by equalities law, the creation of a single coherent regulatory framework to govern this area, the formation of a body to advise existing regulators on relevant issues, and proper routes of redress for citizens who feel decisions are unfair.

Some of the current issues with algorithmic decision-making were identified as far back as 2018, in our House of Lords Select Committee report “AI in the UK: Ready, Willing and Able?”. We said: “We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.”

It was clear from the evidence that our own AI Select Committee took that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a “right to an explanation” provision that applies only when an individual has been subject to fully automated decision-making. Few highly significant decisions, however, are fully automated; algorithms are more often used as decision support, for example in detecting child abuse. The law should also cover systems where AI is only part of the final decision.

A legally enforceable “right to explanation”

The Science and Technology Select Committee Report “Algorithms in Decision-Making” of May 2018, made extensive recommendations.

It urged the adoption of a legally enforceable “right to explanation” that allows citizens to find out how machine-learning programmes reach decisions affecting them, and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to publicly declare where and how they use them.

Subsequently, the Law Society, in its report last June on the use of AI in the criminal justice system, also expressed concern and recommended measures for oversight, registration and mitigation of risk in that system.

Last year ministers commissioned the AI Adoption Review, designed to assess the ways artificial intelligence could be deployed across Whitehall and the wider public sector. Yet, as NS Tech revealed in December, the government is now blocking the full publication of the report and has only provided a heavily redacted version. How, if at all, does the government’s adoption strategy fit with the Guidelines for Using Artificial Intelligence in the Public Sector, published by the Government Digital Service and the Office for AI last June, and with the further guidance on AI procurement published in October, derived from work by the World Economic Forum?


We need much greater transparency about current deployment, plans for adoption and compliance mechanisms.

Nesta, for instance, in its report last year, “Decision-making in the Age of the Algorithm”, set out a comprehensive set of principles to inform human-machine interaction in public sector use of algorithmic decision-making, which go well beyond the government guidelines.

This, as Nesta say, is designed to introduce tools in a way which:

  • Is sensitive to local context
  • Invests in building practitioner understanding
  • Respects and preserves practitioner agency.

As they also say: “The assumption underpinning this guide is that public sector bodies will be working to ensure that the tool being deployed is ethical, high quality and transparent.”

It is high time that a minister was appointed — as recommended by the Commons Science and Technology Committee — with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the wider public sector. Those standards should cover design, mandatory bias testing and audit, together with a register of algorithmic systems in use and proper routes of redress. This is particularly important for systems used by the police and criminal justice system in decision-making processes.

Putting the Centre for Data Ethics and Innovation on a statutory basis

The Centre for Data Ethics and Innovation should have an important advisory role in all this; it is doing important work on algorithmic bias which will help inform government and regulators. But it now needs to be put on a statutory basis as soon as possible.

It could also consider whether, as part of a package of measures, we should, as Big Brother Watch has suggested:

  • Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
  • Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes.
  • Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.

If we do not act soon we will find ourselves in the same position as the Netherlands, where a court recently ruled that an algorithmic risk assessment tool (“SyRI”) used to detect welfare fraud breached Article 8 of the ECHR. The Legal Education Foundation has examined similar algorithmic ‘risk assessment’ tools used by some local authorities in the UK for certain welfare benefit claims, and has concluded that there is a very real possibility that current governmental use of automated decision-making is breaching the existing equality law framework in the UK, and that it is “hidden” from sight because of the way the technology is being deployed.

There is a problem of double standards here too. The government’s behaviour is in stark contrast with the approach of the ICO’s draft guidance, “Explaining decisions made with AI”, which highlights the need to comply with equalities legislation and administrative law.

Last March, when I asked an oral question on the subject of ADM, the relevant minister agreed that it had to be looked at “fairly urgently”. It is currently difficult to discern any urgency, or even who is taking responsibility for decisions in this area. We need, at the very least, to establish urgently where accountability lies, and to press for comprehensive action without further delay.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All-Party Parliamentary Group on AI.