Constitution Society Blog, March 2021

Lord Clement-Jones CBE is the House of Lords Liberal Democrat Spokesperson for Digital and former Chair of the House of Lords Select Committee on Artificial Intelligence (2017-2018).

Tackling the algorithm in the public sector

Algorithms in the public sector have certainly been much in the news since I raised the subject in a House of Lords debate last February. The use of algorithms in government – and more specifically, algorithmic decision-making – has come under increasing scrutiny.

The debate has become more intense since the UK government’s disastrous attempt to use an algorithm to determine A-level and GCSE grades in lieu of exams, which had been cancelled due to the pandemic. This is what the FT had to say last August, after the Ofqual exams debacle in which students were subjected to what has been described as unfair and unaccountable decision-making over their A-level grades:

‘The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure….’

It concluded: ‘Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts…. As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes.’ 

Over the past few years, we have seen a substantial increase in the adoption of algorithmic decision-making and prediction, or ADM, across central and local government. An investigation by the Guardian in late 2019 showed that some 140 of the 408 local authorities surveyed, and about a quarter of police authorities, were using computer algorithms for prediction, risk assessment and assistance in decision-making in areas such as benefit claims and the allocation of social housing, despite concerns about their reliability. According to the Guardian, nearly a year later that figure had risen to half of local councils in England, Wales and Scotland, many of them without any public consultation on the use of these tools.

Of particular concern are tools such as the Harm Assessment Risk Tool (HART) used by Durham Police to predict re-offending, which Big Brother Watch showed to have serious flaws in the way its use of profiling data introduces bias, discrimination and dubious predictions.

Central government use is even more opaque, but we know that HMRC, the Ministry of Justice and the DWP are the highest spenders on digital, data and algorithmic services.

A key example of ADM use in central government is the DWP’s much criticised Universal Credit system, which was designed to be digital by default from the outset. The Child Poverty Action Group study ‘The Computer Says No’ shows that those accessing their online account are not given an adequate explanation of how their entitlement is calculated.

The Joint Council for the Welfare of Immigrants (JCWI) and the campaigning organisation Foxglove joined forces last year to sue the Home Office over an allegedly discriminatory algorithmic system – the so-called ‘streaming tool’ – used to screen migration applications. This appears to be the first successful legal challenge to an algorithmic decision system in the UK, although rather than defend the system in court the Home Office decided to scrap the algorithm.

The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system two years ago and said in a statement afterwards: ‘Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.’

Overseas, the use of algorithms is even more extensive and, it should be said, controversial – particularly in the US. One such system is the NYPD’s Patternizr, a tool designed to identify potential future patterns of criminal activity. Others include Northpointe’s COMPAS risk-assessment programme in Florida and the InterRAI care assessment algorithm in Arkansas.

It is not as if we were not warned of the dangers of replicating historical bias in algorithmic decision-making, most notably in Cathy O’Neil’s Weapons of Math Destruction (2016) and Hannah Fry’s Hello World (2018).

It is clear that failure to properly regulate these systems risks embedding bias and inaccuracy. Even when not relying on ADM alone, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.

Some of the current issues with algorithmic decision-making were identified as far back as our House of Lords Select Committee Report ‘AI in the UK: Ready Willing and Able?’ in 2018. We said at the time: ‘We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.’

It was clear from the evidence taken by our own AI Select Committee that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a ‘right to an explanation’ provision that applies when an individual has been subject to fully automated decision-making. However, few highly significant decisions are fully automated; algorithms are often used as decision support, for example in detecting child abuse. The law should be expanded to cover systems in which AI forms only part of the final decision.

The Science and Technology Select Committee report ‘Algorithms in Decision-Making’ of May 2018 made extensive recommendations in this respect. It urged the adoption of a legally enforceable ‘right to explanation’ that allows citizens to find out how machine-learning programmes reach decisions affecting them – and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to declare publicly where and how they use them.

Last year, the Committee on Standards in Public Life published a review that looked at the implications of AI for the seven Nolan principles of public life, and examined whether government policy is up to the task of upholding standards as AI is rolled out across our public services.

The committee’s Chair, Lord Evans, said on publishing the report:

‘Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector…. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.’

The report found that, despite the GDPR, the Data Ethics Framework, the OECD principles and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance, and should be. The Committee’s report presented a number of recommendations to mitigate these risks, including:

  • greater transparency by public bodies in the use of algorithms;
  • new guidance to ensure algorithmic decision-making abides by equalities law;
  • the creation of a single coherent regulatory framework to govern this area;
  • the formation of a body to advise existing regulators on relevant issues;
  • and proper routes of redress for citizens who feel decisions are unfair.

In the light of the Committee on Standards in Public Life Report, it is high time that a minister was appointed with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector, as was also recommended by the Commons Science and Technology Committee. 

We also need to consider whether – as Big Brother Watch has suggested – we should:

  • Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
  • Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes.
  • Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.

This chimes with both the Mind the Gap report from the Institute for the Future of Work, which proposed an Accountability for Algorithms Act, and the Ada Lovelace Institute paper Can Algorithms Ever Make the Grade? Both reports additionally call for a public register of algorithms, such as those instituted in Amsterdam and Helsinki, and for independent external scrutiny to ensure the efficacy and accuracy of algorithmic systems.

Post-COVID, private and public institutions will increasingly adopt algorithmic or automated decision-making, and this will give rise to complaints requiring specialist skills beyond sectoral or data knowledge. The Centre for Data Ethics and Innovation (CDEI), in its report Bias in Algorithmic Decision Making, concluded that algorithmic bias means the overlap between discrimination law, data protection law and sector regulations is becoming increasingly important, and that existing regulators need to adapt their enforcement to algorithmic decision-making.

This is especially true of the existing and proposed public sector ombudsmen who are – or will be – tasked with dealing with complaints about algorithmic decision-making. They need to be staffed by specialists who can test algorithms’ compliance with ethically aligned design and operating standards and regulation.

There is no doubt that, to avoid unethical algorithmic decision-making becoming irretrievably embedded in our public services, we need to see this approach taken forward and the other crucial proposals discussed above enshrined in new legislation.

The Constitution Society is committed to the promotion of informed debate and is politically impartial. Any views expressed in this article are the personal views of the author and not those of The Constitution Society.
