Lord Clement-Jones, NSTech, January 2020

There is currently a great deal of concern in Britain, and more widely in the EU, about the implications of the adoption of artificial intelligence (AI), particularly in algorithmic decision-making and prediction in the public sector, notably in policing and the criminal justice system, and in the use of live facial recognition technology in public places.

As a result, there has been pressure to set out much clearer guidelines, beyond general ethical codes, for the use of these technologies by government and its agencies.

But even if we get things right in the public sector, businesses have a responsibility too: both those who develop AI and those who adopt it. AI, even in its narrow form, will and should have profound implications for corporate governance generally.

Trade organisations such as TechUK, and specific AI organisations such as the Partnership on AI (comprising major tech companies and NGOs), recognise that corporate responsibility and governance on AI are increasingly important.

There is a growing corpus of corporate governance work relating to AI and the ethics of its application in business. Asset managers such as Hermes and Fidelity are now adopting guidance for the companies they invest in.

The Institute of Business Ethics’ report “Corporate Ethics in a Digital Age” is a masterly briefing for boards, written by Peter Montagnon, formerly chair of the IBA Investment Committee, who sadly died the week after its launch.

But he left behind a very important piece of work, together with the vital message that boards should be in control of, and accountable for, the application of AI in their business, and that they should have the skill sets to enable them to do so.

The Tech Faculty of the ICAEW has produced a valuable paper on New Technologies, Ethics and Accountability. The bottom line is that we need to operationalise ethics and ingrain ethical behaviour. The Faculty has set out a number of questions which boards should be asking themselves.

It is imperative that boards have the right skill sets to fulfil their oversight role. For instance, do they understand what technology is being used in their company, and how it is being used and managed, for example by HR in recruitment and assessment? Do they have strong lines of accountability for the introduction and impact of AI?

Boards need to be aware of the questions they should ask and the advice they need, and from whom. They need to consider what tools they have available, such as:

  • Algorithm impact assessments / algorithm assurance
  • Risk assessment / ethical audit mechanisms / kitemarking
  • Ethics by design; metrics; standards for “training, testing and fixing”

Risk management is central to the introduction of new technology. Does a company mainstream oversight into its Audit and Risk Committee, or set up an Ethics Advisory Board? It has even been suggested by Christian Voegtlin, associate professor in corporate social responsibility at Audencia Business School, that there should be a chief philosophy officer to ensure adherence to ethical standards.

Is an AI-adopting business taking full advantage of the rapidly growing practice of regulatory sandboxing? This means a regulator, such as our Financial Conduct Authority, permitting the testing of a new technology without the threat of regulatory enforcement, but with strict oversight and individual formal and informal guidance from the regulator.

Some draw an analogy with professional medical ethics. We take these standards for granted, but should individual AI engineers be explicitly required to declare their adherence to a set of ethical standards, along the lines of a new tech Hippocratic Oath? This could apply to AI adopters as well as developers.

More broadly and more significantly, however, AI can and should contribute positively to a purposeful form of capitalism: one which is not simply the pursuit of profit, but in which companies deploy AI in an ethical way to achieve greater sustainability and a fairer distribution of power and wealth.

We have seen the high-level sets of AI ethics principles developed by bodies such as the EU, the OECD, the G20 and the Partnership on AI. These are very comprehensive and provide the basis for a common set of international standards.

In the words of the title of Brent Mittelstadt’s recent Nature Machine Intelligence paper, however, “Principles alone cannot guarantee ethical AI”. We need to develop, alongside these principles, a much more socially responsible form of corporate governance.

Dr Maha Hosain Aziz, in her recent book “Future World Order”, talks of the need for a new social contract between tech companies and citizens. I think we need to go further, however.

It is not just to the tech companies that the issues identified by Rana Foroohar in “Don’t Be Evil: The Case Against Big Tech” are relevant. They also extend to: “Digital property rights, privacy laws, antitrust rules, free speech, the legality of surveillance, the implications of data for economic competitiveness and national security, the impact of the algorithmic disruption of work on labor markets, the ethics of artificial intelligence and the health and well-being of users of digital technology.”

As Foroohar says, “[when] we think about how to harness the power of technology for the public good, rather than the enrichment of a few companies, we must make sure that the leaders of those companies aren’t the only ones to have a say in what the rules are.”

The Big Innovation Centre has played a leading role in the debate with its “Purposeful Company Project”, which was launched back in 2015 with an ethos that “the role of business is to fulfil human wants and needs and to pursue a purpose that has a clear benefit to society. It is through the fulfilment of their chosen purpose that value is created.”

Since then, it has produced several important reports on the need for an integrated regulatory approach to stewardship and intrinsic purpose definition, and on the changes that should be made to the Financial Reporting Council’s UK Stewardship Code.

With all the potential opportunities and disruption involved with AI, this work is now absolutely crucial. It should help to ensure that businesses do not adopt new technologies without a strong underlying set of corporate values, so that it is not just shareholders who benefit, and so that the impact on, and distribution of benefit to, employees and society at large are fully considered.

We cannot, of course, confine these ethical challenges to the UK; we need ethical alignment in a globalised world. I hope we will both adopt the international principles which have been developed and, by the same token, argue for the international adoption of the purposeful company principles we are developing in the UK.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All Party Parliamentary Group on AI.