As Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence, I recently gave a speech at the Berlin AI Expo on why business needs to develop an ethical framework for the use of AI and algorithms. This is what I said.

Society as a whole is becoming more and more conscious of the impact of AI. I am still not sure, however, that we are fully conscious of exactly how fast and how far-reaching that impact will be.

But after decades of fiction it is now becoming fact. At the Digital Innovators’ Summit in Berlin recently, Tobias Hellwig, Editorial Developer at SPIEGEL Tech Lab, apparently asked his audience “Do you remember the talking car?”, instantly reminding them that long before Alexa and Siri, characters such as Knight Rider’s KITT and 2001: A Space Odyssey’s HAL were the heroes of artificial intelligence.

But here and now, Amazon’s Echo and Echo Dot devices have already generated 34,000 five-star reviews among consumers who engage with them on average 16 times a day. AI, particularly in the form of chatbots, is one of the big areas of tech to watch. Developers have written 34,000 chatbots for Facebook’s Messenger in the last six months, and they have taken off so much that apparently 25% of the users of Microsoft’s Xiaoice chatbot in China and Japan have told “her” that they love her.

A good example of current and future impact is the professions, among whose members I number myself. As Professor Richard Susskind notes, we have seen some impact on the professions already: lawyers are using AI for due diligence; architects use CAD, engineers and accountants even more so. Even the clergy: there is an app for confessions! But so far none of this has changed the professional advisory model radically.

Far more radical changes are on the way. It is likely that machines will be able to do almost all routine professional work. Processing of data with the necessary algorithms will give rise to alternative ways of delivering practical professional expertise.

As Susskind says, the professions risk becoming as outdated as the old liveries and crafts: fletchers (arrow makers) or coopers (barrel makers), for instance.

The recent Royal Society report on machine learning states that healthcare is where the biggest impact will be.

A key factor here is the potential hollowing out of professional skills. How are young professionals and other experts going to gain the necessary mid-career experience when AI is going to do so much of the work?

Others, such as Greg Ip, Chief Economics Commentator of the Wall Street Journal, argue that any pessimism is misplaced.

But on any basis there is a huge societal and moral dimension here. What are we content to allow AI to do in substitution for humans? What kinds of judgement can we allow machines to make? Turning off life-support systems in a hospital, for instance?

It may be that in many circumstances a patient or client will want someone who can share the human condition: a carer, a marriage counsellor, an undertaker or a receptionist, for instance. Even when making a will, AI by itself may not be good enough.

Professor Margaret Boden expresses the overarching question very simply for us as “even if AI can do something, should it?”

Many of these questions almost amount to the questions raised by Dr Yuval Noah Harari in his recent book “Homo Deus”. What kind of human should we be turning ourselves into? How can we protect ourselves from our destructive nature?

AI systems learn from (perhaps “interrogate”) the data they are presented with, and that data will inevitably reflect patterns of human behaviour. If we simply reflect human values when instilling values in AI, aren’t we storing up trouble for ourselves, given humans’ ability to go off the rails?

We have the Tay example readily to hand: an AI chatbot from Microsoft that machine-learnt all the wrong kinds of behaviour, in this case adopting racist and sexist language and attitudes from online conversations, and had to be shut down within a week of its launch in March last year.
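
To make concrete how mechanically this kind of thing happens, consider a minimal sketch: a trivial “model” that simply learns the majority outcome per group in its training records will faithfully reproduce whatever historical skew those records contain. The hiring records below are entirely invented for illustration.

```python
# A minimal, invented sketch: a trivial "model" that learns the
# majority historical outcome per group will faithfully reproduce
# any skew in its training data. All records here are made up.
from collections import Counter

# Invented historical hiring records: (candidate_group, hired)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def train(records):
    """Return, for each group, the majority outcome seen in the data."""
    model = {}
    for group in {g for g, _ in records}:
        counts = Counter(hired for g, hired in records if g == group)
        model[group] = counts.most_common(1)[0][0]
    return model

model = train(history)
print(model)  # {'group_a': True, 'group_b': False} -- the skew, learnt
```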

Will we instil violent behaviour in military robots?

Shouldn’t we be thinking about values in a rather different way?

A further dimension is the question of what kind of intrusion and monitoring of individuals in the employment context is acceptable. What values should companies be adopting towards this?

Then we have the issue of what skills will be required in the future. The Royal Society makes a strong case for cross-disciplinary skills. Other skills include cross-cultural competency, novel and adaptive thinking, and social intelligence. We need new, active programmes to develop these skills. Young people need much better information at the start of their working lives about the growth prospects for different sectors if they are to make informed career choices.

So we are going to need creative skills, data skills and innovation skills, but we may well not need quite so much in the way of analytical skills in the future, because that work will be done for us.

The jobs of the future have been described by Ginni Rometty, Chairman of IBM, as being not about white-collar versus blue-collar jobs, but about the “new collar” jobs that employers in many industries demand, but which remain largely unfilled.

She says: “We are hiring because the nature of work is evolving – and that is also why so many of these jobs remain hard to fill. As industries from manufacturing to agriculture are reshaped by data science and cloud computing, jobs are being created that demand new skills – which in turn requires new approaches to education, training and recruiting.”

She added: “And the surprising thing is that not all these positions require advanced education… What matters most is that these employees – with jobs such as cloud computing technicians and services delivery specialists – have relevant skills, often obtained through vocational training.”

Brynjolfsson and McAfee, in their book The Second Machine Age, develop the skills discussion further, but crucially add that at the end of the day we have to decide which values to adopt in the face of technological change.

So we come back again to the moral and societal dimension. How do we ensure that the benefits of AI are evenly distributed? That the productivity gains of AI benefit us all, and not simply the major corporations? Will the dividend be shared? Does the possibility that the distribution of jobs themselves will be so uneven mean that we need to contemplate a Universal Basic Income?

Ryan Avent in his perceptive book The Wealth of Humans concludes “Faced with this great, powerful, transformative force, we shouldn’t be frightened. We should be generous. We should be as generous as we can be.”

The Royal Society in their recent Machine Learning Report talk about the ‘democratisation of AI’ in this context, and this brings great responsibilities for employers in terms of understanding the disruption to their workforce and undertaking retraining. The Open Source Initiative is one already available response to this.

It also means that there must be standards of accountability. The potential for bias in algorithms, for instance, is a great concern. How will we know in the future, when a mortgage, a grant or an insurance policy is refused, that there was no bias in the system?
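
To illustrate what even a first, crude check might look like: the sketch below compares approval rates across groups in a decision log and flags large disparities using the familiar “four-fifths” heuristic. The data, group names and threshold are invented for illustration, and passing such a check is no guarantee of fairness.

```python
# A minimal, invented sketch of a first-pass bias audit: compare
# approval rates across groups in a decision log and flag large
# disparities. Data, groups and the 80% threshold are illustrative.
from collections import defaultdict

# Invented decision log: (applicant_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approved[group] += outcome  # booleans count as 0/1

rates = {g: approved[g] / totals[g] for g in totals}
print("Approval rates:", rates)  # group_a 75%, group_b 25%

# "Four-fifths" heuristic: flag any group whose approval rate falls
# below 80% of the best-treated group's rate. A pass here is only a
# crude first screen, not proof of an unbiased system.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Possible disparate impact against {group}: "
              f"{rate:.0%} vs {best:.0%}")
```

A real audit would of course go much further, looking at the training data, proxy variables and error rates as well as raw outcomes; the point is simply that such checks can be made routine.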

The CEO of Microsoft himself, Satya Nadella, has urged creators to take accountability for the algorithms they create, in view of the possible unintended consequences that only they can solve.

It is vital that throughout we treat AI as a tool, not as a technology that controls us. With software that has been described as “learned not crafted”, it will be increasingly important for us to know that machine learning in all its forms is not autonomous, and that it has a clear purpose and direction for the benefit of mankind.

How, therefore, do we take this forward? In the UK we are currently undertaking a major review of our corporate governance processes. This includes suggestions for new rights of approval for shareholders and greater stakeholder engagement beyond the shareholders. But for AI, governance really means a combination of legal, ethical and behavioural standards of conduct that need to be established.

"Society as a whole is becoming more and more conscious of the impact of AI."

— Lord Clement-Jones

AI presents a particular twofold set of challenges, identified by the Royal Society: first, the way in which machine learning algorithms use the data sets on which they are trained, in particular as regards privacy and data use; and secondly, the properties of the resulting algorithms after they have been trained on that data, including, as the Society puts it, safety, reliability, interpretability and responsibility. To this I would add transparency, human involvement in quality control, and lack of bias.

Will Hutton and Birgitte Andersen of the Big Innovation Centre, in the context of the challenges of Brexit and the industrial strategy it makes necessary, have argued for the creation of much stronger and more purposeful corporate cultures. As they say:

“The opportunities and challenges of digitisation, of artificial intelligence with all the ethical issues cross-cutting the enormous possibilities, of the internet of things etc. are best exploited by companies with a strong sense of purpose.”

This chimes with many of the Thematic Pillars adopted by the Partnership on AI, founded by companies such as Apple, Amazon, Facebook, Google/DeepMind, IBM and Microsoft, and which now includes a rapidly growing list of non-Silicon Valley members.

In this context I believe the time has come for businesses with a strong AI and algorithm component to consider setting up AI ethics advisory boards, to ensure that algorithms are free of bias when making decisions, for instance on credit ratings, mortgages or insurance, especially if that rather chilling concept, the all-encompassing “Master Algorithm”, comes to fruition.

Such ethics advisory boards will also need to draw lines in terms of what they think it is appropriate for AI to do within a business, because change can be as rapid and as far-reaching as we want, and its impact can be as assistive to, or as substitutive for, human employment and skills as desired.

The Royal Society, however, argues for a sectoral approach to governance, on the grounds that the issues can be very “context specific” and that regulators should therefore be specific to each sector.

I believe that something voluntary, more akin to a common governance framework, is desirable and can be constructed without amounting to a one-size-fits-all solution. In research we already have this in the proposal for the Responsible Research and Innovation Framework. We need the corporate and commercial equivalent.

Nevertheless, however voluntary the governance aspects (perhaps on an increasingly commonplace “comply or explain” basis), there will be a need to legislate to establish legal liability where AI carries out its tasks incompetently, inadequately or with bias and thereby causes damage. We will need to determine to what extent corporate bodies or individual actors are liable.

What status will robots have in law? Last year, as reported by Future Advocacy in their report “An Intelligent Future?”, the European Parliament released a proposal suggesting that robots be classed as ‘electronic persons’ in response to the increasing abilities and uses of robotics and AI systems.

Added to this, and perhaps the greatest priority of all, is the need to ensure public understanding and acceptance of AI. This is not guaranteed simply by the increasing prevalence of AI and algorithm-based functions, which now appear in everyday forms from search engines to online recommender systems, voice recognition, translation and fraud detection.

In fact public awareness of AI and machine learning is very low, even if what they deliver is well recognised. It is clear that where there is awareness, a number of concerns are expressed, such as the fear that these systems could cause harm, replace people and skew the choices available to us.

So public engagement is crucial to building trust in AI and machine learning. This in turn means ensuring that algorithms are interpretable and transparent, which brings us straight back to governance.
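
To illustrate one simple form of interpretability: for a linear scoring model it is possible to report each input’s contribution to a decision, so that a refusal can be explained to the person affected. The model, weights and applicant below are invented for illustration; real systems are rarely this simple, which is precisely why interpretability has to be designed in.

```python
# A minimal, invented sketch of interpretability for a linear
# scoring model: report each feature's contribution to a decision
# so it can be explained. Features, weights and the applicant are
# all illustrative assumptions, not a real scoring system.
features = {"income": 42_000, "years_at_address": 3, "missed_payments": 2}
weights = {"income": 0.0001, "years_at_address": 0.5, "missed_payments": -2.0}
intercept = -1.0  # model intercept (assumed)

# Per-feature contribution to the overall score
contributions = {name: weights[name] * value for name, value in features.items()}
score = intercept + sum(contributions.values())
decision = "approve" if score > 0 else "refuse"

print(f"Decision: {decision} (score {score:.2f})")
for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contrib:+.2f}")
```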

The potential of artificial intelligence to revolutionise our landscape is almost infinite – but there is a huge amount of work to be done before the ethical and societal issues are ironed out. Professor Stephen Hawking has put the future rather dramatically: “the development of full artificial intelligence could spell the end of the human race”, and again, “the rise of powerful AI could either be the best or the worst thing ever to happen to humanity.”

I would not be so pessimistic, but we must absolutely build on the best of our human virtues and create a virtuous circle of trust and communication, aligned with ethical behaviour and transparency in the construction of algorithms. Governments, business and academia, in close partnership with the public, should start work on this immediately.