Recently I helped to launch the report of the House of Lords Select Committee on AI, which I chaired. This is a piece I wrote about the Report and its implications.

Barely a day goes by without a piece in the media on some new aspect of AI or robotics, including, I see, in today's Gulf Today. Some are pessimistic, others optimistic.

Elon Musk, the Tesla and SpaceX boss, has called AI more dangerous than nuclear weapons.

The late Professor Stephen Hawking put the future rather dramatically: "the development of full artificial intelligence could spell the end of the human race", and again: "the rise of powerful AI could be either the best or the worst thing ever to happen to humanity."

Others, such as Dr Nathan Myhrvold, former CTO of Microsoft, have a more optimistic view of the future: the market will solve everything.

The CEO of Google, Sundar Pichai, says AI is more profound than electricity or fire.

We need to recognize that understanding the implications of AI here and now is important: Amazon's Echo and Echo Dot, Google Home, Siri on Apple devices and a variety of other devices, for example, are already in one in ten homes in the USA and UK. As a result of the Cambridge Analytica saga, consumers and citizens are far more conscious of the uses to which their data is put, both through AI and otherwise, than they were just a few months ago.

This is the context for the Report of our House of Lords AI Select Committee, which came after nine months of inquiry, consideration of 225 written submissions of evidence, 22 sessions of fascinating oral testimony, one session being trained to build our own neural networks and a fair few lively meetings deciding amongst ourselves what to make of it all.

In our conclusions we are certainly not of the school of Elon Musk. On the other hand, we are not blind optimists. We are fully aware of the risks that the widespread use of AI could pose, but our evidence led us to believe that these risks are avoidable, or can be mitigated to reduce their impact.

Our task was "to consider the economic, ethical and social implications of advances in artificial intelligence". From the outset of the inquiry, we asked ourselves, and our witnesses, five key questions:

  • How does AI affect people in their everyday lives, and how is this likely to change?
  • What are the potential opportunities presented by artificial intelligence for the United Kingdom? How can these be realised?
  • What are the possible risks and implications of artificial intelligence? How can these be avoided?
  • How should the public be engaged with in a responsible manner about AI?
  • What are the ethical issues presented by the development and use of artificial intelligence?

"For AI to continue to be a success, we need to work together."

— Lord Clement-Jones

As the report is 181 pages long with 74 recommendations, you will be pleased to hear that I won't be going into detail here.

Our recommendations are intended to be practical and to build upon much of the excellent work already being done in the UK, and they revolve around five central threads which run through the report.

The first is that the UK is an excellent place to develop AI, and people are willing to use the technology in their businesses and personal lives. The question we asked was: how do we ensure that we stay one of the best places in the world to develop and use AI?

There is no silver bullet. But we have identified a range of sensible steps that will keep the UK on the front foot.

These include making data more accessible to smaller businesses, and asking the Government to establish a growth fund for SMEs so that they can scale up their businesses domestically without having to find investment from overseas or prematurely sell to a tech major. The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK. Its recent AI Sector Deal is a good start, but only a start: real ambition is needed.

A second thread relates to diversity and inclusion:

  • In education and skills
  • In digital understanding
  • In job opportunities
  • In the design of AI and algorithms
  • In the datasets used

In particular, the prejudices of the past must not be unwittingly built into automated systems. We say that the Government should incentivise the development of new approaches to the auditing of datasets used in AI, and should also encourage greater diversity in the training and recruitment of AI specialists.

A third thread relates to equipping people for the future. Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.

At earlier stages of education, children need to be adequately prepared for working with, and using, AI; an understanding of data is crucial.

A fourth thread is that individuals need to be able to have greater personal control over their data, and the way in which it is used. We need to get the balance right between maximising the insights which data can provide to improve services and ensuring that privacy is protected.

The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency.

This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability, the Hub of All Things and data trusts.

AI has the potential to be truly disruptive to business and to the delivery of public services. For example, AI could completely transform our healthcare, both administratively and clinically, if NHS data is labelled, harnessed and curated in the right way. But it must be done in a way which builds public confidence, which is why these new frameworks and mechanisms are so important.

Transparency in AI is needed. We recommended that industry, through the new AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.

Of particular importance to the committee was the need to avoid data monopolies, particularly by the tech majors. Access to large quantities of data is one of the factors fuelling the current AI boom. We heard considerable evidence that the ways in which data is gathered and accessed need to change, so that innovative companies, big and small, as well as academia, have fair and reasonable access to data.

Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. In our report we call on the Government, with the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by big technology companies operating in the UK. It is vital that SMEs have access to datasets so that they are free to develop AI.

The fifth and unifying thread is that an ethical approach is fundamental to making the development and use of AI a success for the UK. The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment.

A great deal of lip service is being paid to the ethical development of AI, but the time has come for action. We've suggested five principles that could form the basis of a cross-sector AI Code.

  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

These principles are just to get the ball rolling, and the conversation must not be confined to academics, businesses or Governments. They must be agreed and shared widely, and work for everyone. Without this, an agreed ethical approach will never be given a chance to get off the ground.

We did not suggest any new regulatory body for AI, taking the view that ensuring ethical behaviour should be the role of existing regulators, whether the FCA, the CMA, the ICO or Ofcom. We also believe that in the private sector there is a strong potential role for ethics advisory boards.

AI is not without its risks, and the adoption of the principles proposed by the Committee will help to mitigate them. An ethical approach will ensure that the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.

All this adds up to a package which we believe will ensure that the UK remains competitive in this space.

AI policy is in its infancy in the UK. The Government has made a good start in policy making, and our report is intended to be collaborative in spirit and to help develop that policy to ensure it is comprehensive and coordinated.

In our Report we asked whether the UK is ready, willing and able to take advantage of AI. With our recommendations, it will be.

The omens from Government are good. What we need to do from now on is make sure that our recommendations are adopted. Where you agree with them, we welcome support in taking them forward with industry, academia and the Government. For AI to continue to be a success, we need to work together.