The House of Lords recently debated the report of its Select Committee on Communications and Digital entitled “Digital regulation: joined-up and accountable”.

This is what I said about the shape digital regulation should take and how it could best be co-ordinated.

In their digital regulation plan, first published last July and updated last month, the Government acknowledged that

“Digital technologies … demand a distinct regulatory approach … because they have distinctive features which make digital businesses and applications unique and innovative, but may also challenge how we address risks to consumers and wider society.”

I entirely agree, but I also agree with the noble Baroness, Lady Stowell, the noble Lord, Lord Vaizey, and the noble Earl, Lord Erroll, that we need to do this

without the kind of delays in introducing regulation that we are already experiencing.

The plan for digital regulation committed to ensuring a forward-looking and coherent regulatory approach for digital technologies. The stress throughout the plan and the digital strategy is on a light-touch and pro-innovation regulatory regime, in the belief that this will stimulate innovation. The key principles stated are “Actively promote innovation”, achieve “forward-looking and coherent outcomes” and

“Exploit opportunities and address challenges in the international arena”.

This is all very laudable and reinforced by much of what the Select Committee said in its previous report, as mentioned by the noble Baroness. But one of the key reasons why the design of digital governance and regulation is important is to ensure that public trust is developed and retained in an area where there is often confusion and misunderstanding.

With the Online Safety Bill arriving in this House soon, we know only too well that the power of social media algorithms needs taming. Retention of public trust has not been helped by confusion over the use of algorithms to determine exam results during the pandemic, or by poor communication about the use of data in the Covid tracing app, the GP data opt-out and initiatives such as the Government’s single-ID “One Login” project. These, together with the growth of automated decision-making, live facial recognition and the use of biometric data, are a real cause for concern for many of us.

The fragility of trust in government use and sharing of personal data was demonstrated when Professor Ben Goldacre recently gave evidence to the Science and Technology Committee, explaining that, despite being the Government’s lead adviser on the use of health data, he had opted out of giving permission for his GP health data to be shared.

As an optimist, I believe that new technology can potentially lead to greater productivity and more efficient use of resources. But, as the title of Stephanie Hare’s new book puts it, Technology Is Not Neutral. We should be clear about the purpose and implications of new technology when we adopt it, which means regulation which has the public’s trust. For example, freedom from bias is essential in AI systems and in large part depends on the databases we use to train AI. The UK’s national AI strategy of last September does talk about public trust and the need for trustworthy AI, but this needs to be reflected in our regulatory landscape and how we regulate. In the face of the need to retain public trust, we need to be clear, above all, that regulation is not necessarily the enemy of innovation; in fact, it can be the stimulus and key to gaining and retaining public trust around digital technology and its adoption.

We may not need to go full fig, as with the EU Artificial Intelligence Act, but the fact is that AI is a very different animal from previous technology. For instance, not everything is covered by existing equalities or data protection legislation, particularly in terms of accountability, transparency and explainability. A considerable degree of horizontality across government, business and society is needed to embed the OECD principles.


As the UK digital strategy published this month makes clear, there is a great deal of future regulation in the legislative pipeline, although, as the noble Baroness mentioned, we are lagging behind the EU. As a number of noble Lords mentioned, we are expecting a draft digital competition Bill in the autumn which will usher in the DMU in statutory form and a new pro-competition regime for digital markets. Just this week, we saw the publication of the new Data Protection and Digital Information Bill, with new powers for the ICO. We have also seen the publication of the national AI strategy, AI action plan and AI policy statement.

In the context of increased digital regulation and the need for co-ordination across regulators, the Select Committee welcomed the formation of the Digital Regulation Cooperation Forum by the ICO, CMA, Ofcom and FCA, and so do I, alongside the work plan which the noble Baroness, Lady Stowell, mentioned. I believe that this will make a considerable contribution to public trust in regulation. It has already made great strides in building a centre of excellence in AI and algorithm audit.

The UK Digital Strategy elaborates on the creation of the DRCF:

“We are also taking steps to make sure the regulatory landscape is fully coherent, well-coordinated and that our regulators have the capabilities they need … Through the DRCF’s joint programme of work, it has a unique role to play in developing our pro-innovation approach to regulation.”

Like the Select Committee in one of its key recommendations, I believe we can go further in ensuring a co-ordinated approach to digital regulation, in horizon scanning—which has been mentioned by all noble Lords—and in adapting to future regulatory needs and overseeing fitness for purpose, particularly through a statutory duty on regulators to co-operate and consult with one another. That is a proposal which the Joint Committee on the Draft Online Safety Bill, of which I was a member, took up with enthusiasm. We also agreed with the Select Committee that the DRCF should be put on a statutory footing, with the power to resolve conflicts by directing its members. I was extremely interested to hear from noble Lords, particularly the noble Lord, Lord Vaizey, and the noble Earl, Lord Erroll, about the circumstances in which such conflicts might need to be resolved. It is notable that the Government think that a bridge too far.

This very week, the Alan Turing Institute published a very interesting report entitled Common Regulatory Capacity for AI. As it says, the use of artificial intelligence is increasing across all sectors of the economy, which raises important and pressing questions for regulators. Its very timely report presents the results of research into how regulators can meet the challenge of regulating activities transformed by AI and maximise the potential of AI for regulatory innovation.

It takes the arguments of the Select Committee a bit further and goes into some detail on the capabilities required for the regulation of AI. Regulators need to be able to ensure that regulatory regimes are fit for AI and that they are able to address AI-related risks and maintain an environment that encourages innovation. It stresses the need for certainty about regulatory expectations, public trust in AI technologies and the avoidance of undue regulatory obstacles.


Regulators also need to understand how to use AI for regulation. The institute also believes that there is an urgent need for an increased and sustainable form of co-ordination on AI-related questions across the regulatory landscape. It highlights the need for access to new sources of shared AI expertise, such as the proposed AI and regulation common capacity hub, which

“would have its home at a politically independent institution, established as a centre of excellence in AI, drawing on multidisciplinary knowledge and expertise from across the national and international research community.”

It sets out a number of different roles for the newly created hub.

To my mind, these recommendations emphasise the need for the DRCF to take statutory form in the way suggested by the Select Committee. But, like the Select Committee, I believe that it is important that other regulators can come on board the DRCF. Some of them are statutory, such as the Gambling Commission, the Electoral Commission and the IPO, and I think it would be extremely valuable to have them on board. However, some of them are non-statutory, such as the BBFC and the ASA. They could have a place at the table and join in benefiting from the digital centre of excellence being created.

Our Joint Committee also thoroughly agreed with the Communications and Digital Committee that a new Joint Committee on digital regulation is needed in the context of the Online Safety Bill; indeed, the Secretary of State herself has expressed support. As the Select Committee recommended, such a committee could cover the broader digital landscape: partly to oversee the work of the DRCF but also, importantly, to scrutinise the Secretary of State, look across the digital regulation landscape and carry out horizon scanning of evolving challenges, which both our Joint Committee and the Select Committee considered very important.

The Government are engaged in a great deal of activity. The question, as ever, is whether the objectives, such as achieving trustworthy AI, digital upskilling and powers for regulators, are going to be achieved through the actions being taken so far. I believe that the recommendations of the Select Committee set out in this report would make a major contribution to ensuring effective and trustworthy regulation and should be supported.