This is my reaction to the government’s AI White Paper response, which broadly follows the line taken by the White Paper last year and does not take on board the widespread demand, expressed during the consultation, for regulatory action.

There is a gulf between the government’s hype about a bold and considered approach, and leading in safe AI, and the reality. The fact is we are well behind the curve, responding at a snail’s pace to fast-moving technology. We are far from being bold leaders in AI. This is in contrast to the EU, whose AI Act is grappling with the here-and-now risks in a constructive way.

As the House of Lords Communications and Digital Committee said in its recent report on Large Language Models, there is evidence of regulatory capture by the big tech companies. The government, rather than regulating, just keeps saying more research is needed, in the face of clear current evidence of the risk of many uses and forms of AI. Apparently it is all too early to think about tackling the risks in front of us; we are expected to wait until we have complete understanding and experience of the risks involved. Effectively we are being treated as guinea pigs to see what happens, while the government talks about the existential risks of AI instead.

Further, the response to the White Paper states that general-purpose AI already presents considerable risks across a range of sectors, in effect admitting that a purely sectoral approach is not practical.

The government has failed to move from principles to practice. Sticking to its piecemeal, context-specific approach, it is not suggesting immediate regulation, nor any new powers for sector regulators, but, as anticipated, the setting up of a new central function (with the CDEI being subsumed into DSIT as the Responsible Technology Adoption Unit) and an AI Safety Institute to assess risk and advise on future regulation. In essence, any action, without new powers, will be left to the existing regulators rather than any new horizontal duties being mandated.

Luckily others, such as the EU and, contrary to many forecasts, even the US, are grasping the nettle.

My view is that:

We need early, risk-based, horizontal legislation across the sectors, ensuring that standards for a proper risk-management framework and impact assessment are imposed when AI systems are developed and adopted, with consequences flowing when a system is assessed as high risk: additional requirements to adopt standards of transparency and independent audit.

We shouldn’t just focus on existential long-term risk or risk from frontier AI; predictive AI is important too, in terms of automated decision making, risk of bias and lack of transparency.

We should focus as well on interoperability with other jurisdictions, which means working with the BSI, ISO, IEEE, OECD and others towards convergence of standards on risk management, impact assessment, testing, audit, design, continuous monitoring and so on. There are several existing international standards, such as ISO 42001 and 42006, which are ready to be adopted.

The government needs to embed its Algorithmic Transparency Recording Standard into our public services, alongside risk assessment of the AI systems it uses, together with a public register of AI systems in use in government. It also needs to beef up the Data Protection Bill in terms of the rights of data subjects relative to automated decision making, rather than water them down, and to retain and extend the Data Protection Impact Assessment and the DPO for use in AI regulation.

Also, and very importantly in the light of recent news from the IPO, I hope the government will take strong note of the House of Lords report on the use of copyrighted works by LLMs. The government has adopted its usual approach of relying on voluntary measures, but it is clear that this simply is not going to work. The ball is back in its court, and it needs to act decisively to make sure that these works are not ingested into training LLMs without any return to rightsholders.

In summary: we need consistency, certainty and convergence if developers, adopters and consumers are going to innovate safely and take advantage of AI advances, and currently the UK is not delivering the prospect of any of these.