Lord C-J: Give Musicians the Freedom to Tour

At a recent debate, colleagues and I heavily criticised the Government’s failure to secure a cultural exemption from cabotage rules in the EU trade negotiation.

My Lords, I join with other noble Lords in pointing out that the issues on cabotage are part of a huge cloud now hanging over the creative sector, including the requirement for work permits or visa exemptions in many EU countries, CITES certificates for musical instruments, ATA carnets for all instruments and equipment, and proof of origin requirements for merchandise. Cabotage provisions in the EU-UK Trade and Co-operation Agreement will mean that performers’ European tours will no longer be viable, because the agreement specifies that hauliers will be able to make only two journeys within a trip to the EU. Having to return to the UK between unloading sites in the EU will have a significant negative impact on the UK’s cultural exports and associated jobs.

A successful UK transport industry dedicated to our creative industries is at risk of relocation to the EU, endangering British jobs and jeopardising the attractiveness of the UK as a culture hub, as support industries will follow the companies that relocate to the EU. What proposals do the Government have for a negotiated solution, such as they have heard about today, that will meet the sector’s needs?


COVID-19, Artificial Intelligence and Data Governance: A Conversation with Lord Tim Clement-Jones

 

BIICL June 2020

 

https://youtu.be/sABSaAkkyrI

 

This was the first in a series of webinars on 'Artificial Intelligence: Opportunities, Risks, and the Future of Regulation'.

In light of the COVID-19 outbreak, governments are developing tracing applications and using a multitude of data to mitigate the spread of the virus. But the processing, storage and use of personal data, and the public health effectiveness of these applications, require public trust and a clear and specific regulatory context.

The technical focus in the debate on the design of the applications - centralised v. decentralised, national v. global, and so on - obfuscates ethical, social, and legal scrutiny, in particular against the emerging context of public-private partnerships. Discussants focused on these issues, considering the application of AI and data governance issues against the context of a pandemic, national responses, and the need for international, cross border collaboration.

Lord Clement-Jones CBE led a conversation with leading figures in this field, including:

Professor Lilian Edwards, Newcastle Law School, the inspiration behind the draft Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates;

Carly Kind, Director of The Ada Lovelace Institute, which published the rapid evidence review paper Exit through the App Store? Should the UK Government use technology to transition from the COVID-19 global public health crisis;

Professor Peter Fussey, Research Director of Advancing human rights in the age of AI and the digital society at Essex University's Human Rights Centre;

Mark Findlay, Director of the Centre for Artificial Intelligence and Data Governance at Singapore Management University, which has recently published a position paper on Ethics, AI, Mass Data and Pandemic Challenges: Responsible Data Use and Infrastructure Application for Surveillance and Pre-emptive Tracing Post-crisis.

The event was convened by Dr Irene Pietropaoli, Research Fellow in Business & Human Rights, British Institute of International and Comparative Law.

 


Regulating artificial intelligence: Where are we now? Where are we heading?

By Annabel Ashby, Imran Syed & Tim Clement-Jones on March 3, 2021

https://www.technologyslegaledge.com/author/tclementjones/

Hard or soft law?

That the regulation of artificial intelligence is a hot topic is hardly surprising. AI is being adopted at speed, news reports about high-profile AI decision-making appear frequently, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem challenging.

Where are we now? What can we expect in terms of future regulation? And what might compliance with “ethical” AI entail?

High-level ethical AI principles were published by the OECD, EU and G20 in 2019. As explained below, great strides were made in 2020 as key bodies worked to translate these principles into proposed new regulation and operational processes. 2021 will undoubtedly keep up this momentum as these initiatives continue their journey into further guidance and some hard law.

In the meantime, with regulation playing catch-up with reality (so often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes. While this is helpful and laudable, regulators are taking the view that more consistent, risk-based regulation is preferable to voluntary best practice.

We outline the most significant initiatives below, but first it is worth understanding what regulation might look like for an organisation using AI.

Regulating AI

Of course the devil will be in the detail, but analysis of the most influential papers from around the globe reveals common themes that are the likely precursors of regulation. Conceptually, the regulation of AI is fairly straightforward and has three key components:

  • setting out the standards to be attained;
  • creating record keeping obligations; and
  • possible certification following audit of those records, which will all be framed by a risk-based approach.

Standards

Quality starts with the governance process around an organisation’s decision to use AI in the first place (does it, perhaps, involve an ethics committee? If so, what does the committee consider?) before considering the quality of the AI itself and how it is deployed and operated by an organisation.

Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can “bake in” inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called “black box” of a neural network.

An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last Autumn includes mandatory social responsibility and environmental sustainability obligations.

Records

To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation’s own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about or access to training data sets; evidence that certain decisions were made at board level; staff training logs; operational records, and so on. Records will also form the foundation of the all-important accountability aspects of AI.

That said, AI brings particular challenges to record-keeping and audit. This includes an argument for going beyond one-off audits and static record-keeping and into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This is of course part of the appeal of moving to AI, but it creates potentially greater opportunity for bias or errors to be introduced and to scale quickly.

Certification

A satisfactory audit could inform AI certification, helping to drive quality and build the customer and public confidence in AI decision-making that is necessary for its successful use. Again, although the evolving nature of AI which “learns” complicates matters, certification will need to be measured against standards and monitoring capabilities that address these aspects of AI risk.

Risk-based approach

Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” use of AI. This concept is key; proportionate, workable regulation must take into account the context in which the AI is to be deployed and its potential impact rather than merely focusing on the technology itself.

Key initiatives and Proposals

Turning to some of the more significant developments in AI regulation, there are some specifics worth focusing on:

OECD

The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e. sector, stakeholder, purpose etc); data and input; AI model (i.e. neural or linear? Supervised or unsupervised?); and tasks and output (i.e. what does the AI do?). Read more here.
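To make those four dimensions a little more concrete, here is a minimal, purely illustrative sketch in Python of how an organisation might record one of its systems against them. The class and field names are our own paraphrase of the OECD dimensions, not an official OECD schema, and the example system is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemProfile:
    """Illustrative record of an AI system against the four OECD dimensions."""
    context: str            # sector, stakeholders and purpose of deployment
    data_and_input: str     # nature and provenance of the data the system uses
    ai_model: str           # e.g. neural or linear, supervised or unsupervised
    tasks_and_output: str   # what the system does and what it produces
    notes: List[str] = field(default_factory=list)  # free-text observations

# Hypothetical example: profiling a recruitment-screening tool
profile = AISystemProfile(
    context="HR / recruitment screening; affects job applicants",
    data_and_input="historical hiring records containing personal data",
    ai_model="supervised machine-learning classifier",
    tasks_and_output="ranks incoming applications for human review",
    notes=["training data to be checked for historical bias"],
)
print(profile)
```

A simple record of this kind could feed the record-keeping and audit processes described above.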

Europe

Several significant proposals were published by key institutions in 2020.

In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. It believes that regulation can underpin an AI “Eco-system of Excellence” with resulting public buy-in thanks to an “Eco-system of Trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.

In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on human oversight required to achieve certification. Concurrently the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the Operation of AI systems, which recommends, among other things, an update of the current product liability regime.

Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report identified gaps to be plugged in the existing legal protection (a conclusion which had also been reached by the European Parliamentary Research Service, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.

United Kingdom   

The AI Council’s AI Roadmap sets out recommendations for the strategic direction of AI to the UK government. That January 2021 report covers a range of areas; from promoting UK talent to trust and governance. For more detail read the executive summary.

Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.

The government’s response to that report has just been published. It recognises the need for public trust in AI, “including embedding ethical principles against a consensus normative framework.” The response promotes a number of initiatives, including the work of the AI Council and the Ada Lovelace Institute, which have together been developing a legal framework for data governance upon which they are about to report.

The influential Centre for Data Ethics and Innovation (CDEI) published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.

Regulators

Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to overreliance by the major players on any particular algorithm and/or data pool (the competition authorities will be interested in this aspect too)? The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.

The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.

Regulators themselves are now becoming more of a focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government states that the Competition and Markets Authority, the Information Commissioner’s Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.

2021 and beyond

In Europe we can expect regulation to develop at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is therefore less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.


Tackling the algorithm in the public sector

Constitution Society Blog Lord C-J March 2021

Lord Clement-Jones CBE is the House of Lords Liberal Democrat Spokesperson for Digital and former Chair of the House of Lords Select Committee on Artificial Intelligence (2017-2018).

https://consoc.org.uk/tackling-the-algorithm-in-the-public-sector/

 

 

Algorithms in the public sector have certainly been much in the news since I raised the subject in a House of Lords debate last February. The use of algorithms in government – and more specifically, algorithmic decision-making – has come under increasing scrutiny.

The debate has become more intense since the UK government’s disastrous attempt to use an algorithm to determine A-level and GCSE grades in lieu of exams, which had been cancelled due to the pandemic. This is what the FT had to say last August after the Ofqual exam debacle, where students were subjected to what has been described as unfair and unaccountable decision-making over their A-level grades:

‘The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure….’

It concluded: ‘Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts…. As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes.’ 

Over the past few years, we have seen a substantial increase in the adoption of algorithmic decision-making and prediction, or ADM, across central and local government. An investigation by the Guardian in late 2019 showed that some 140 local authorities out of 408 surveyed, and about a quarter of police authorities, were using computer algorithms for prediction, risk assessment and assistance in decision-making in areas such as benefit claims and the allocation of social housing – despite concerns about their reliability. According to the Guardian, nearly a year later that figure had increased to half of local councils in England, Wales and Scotland, many of them without any public consultation on their use.

Of particular concern are tools such as the Harm Assessment Risk Tool (HART) system used by Durham Police to predict re-offending, which was shown by Big Brother Watch to have serious flaws in the way the use of profiling data introduces bias, discrimination and dubious predictions.

Central government use is even more opaque but we know that HMRC, the Ministry of Justice, and the DWP are the highest spenders on digital, data and algorithmic services. 

A key example of ADM use in central government is the DWP’s much criticised Universal Credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group study ‘The Computer Says No’ shows that those accessing their online account are not being given adequate explanation as to how their entitlement is calculated.

The Joint Council for the Welfare of Immigrants (JCWI) and campaigning organisation Foxglove joined forces last year to sue the Home Office over an allegedly discriminatory algorithmic system – the so-called ‘streaming tool’ – used to screen migration applications. This is, it seems, the first successful legal challenge to an algorithmic decision system in the UK, although rather than defend the system in court, the Home Office decided to scrap the algorithm.

The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, looked at our Universal Credit system two years ago and said in a statement afterwards: ‘Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.’

Overseas the use of algorithms is even more extensive and, it should be said, controversial – particularly in the US. One such system is the NYPD’s Patternizr, a tool that the NYPD has designed to identify potential future patterns of criminal activity. Others include Northpointe’s COMPAS risk assessment programme in Florida and the InterRAI care assessment algorithm in Arkansas.

It’s not that we weren’t warned, most notably in Cathy O’Neil’s Weapons of Math Destruction (2016) and Hannah Fry’s Hello World (2018), of the dangers of replicating historical bias in algorithmic decision-making.

It is clear that failure to properly regulate these systems risks embedding bias and inaccuracy. Even when not relying on ADM alone, the impact of automated decision-making systems across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights.

Some of the current issues with algorithmic decision-making were identified as far back as our House of Lords Select Committee Report ‘AI in the UK: Ready Willing and Able?’ in 2018. We said at the time: ‘We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.’

It was clear from the evidence that our own AI Select Committee took that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection to those subject to ADM. It contains a ‘right to an explanation’ provision that applies when an individual has been subject to fully automated decision-making. However, few highly significant decisions are fully automated – often, algorithms are used as decision support, for example in detecting child abuse. The law should be expanded to also cover systems where AI is only part of the final decision.

The Science and Technology Select Committee Report ‘Algorithms in Decision-Making’ of May 2018, made extensive recommendations in this respect. It urged the adoption of a legally enforceable ‘right to explanation’ that allows citizens to find out how machine-learning programmes reach decisions affecting them – and potentially challenge their results. It also called for algorithms to be added to a ministerial brief, and for departments to publicly declare where and how they use them.

Last year, the Committee on Standards in Public Life published a review that looked at the implications of AI for the seven Nolan principles of public life, and examined if government policy is up to the task of upholding standards as AI is rolled out across our public services. 

The committee’s Chair, Lord Evans, said on publishing the report:

‘Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector…. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.’

The report found that, despite the GDPR, the Data Ethics Framework, the OECD principles, and the Guidelines for Using Artificial Intelligence in the Public Sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance but should be. The Committee’s report presented a number of recommendations to mitigate these risks, including:

  • greater transparency by public bodies in use of algorithms, 
  • new guidance to ensure algorithmic decision-making abides by equalities law, 
  • the creation of a single coherent regulatory framework to govern this area, 
  • the formation of a body to advise existing regulators on relevant issues, 
  • and proper routes of redress for citizens who feel decisions are unfair.

In the light of the Committee on Standards in Public Life Report, it is high time that a minister was appointed with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector, as was also recommended by the Commons Science and Technology Committee. 

We also need to consider whether – as Big Brother Watch has suggested – we should:

  • Amend the Data Protection Act to ensure that any decisions involving automated processing that engage rights protected under the Human Rights Act 1998 are ultimately human decisions with meaningful human input.
  • Introduce a requirement for mandatory bias testing of any algorithms, automated processes or AI software used by the police and criminal justice system in decision-making processes.
  • Prohibit the use of predictive policing systems that have the potential to reinforce discriminatory and unfair policing patterns.

This chimes with both the Mind the Gap report from the Institute for the Future of Work, which proposed an Accountability for Algorithms Act, and the Ada Lovelace Institute paper, Can Algorithms Ever Make the Grade? Both reports call additionally for a public register of algorithms, such as have been instituted in Amsterdam and Helsinki, and independent external scrutiny to ensure the efficacy and accuracy of algorithmic systems.

Post-COVID, private and public institutions will increasingly adopt algorithmic or automated decision-making. These systems will give rise to complaints requiring specialist skills beyond sectoral or data knowledge. The CDEI, in its report Bias in Algorithmic Decision Making, concluded that algorithmic bias means that the overlap between discrimination law, data protection law and sector regulations is becoming increasingly important, and that existing regulators need to adapt their enforcement to algorithmic decision-making.

This is especially true of both the existing and proposed public sector ombudsmen, who are – or will be – tasked with dealing with complaints about algorithmic decision-making. They need to be staffed by specialists who can test algorithms’ compliance with ethically aligned design and operating standards and regulation.

There is no doubt that to avoid unethical algorithmic decision making becoming irretrievably embedded in our public services we need to see this approach taken forward, and the other crucial proposals discussed above enshrined in new legislation.

The Constitution Society is committed to the promotion of informed debate and is politically impartial. Any views expressed in this article are the personal views of the author and not those of The Constitution Society.


 


Digital Technology, Trust, and Social Impact with David Puttnam

What is the role of government policy in protecting society and democracy from threats arising from misinformation? Two leading experts and members of the UK Parliament, House of Lords, help us understand the report Digital Technology and the Resurrection of Trust.

 

About the House of Lords report on trust, technology, and democracy

Michael Krigsman: We're discussing the impact of technology on society and democracy with two leading members of the House of Lords. Please welcome Lord Tim Clement-Jones and Lord David Puttnam. David, please tell us about your work in the House of Lords and, very briefly, about the report that you've just released.

Lord David Puttnam: Well, the most recent 18 months of my life were spent doing a report on the impact of digital technology on democracy. In a sense, the clue is in the title because my original intention was to call it The Restoration of Trust because a lot of it was about misinformation and disinformation.

The evidence we took, for just under a year, from all over the world made it evident the situation was much, much worse, I think, than any other committee, any of the 12 of us, had understood. I ended up calling it The Resurrection of Trust and I think that, in a sense, the switch in those words tells you how profound we decided that the issue was.

Then, of course, along comes January the 6th in Washington, and a lot of the things that we had alluded to and things that we regarded as kind of inevitable all, in a sense, came about. We're feeling a little bit smug at the moment, but we kind of called it right at the end of June last year.

Michael Krigsman: Our second guest today is Lord Tim Clement-Jones. This is his third time back on the CXOTalk. Tim, welcome back. It's great to see you again.

Lord Tim Clement-Jones: It's great to be back, Michael. As you know, my interest is very heavily in the area of artificial intelligence, but I have this crossover with David. David was not only on my original committee, but artificial intelligence is right at the heart of these digital platforms.

I speak on digital issues in the House of Lords. They are absolutely crucial. The whole area of online harms (to some quite high degree) is driven by the algorithms at the heart of these digital platforms. I'm sure we're going to unpack that later on today.

David and I do work very closely together in trying to make sure we get the right regulatory solutions within the UK context.

Michael Krigsman: Very briefly, Tim, just tell us (for our U.S. audience) about the House of Lords.

Lord Tim Clement-Jones: It is a revising chamber, but it's also a chamber which has the kind of expertise because it contains people who are maybe at the end of their political careers, if you like, with a small p, but have a big expertise, a great interest in a number of areas that they've worked on for years or all their lives, sometimes. We can draw on real experience and understanding of some of these issues.

We call ourselves a revising chamber but, actually, I think we should really call ourselves an expert chamber because we examine legislation, we look at future regulation much more closely than the House of Commons. I think, in many ways, actually, government does treat us as a resource. They certainly treat our reports with considerable respect.

Key issues covered by the House of Lords report

Michael Krigsman: David, tell us about the core issues that your report covered. Tim, please jump in.

Lord David Puttnam: I think Tim, in a sense, set it up quite nicely. We were looking at the potential danger to democracy—of misinformation, disinformation—and the degree to which the duty of care was being exercised by the major platforms (Facebook, Twitter, et cetera) in understanding what their role was in a new 21st Century democracy, both looking at the positive role they could play in terms of information, generating information and checking information, but also the negative in terms of the amplification of disinformation. That's an issue we looked at very carefully.

This is where Tim and my interests absolutely coincide because within those black boxes, within those algorithmic structures is where the problem lies. The problem century-wise—maybe this will spark people a little, I think—is that these are flawed business models. The business model that drives Facebook, Google, and others is the advertising-related business model. That requires volume. That requires hits, and their incomes are generated on the back of hits.

One of the things we tried to unpick, Michael, which was, I think, pretty important, was we took the view that it's about reach, not about freedom of speech. We felt that a lot of the freedom of speech advocates misunderstood the problem here. Really, the problem was the amplification of misinformation, which in turn benefited or was an enormous boost to the revenues of those platforms. That's the problem.

We are convinced through evidence. We're convinced that they could alter their algorithms, that they can actually dial down and solve many, many of the problems that we perceive. But, actually, it's not in their business interest to do so. They're trapped, in a sense, between the demands or requirements of their shareholders to optimize that, to optimize share value, and the role and responsibility they have as massive information platforms within a democracy.

Lord Tim Clement-Jones: Of course, governments have been extremely reluctant, in a sense, to come up against big tech in that sense. We've seen that in the competition area over the advertising monopoly that the big platforms have. But I think many of us are now much more sensitive to this whole aspect of data, behavioral data in particular.

I think Shoshana Zuboff did us all a huge benefit by really getting into detail on what she calls exhaust data, in a sense. It may seem trivial to many of us but, actually, the use to which it's put in terms of targeting messages, targeting advertising, and, in a sense, helping drive those algorithms, I think, is absolutely crucial. We're only just beginning to come to grips with that.

Of course, David and I are both, if you like, tech enthusiasts, but you absolutely have to make sure that we have a handle on this and that we're not giving way to unintended consequences.

Impact of social media platforms on society

Michael Krigsman: What is the deep importance of this set of issues that you spend so much time and energy preparing that report?

Lord David Puttnam: If you value, as certainly I do—and I'm sure we all do value—the sort of democracy we were born and brought up in, for me it's rather like carrying a porcelain bowl across a very slippery floor. We should be looking out for it.

I did a TED Talk in 2012 ... [indiscernible, 00:07:19] entitled The Duty of Care where I made the point that we use the concept of duty of care with many, many things: in the medical sense, in the educational sense. Actually, we haven't applied it to democracy.

Democracy, of all the things that we value, may end up looking like the most fragile. Our tolerance, if you like, of the growth of these major platforms, our encouragement of the reach because of the benefits of information, has kind of blindsided us to what was also happening at the same time.

Someone described the platforms as outrage factories. I'm not sure if anyone has come up with a better description. We've actually actively encouraged outrage instead of intelligent debate.

The whole essence of democracy is compromise. What these platforms do not do is encourage intelligent debate and reflect the atmosphere of compromise that any democracy requires in order to be successful.

Lord Tim Clement-Jones: The problem is that the culture has been, to date, against us really having a handle on that. I think it's only now, and I think that it's very interesting to see what the Biden Administration is doing, too, particularly in the competition area.

One of the real barriers, I think, is thinking of these things only in terms of individual harm. I think we're now getting to the point where, if somebody is affected by hate speech or racial slurs or whatever as an individual, then governments are beginning to accept that that kind of individual harm is something that we need to regulate and make sure that the platforms deal with.

I think the area that David is raising, which is so important and where there is still resistance in governments, is, if you like, the societal harms that are being caused by the platforms. Now, this is difficult to define, but the consequences could be severe if we don't get it right.

I think, across the world, you only have to look at Myanmar, for instance, [indiscernible, 00:09:33]. If that wasn't societal harm in terms of use by the military of Facebook, then I don't know what is. But there are others.

David has used the analogy of January the 6th, for instance. There are analogies and there are examples across the world where democracy is at risk because of the way that these platforms operate.

We have to get to grips with that. It may be hard, but we have to get to grips with it.

Michael Krigsman: How do you get to grips with a topic that, by its nature, is relatively vague and unfocused? Unlike individual harms, when you talk about societal harm, you're talking about very diffuse and broad impacts.

Lord David Puttnam: Michael, I sit on the Labour benches at the House of Lords and, probably unsurprisingly, I'm a Louis Brandeis fan, so I think the most interesting thing that's taking place at the moment is people who look back to the early part of the 20th Century and the railroads, the breaking up of the railroads, and understanding why that had to happen.

It wasn't just about the railroads. It was about the railroads' ability to block and distort all sorts of other markets. The obvious one was the coal market, but others. Then indeed it blocked and made extraordinary advances on the nature of shipping.

What I think legislators have woken up to is, this isn't just about platforms. This is actually about the way we operate as a society. The influence of these platforms is colossal, but most important of all, the fact that what we have allowed to develop is a business model which acts inexorably against our society's best interest.

That is, it inflames fringe views. It inflames misinformation. Actually, not only inflames it. It then profits from that inflammation. That can't be right.

Lord Tim Clement-Jones: Of course, it is really quite opaque because, if you look at this, the consumer is getting a free ride, aren't they? Because of the advertising, it's being redirected back to them. But it's their data which is part of the whole business model, as David has described.

It's very difficult sometimes for regulators to say, "Ah, this kind of consumer detriment," or whatever it may be. That's why you also need to look at the societal aspects of this.

If you purely look (in conventional terms) at consumer harm, then you'd actually probably miss the issues altogether because—with things like advertising monopoly, use of data without consent, and so on, and misinformation and disinformation—it is quite difficult (without looking at the bigger societal picture) just to pin it down and say, "Ah, well, there's a consumer detriment. We must intervene on competition grounds." That's why, in a sense, we're all now beginning to rewrite the rules so that we do catch these harms.

Balancing social media platforms rights against the “duty of care”

Michael Krigsman: We have a very interesting point from Simone Jo Moore on LinkedIn who is asking, "How do you strike this balance between intelligent questioning and debate versus trolling on social media? How should lawmakers and policymakers deal with this kind of issue?

Lord David Puttnam: We came up with, we identified an interesting area, if you like, of compromise – for want of a better word. As I say, we looked hard at the impact on reach.

Now, Facebook, if you were a reasonably popular person on Facebook, you can quite quickly have 5,000 people follow what you're saying. At that point, you get a tick.

It's clear to us that the algorithm is able to identify you as a super-spreader at that point. What we're saying is, at that moment not only have you got your tick but you then have to validate and verify what it is you're saying.

That state of outrage, if you like, is what blocks the 5,000 and then has to be explained and justified. That seemed to us an interesting area to begin to explore. Is 5,000 the right number? I don't know.

But what was evident to us is the things that Tim really understands extremely well. These algorithmic systems inside that black box can be adjusted to ensure that, at a certain moment, validation takes place. Of course, we saw it happen in your own election that, in the end, warnings were put up.

Now, you have to ask yourself, why wasn't that done much, much, much sooner? Why? Because we only reasonably recently became aware of the depth of the problem.

In a sense, the whole Russian debacle in the U.S. in the 2016 election kind of got us off on the wrong track. We were looking in the wrong place. It wasn't what Russia had done. It was what Russia was able to take advantage of. That should have been the issue, and it took us a long time to get there.

Lord Tim Clement-Jones: That's why, in a sense, you need new ways of thinking about this. It's the virality of the message, exactly as David has talked about, the super-spreader.

I like the expression used by Avaaz in their report that came out last year looking at, if you like, the anti-vaxx messages and the disinformation over the Internet during the COVID pandemic. They talked about detoxing the algorithm. I think that's really important.

In a sense, I don't think it's possible to lay down absolutely hard and fast rules. That's the benefit of the duty of care that it is a blanket legal concept, which has a code of practice, which is effectively enforced by a regulator. It means that it's up to the platform to get it right in the first place.

Then, of course – David's report talked about it – you need forms of redress. You need a kind of ombudsman, or whatever may be the case, independent of the platforms who can say, "They got it wrong. They allowed these messages to impact on you," and so on and so forth. There are mechanisms that can be adopted, but at the heart of it, as David said, is this black box algorithm that we really need to get to grips with.

Michael Krigsman: You've both used terms that are very interestingly put together, it seems to me. One, Tim, you were just talking about duty of care. David, you've raised (several times) this notion of flawed business models. How do these two, duty of care and the business model, intersect? It seems like they're kind of diametrically opposed.

Lord David Puttnam: It depends on your concept of what society might be, Michael. In the type of society I spent my life arguing for, they're not opposed at all, they're all of a piece, because that society would have a combination of regulation, but also personal responsibility on the part of the people who run businesses.

One of the things that I think Tim and I are going to be arguing for, which we might have problems in the UK, is the notion of personal responsibility. At what point do the people who sit on the board at Facebook have a personal responsibility for the degree to which they exercise duty of care over the malfunction of their algorithmic systems?

Lord Tim Clement-Jones: I don't see a conflict either, Michael. I think that you may see different regulators involved. You may see, for instance, a regulator imposing a way of working over content, user-generated content on a platform. You may see another regulator (more specialist, for instance) on competition. I think it is going to be horses for courses, but I think that's the important thing to make sure that they cooperate.

I just wanted to say that I do think that often people in this context raised the question of freedom of expression. I suspect that people will come on the chat and want to raise that issue. But again, I don't see a conflict in this area because we're not talking about ordinary discourse. We're talking about extreme messages: anti-vaxxing, incitement of violence, and so on and so forth.

The one thing David and I absolutely don't want to do is to impede freedom of expression. But that's sometimes used certainly by the platforms as a way of resisting regulation, and we have to avoid that.

How to handle the cross-border issues with technology governance?

Michael Krigsman: We have another question coming now from Twitter from Arsalan Khan who raises another dimension. He's talking about if individual countries create their own policies on societal harm, how do you handle the cross-border issues? It seems like that's another really tricky one here.

Lord David Puttnam: I think what is happening, and this is quite determined, I think, on the part of the Biden Administration—the UK and, actually, Europe, the EU, is probably further advanced than anybody else on this—is to align our regulatory frameworks. I think that will happen.

Now, in a sense, these are big marketplaces. The Australian situation with Facebook has stimulated this. Once you get these major markets aligned, it's extremely hard to see how Facebook, Google, and the rest of them could continue with their advertising with their current model. They would have to adjust to what those marketplaces require.

Bear in mind, what troubles me a lot, Michael, is that, if you think back, Mr. Putin and President Xi must be laughing their heads off at the mess we got ourselves into because they've got their own solution to this problem – a lovely, simple solution.

We've got our knickers in a twist in an extraordinary situation quite unintended in most states. The obligation is on the great Western democracies to align the regulatory frameworks and work together. This can't be done on a country-by-country basis.

Lord Tim Clement-Jones: Once the platforms see the writing on the wall, in a sense, Michael, I think they will want to encourage people to do that. As you know, I've been heavily involved in the AI ethics agenda. That is coming together on an international basis. This, if anything, is more immediate and the pressures are much greater. I think it's bound to come together.

It's interesting that we've already had a lot of interest in the duty of care from other countries. The UK, in a sense, is a bit of a frontrunner in this despite the fact that David and I are both rather impatient. We feel that it hasn't moved fast enough.

Nevertheless, even so, by international standards, we are a little bit ahead of the game. There is a lot of interest. I think, once we go forward and we start defining and putting in regulation, that's going to be quite a useful template for people to be able to legislate.

Lord David Puttnam: Michael, it's worth mentioning that it's interesting how things bubble up and then become accepted. When the notion of fines of up to 10% of turnover was first mooted, people said, "What?! What?!"

Now, that's regarded as kind of a standard around which people begin to gather, so there is momentum. Tim is absolutely right. There is momentum here. The momentum is pretty fierce.

Ten percent of turnover is a big fine. If you're sitting on a board, you've got to think several times before you sign up on that. That's not just the cost of doing business.

Michael Krigsman: Is the core issue then the self-interest of platforms versus the public good?

Lord David Puttnam: Yes, essentially it is. Look back at the big anti-trust decisions that were made in the first decade of the 20th Century. I think we're at a similar moment and, incidentally, I think it is certain that these things will be resolved within the next ten years in a very similar manner.

I think it's going to be up to the platforms. Do they want to be broken up? Do they want to be fined? Or do they want to rejoin society?

Lord Tim Clement-Jones: Yeah, I mean I could get on and really bore everybody with the different forms of remedies available to our competition regulators. But David talked about big oil, which was broken up by what are called structural remedies.

Now, it may well be that, in the future, regulators—because of the power of the tech platforms—are going to have to think about exactly doing that, say, separating Facebook from YouTube or from Instagram, or things of that sort.

We're not out of the era of "move fast and break things." We now are expecting a level of corporate responsibility from these platforms because of the power they wield. I think we have to think quite big in terms of how we're going to regulate.

Should governments regulate social media?

Michael Krigsman: We have another comment from Twitter, again from Arsalan Khan. He's talking about, do we need a new world order that requires technology platforms to be built in? It seems like as long as you've got this private sector set of incentives versus the public good, then you're going to be at loggerheads. In a practical way, what are the solutions, the remedies, as you were just starting to describe?

Lord Tim Clement-Jones: What are governments for? Arsalan always asks the most wonderful questions, by the way, as he did last time.

What are governments for? That is what the role of government is. It is, in a sense, a brokerage. It's got to understand what is for the benefit of, if you like, society as a whole and, on the other hand, what are the freedoms that absolutely need preserving and guaranteeing and so on.

I would say that we have some really difficult decisions to make in this area. But David and I come from the point of view of actually creating more freedom because the impact of the platforms (in many, many ways) will be to reduce our freedoms if we don't do something about it.

Lord David Puttnam: It's very, very much, and that's why I would argue, Michael, that the Facebook reaction or response in Australia was so incredibly clumsy because what it did is it begged a question we could really have done without, which is, are they more powerful than the sovereign nations?

Now, you can't go there because you get the G7 together or the G20 together, you know, you're not going to get into a situation where any prime minister is going to concede that, actually, "I'm afraid there's nothing we can do about these guys. They're bigger than us. We're just going to have to live with it." That's not going to happen.

Lord Tim Clement-Jones: The only problem there was the subtext. The legislation was prompted by one of the biggest media organizations in the world. In a sense, I felt pretty uncomfortable taking sides there.

Lord David Puttnam: I think it was just an encouragement to create a new series of an already long-running TV series.

Lord Tim Clement-Jones: [Laughter]

Lord David Puttnam: You're absolutely right about that. I had to put that down as an extraordinary irony of history. The truth is you don't take on nations, and many have.

Some of your companies have and genuinely believe that they were bigger. But I would say don't go there. Frankly, if I were a shareholder in Facebook – I'm not – I'd have been very, very, very cross with whoever made that decision. It was stupid.

Michael Krigsman: Where is all of this going?

Lord Tim Clement-Jones: We're still heavily engaged in trying to get the legislation right in the UK. But David and I believe that our role is to kind of keep government honest and on track and, actually, go further than they've pledged because this question of individual harm, remedies for that, and a duty of care in relation to individual harm isn't enough. It's got to go broader into societal harm.

We've got a road to travel. We've got draft legislation coming in very, very soon this spring. We've got then legislation later on in the year, but actually getting it right is going to require a huge amount of concentration.

Also, we're going to have to fight off objections on the basis of freedom of expression and so on and so forth. We are going to have to reroute our determination in principle, basically. I think there's a great deal of support out there, particularly in terms of protection of young people and things of that sort that we're actually determined to see happen.

Political messages and digital literacy

Michael Krigsman: Is there the political will, do you think, to follow through with these kinds of changes you're describing?

Lord David Puttnam: In the interest of a vibrant democracy, when any prime minister or president of any country looks at the options, I think they're facing many alternatives. I can't really imagine Macron, Johnson, or anybody else looking at the options available to them.

They may find those options quite uncomfortable, and the ability of some of these platforms to embarrass politicians is considerable. But when they actually look at the options, I'm not sure they're faced with that many alternatives other than going down the route that Tim just laid out for you.

Lord Tim Clement-Jones: I think the real Achilles heel, though, that David's report pointed out really clearly, and the government failed to answer satisfactorily, was the whole question of electoral regulation, basically. The use of misleading political messaging during elections, the impact of, if you like, opaque political messaging where it's not obvious where it's coming from, those sorts of things.

I think the determination of governments, especially because they are in control and they are benefiting from some of that messaging, there's a great reluctance to take on the platforms in those circumstances. Most platforms are pretty reluctant to take down any form of political advertising or messaging or, in a sense, moderate political content.

That for me is the bit that I think is going to be the agenda that we'll probably be fighting on for the next ten years.

Lord David Puttnam: Michael, it's quite interesting that both of the major parties – not Tim's party, as you behave very well – both of the major parties actually misled us. I wouldn't say lied to us, but they misled us in the evidence they gave about their use of the digital environment during an election, which was really lamentable. We called them out, but the fact that, in both places, they felt that they needed to, as necessary, break the law to give themselves an edge is a very worrying indicator of what we might be up against here.

Lord Tim Clement-Jones: The trouble is, political parties love data because targeted messaging, microtargeting as it's called, is potentially very powerful in gaining support. It's like a drug. It's very difficult to wean politicians off what they see as a new, exciting tool to gain support.

Michael Krigsman: I work with various software companies, major software companies. Personalization based on data is such a major focus of technology, of every aspect of technology with tentacles to invade our lives. When done well, it's intuitive and it's helpful. But you're talking about the often indistinguishable case where it's done invasively and insinuating itself into the pattern of our lives. How do you even start to grapple with that?

Lord Tim Clement-Jones: It kind of bubbled up in the Cambridge Analytica case where the guy who ran the company was stupid enough to boast about what they were able to do. What it illustrated is that that was the tip of a very, very worrying nightmare for all of us.

No, I mean this is where you come back to individual responsibility. The idea that the people, the management of Facebook, the management of Google, are not appalled by that possibility and aren't doing everything they can to prevent it is, I think, what gives everyone at Twitter nightmares.

I don't think they ever intended or wanted to have the power they have in these fringe areas, but they're stuck with them. The answer is, how do we work with governments to make sure they're minimized?

Lord Tim Clement-Jones: This, Michael, brings in one of David and my favorite subjects, which is digital literacy. I'm an avid reader of people who try and buck the trend. I love Jaron Lanier's book Ten Reasons for Deleting your Facebook Account [sic]. I love the book by Carissa Veliz called Privacy is Power.

Basically, that kind of understanding of what you are doing when you sign up to a platform—when you give your data away, when you don't look at the terms and conditions, you tick the boxes, you accept all cookies, all these sorts of things—it's really important that people understand the consequences of that. I think it's only a tiny minority who have this kind of idea they might possibly live off-grid. None of us can really do that, so we have to make sure that when we live with it, we are not giving away our data in those circumstances.

I don't practice what I preach half the time. We're all in a hurry. We all want to have a look at what's on that website. We hit the accept all cookies button or whatever it may be, and we go through. We've got to be more considerate about how we do these things.

Lord David Puttnam: Chapter 7 of our report is all about digital literacy. We went into it in great depth. Again, fairly lamentable failure by most Western democracies to address this.

There are exceptions. Estonia is a terrific exception. Finland is one of the exceptions. They're exceptions because they understand the danger.

Estonia sits right on the edge with its vast neighbor Russia with 20% of its population being Russian. It can't afford misinformation. Misinformation for them is catastrophe. Necessarily, they make sure their young people are really educated in the way in which they receive information, how they check facts.

We are very complacent in the West; I've got to say. I'll say this about the United States. We're unbelievably complacent in those areas and we're going to have to get smart. We've got to make sure that young people get extremely smart about the way they're fed and react and respond to information.

Lord Tim Clement-Jones: Absolutely. Our politics, right across the West, demonstrate that there's an awful lot of misinformation, which is believed – believed as the gospel, effectively.

Balancing freedom of speech on social media and cyberwarfare

Michael Krigsman: We have another question from Twitter. How do you balance social media reach versus genuine freedom of speech?

Lord David Puttnam: I thought I'd answered it. Obviously, I didn't. It's that you accept the fact that freedom of speech requires that people can say what they want. This goes back to the black boxes. At a certain moment, the box intervenes and says, "Whoa. Just a minute. There is no truth in what you're saying," or worse, in the case of anti-vaxxers, "There is actual harm and damage in what you're saying. We're not going to give you reach."

What you do is you limit reach until the person making those statements can validate them or affirm them or find some other way of, as it were, being allowed to amplify. It's all about amplification. It's trying to stop the amplification of distortion and lies and really quite dangerous stuff like anti-vaxx content.

We've got a perfect trial run, really, with anti-vaxxing. If we can't get this right, we can't get much right.

Lord Tim Clement-Jones: There are so many ways. When people say, "Oh, how do we do this?" you've got sites like Reddit, which have different communities, with rules applying to those communities that have to conform to a particular standard.

Then you've got Avaaz proposing not only detoxing the algorithm but also a duty of correction. Then you've got great organizations like NewsGuard, which have a sort of star system to verify the accuracy of news outlets. We do have the tools; we just have to be a bit determined about how we use them.

Michael Krigsman: We have another question from Twitter that I think addresses or asks about this point, which is, how can governments set effective constraints when partisan politics benefits from misusing digital technologies and even spreading misinformation?

Lord David Puttnam: Tim laid out for you early on why the House of Lords exists. This is where it actually gets quite interesting.

Both Tim and I, during our careers—and we both go back, I think, 25 years—have managed to get amendments into legislation against the head, that's to say, amendments that didn't suit either the government of the day or even the lead opposition of the day. The independence of the House of Lords is wonderfully, wonderfully valuable. It is expert and it does listen.

Just a tiny example: if someone said to either of us, "Why were you not surprised that your report didn't get more traction?" well, it's 77,000 words long. Yeah, it's 77,000 words long because it's a bloody complicated subject. We had the time and the luxury to do it properly.

I don't think that will necessarily prove to be a stumbling block. We have enough ... [indiscernible, 00:37:01] embarrassment. The quality of the House of Lords and the ability to generate public opinion, if you like, around good, sane, sensible solutions still function within a democracy.

But if you go down the road that Tim was just describing, if you allow the platforms to go down the route they appear to have taken, we'll be dealing with autocracy, not democracy. Then you're going to have a whole set of problems.

Lord Tim Clement-Jones: David is so right. The power of persuasion still survives in the House of Lords. Because the government doesn't have a majority, we can get things done if that power of persuasion is effective. We've done that quite a few times over the last 25 years, as David says.

Ministers know that. They know that if you espouse a particular cause that is clearly sensible, they're going to find themselves on a pretty sticky wicket, or whatever the appropriate baseball analogy would be, Michael, in those circumstances. We have had some notable successes in that respect.

For instance, only a few years ago, we got a new code for age-appropriate design, which means that websites now need to take account of the age of the individuals actually accessing them. It's now known as the Children's Code. It came into effect last year and it's a major addition to our regulation. It was quite heavily resisted by the platforms and others when it came in, but a single colleague of David's and mine (supported by us) drove it through, greatly to her credit.

Michael Krigsman: We have two questions now, one on LinkedIn and one on Twitter, that relate to the same topic. That is the speed of government: the speed of change and government's ability to keep up. On Twitter, for example, someone says future wars are going to be cyber, and the government is just catching up. The technology is changing so rapidly that it's very difficult for the legal system to track it. How do we manage that aspect?

Lord Tim Clement-Jones: Funnily enough, government does think about that. Its first thought is about cybersecurity, about its own systems and, basically, its own data.

We've got a brand new National Cyber Security Centre, about a year or two old now. The truth is, particularly in view of Russian activities, we now have quite good cyber controls. I'm not sure that our risk management is fantastic but, operationally, we are pretty good at this.

For instance, things like the SolarWinds hack of last year have been looked at pretty carefully. We don't know what the outcome is, but it's been looked at pretty carefully by our National Cyber Security Centre.

Strangely enough, the criticism I have of government is that, if only they thought of our data in the way they think about their data, we'd all be in a much happier place, quite honestly.

Lord David Puttnam: I think that's true. Michael, I don't know whether this is absolutely true in the U.S. because it's such a vast country, but my experience of legislation is it can be moved very quickly when there's an incident. Now, I'll give you an example.

I was at the Department for Education at the time when a baby was allowed to die through a very unfortunate, catastrophic failure by different parts of government. The entire department ground to a halt for about two months while this was looked at and while the department tried to explain itself, and any amount of legislation was brought forward. Governments deal in crises, and this is going to be a series of crises.

The other thing governments don't like is judicial review. I think we're looking at an area here where judicial review—either by the platforms for a government decision or by civil society because of a government decision—is utterly inevitable. I actually think, longer-term, these big issues are going to be decided in the courts.

Advice for policymakers and business people

Michael Krigsman: As we finish up, can I ask you each for advice to several different groups? First is the advice that you have for governments and for policymakers.

Lord Tim Clement-Jones: Look seriously at societal harms. I think a duty of care that simply protects individual citizens is not enough. It is all about looking at the wider picture because, if you don't, then you're going to find it's too late and your own democracy is going to suffer.

I think you're right, Michael, in a sense that some politicians appear to have a conflict of interest on this. If you're in control, you don't think of what it's like to have the opposition or to be in opposition. Nevertheless, that's what they have to think about.

Lord David Puttnam: I was very impressed, indeed, tuning in to some of the judiciary subcommittee congressional hearings on the platforms. I thought that the chairman ... [indiscernible, 00:42:35] did extremely well.

There is a lot of expertise. You've got more expertise, actually, Michael, in your country than we have in ours. Listen to the experts, understand the ramifications and, for God's sake, politicians, it's in all their interests, irrespective of whether they're Republicans or Democrats, to get this right, because getting it wrong means you are inviting the possibility of a form of government that very, very, very few people in the United States wish to even contemplate.

Michael Krigsman: What about advice to businesspeople, to the platform owners, for example?

Lord David Puttnam: Well, we had an interesting spate, didn't we, where a lot of advertisers started to take issue with Facebook, and that kind of faded away. But I would have thought that, again, it's a question of regulatory oversight and businesses understanding.

How many businesses in the U.S. want to see democracy crumble? I was quite interested, immediately after the January 6th events, in the way businesses walked away, not so much from the Republican Party, but from Trump.

I just think we've got to begin to hold up a mirror to ourselves and also look carefully at what the ramifications of getting it wrong are. I don't think there's a single business in the U.S. (or if there are, there are very, very few) that wishes to go down that road. They're going to realize that that means they've got to act, not just react.

Lord Tim Clement-Jones: I think this is a board issue. This is the really important factor.

Looking at the other side (not the platform side, because I think they are only too well aware of what they need to do), if I'm on the other side and I'm, if you like, somebody who is using social media, then as a board member you have to understand the technology and you have to take the time to do that.

The advertising industry is really interesting here, as David said. They're developing all kinds of new technology solutions, like blockchain, to actually track where their advertising messages are going. If they're directed in the wrong way, they find out, and there's accountability down the blockchain, which is really smart in the true sense of the word.

It's using technology to understand technology, and I think you can't leave it to the chief information officer or the chief technology officer. As the CEO or the chair, you have to understand it.

Lord David Puttnam: Tim is 100% right. I've sat on a lot of boards in my life. If you really want to grab a board's attention – I'm not saying which part of the body you're going to grab – start looking at the register and then have a conversation about how adequate the directors' insurance is. It makes for a very lively discussion.

Lord Tim Clement-Jones: [Laughter]

Lord David Puttnam: I think this whole issue of personal responsibility, the things that insurance companies will and won't take on in terms of protecting companies and boards, that's where a lot of this could land, and very interestingly so.

Importance of digital education

Michael Krigsman: Let's finish up by any thoughts on the role of education and advice that you may have for educators in helping prepare our citizens to deal with these issues.

Lord Tim Clement-Jones: Funnily enough, I've just developed (with a group of people) a framework for ethical AI for use in education. We're going to be launching that in March.

The equivalent is needed in many ways because, of course, digital literacy and digital education are incredibly important. And that applies to parents and teachers too; this isn't just a younger-generation issue. It needs to go all the way through. I think we need to be much more proactive about the tools that are out there for parents and others, even main board directors.

You cannot spend enough time talking about the issues. That's why, when David mentioned Cambridge Analytica, suddenly everybody got interested. But it's a very rare example of people suddenly becoming sensitized to an issue that they previously didn't really think about.

Lord David Puttnam: There's a parallel, really, with climate change. These are our issues. If we're going to prepare our kids – I've got three grandchildren – if we're going to prepare them properly for the rest of their lives, we have an absolute obligation to explain to them what challenges their lives will face, what forms of society they're going to have to rally around, what sort of governance they should reasonably expect, and how they'll participate in all of that.

If they're left in ignorance, be it on climate change or, frankly, on all the issues we've been discussing this evening, we are making them incredibly vulnerable. We've lived very privileged lives, and I think that the lives of our grandchildren, unless we get this right for them and help them, will be very diminished.

I've used that word a lot recently. They will live diminished lives and they'll blame us, and they'll wonder why it happened.

Michael Krigsman: Certainly, one of the key themes that I've picked up from both of you during this conversation has been this idea of responsibility, individual responsibility for the public welfare.

Lord David Puttnam: Unquestionably. It's summed up in that phrase, duty of care. We have an absolutely overwhelming duty of care for future generations, and it applies as much to the digital environment as it does to the climate.

Lord Tim Clement-Jones: Absolutely. In a way, what we're now having to overturn is this whole idea that online was somehow completely different from offline, from the physical world. Well, some of us have been living in the online, remote world for the whole of last year, but why should standards be different in that online world? They shouldn't be. We should expect the same standards of behavior, and we should expect people to be accountable for them in the same way as they are in the offline world.

Michael Krigsman: Okay. Well, what a very interesting conversation. I would like to express my deep thanks to Lord Tim Clement-Jones and Lord David Puttnam for joining us today.

David, before we go, I just have to ask you. Behind you and around you are a bunch of photographs and awards that seem distant from your role in the House of Lords. Would you tell us a little bit more about your background very quickly?

Lord David Puttnam: Yes. I was a filmmaker for many years. That's an Emmy sitting behind me. The reason the Emmy is sitting there is that the shelf isn't deep enough to take it. But I've got my Oscar up there. I've got four or five Golden Globes, three or four BAFTAs, a David di Donatello, and a Palme d'Or from Cannes. I had a very, very happy, wonderfully happy 30 years in the movie industry, and I've had a wonderful 25 years working with Tim in the legislature, so I'm a lucky guy, really.

https://www.cxotalk.com/episode/digital-technology-trust-social-impact


House of Lords Member talks AI Ethics, Social Impact, and Governance

CXO Talk Jan 2021

What are the social, political, and government policy aspects of artificial intelligence? To learn more, we speak with Lord Tim Clement-Jones, Chairman of the House of Lords Select Committee on AI and advisor to the Council of Europe AI Committee.

What are the unique characteristics of artificial intelligence?

Michael Krigsman: Today, we're speaking about AI, public policy, and social impact with Lord Tim Clement-Jones, CBE. What are the attributes or characteristics of artificial intelligence that make it so important from a policy-making perspective?

Lord Tim Clement-Jones: I think the really key thing is (and I always say) AI has to be our servant, not our master. I think the reason that that is such an important concept is because AI potentially has an autonomy about it.

Brad Smith calls AI "software that learns from experience." Well, of course, if software learns from experience, it's effectively making things up as it goes along. It depends, obviously, on the original training data and so on, but it does mean that it can do things not quite of its own volition but certainly of its own motion, which therefore has implications for us all.

Where you place those AI applications or algorithms (call them what you like) is absolutely crucial, because if they're black boxes, so humans don't know what is happening, and they're placed in financial services, government decisions over sentencing, or a variety of other really sensitive areas, then, of course, we're all going to be poorer for it. Society will not benefit if we just have this range of autonomous black box solutions. That's a rather dystopian way of describing it, but it's certainly what we're trying to avoid.

Michael Krigsman: How is this different from existing technologies, data, and analytics that companies use every day to make decisions, where consumers don't have access to the logic and, in many cases, the data?

Lord Tim Clement-Jones: Well, of course, it may not be if those data analytics are carried out by artificial intelligence applications. There are algorithms that, in a sense, operate on data and come up with their own conclusions without human intervention. They have exactly the same characteristic.

The issue for me is this autonomy aspect of data analytics. If you've got actual humans in the loop, so to speak, then that's fine. We, as you know, have slightly tighter, well, considerably tighter, data protection in Europe as a framework for decision-making when you're using data. The aspects of consent and the use of sensitive data are largely covered. One has a kind of reassurance that there is, if you like, a regulatory framework.

But when it comes to automaticity, it is much more difficult because, at the moment, you don't necessarily have duties relating to the explainability of algorithms or the freedom from bias of algorithms, for instance, in terms of the data that's input or the decisions that are made. You don't necessarily have an overarching rule that says AI must be developed for human benefit and not, if you like, for human detriment.

There are a number of areas which are not covered by regulation, and yet there are high-risk areas that we really need to think about.

Algorithmic decision-making and risks

Michael Krigsman: You focus very heavily on this notion of algorithmic decision-making. Please elaborate on that, what you mean by that, and also the concerns that you have.

Lord Tim Clement-Jones: Well, it's really interesting because, actually, quite a lot of the examples that one is trying to avoid come from the States. For instance, parole decisions made using artificial intelligence, or live facial recognition technology using artificial intelligence.

Sometimes, you get biased decision-making of a discriminatory nature in racial terms. That was certainly true in Florida with the COMPAS parole system. It's one of the reasons why places like Oakland, Portland, and San Francisco have banned live facial recognition technology in their cities.

Those are the kinds of areas where you really do need a very clear idea of how you design these AI applications, what data you're putting in, how that data trains the algorithm, and then what the output is at the end of the day. It's about trying to get a really clear framework for this.

You can call it an ethical framework. Many people do. I call it simply a set of principles that you should basically put in place for the overall governance and design, and for the use cases in which you're going to deploy the AI application.

Michael Krigsman: What is the nature of the framework that you use, and what are the challenges associated with developing that kind of framework?

Lord Tim Clement-Jones: I think one of the most important aspects is that this needs to be cross-country. This needs to be international. My desire, at the end of the day, is to have a framework which, in a sense, assesses the risk.

I am not a great one for regulation. I don't really believe that you've got to regulate the hell out of AI. You've got to be quite forensic about this.

You've got to say to yourself, "What are the high-risk areas in operation?" It could be things like live facial recognition. It could be financial services. It could be certain quite specific areas where there are high risks of infringement of privacy, or of decisions being made in a biased way, which have a huge impact on you as an individual or, indeed, on society, because social media algorithms are certainly not free of issues to do with disinformation and misinformation.

Basically, it starts with an assessment of what the overall risk is, and then, depending on that level of risk, you say to yourself, "Okay, a voluntary code. Fine for certain things in terms of ethical principles applied."

But if the risk is a bit high, you say to yourself, "Well, actually, we need to be a bit more prescriptive." We need to say to companies and corporations, "Look, guys. You need to be much clearer about the standards you use." There are some very good international standard bodies, so you prescribe the kinds of standards, the design, an assessment of use case, audit, impact assessments, and so on.

There are certain other things where you say, "I'm sorry, but the risk of detriment, if you like, or damage to civil liberties," or whatever it may be, "is so high that, actually, what we have to have is regulation."

Then you have a framework. You say to yourself: you can only use, for instance, live facial recognition in this context, and you must design your application in this particular way.

I'm a great believer in a graduation, if you like, of regulation depending on the risk. To me, it seems that we're moving towards that internationally. I actually believe that the new administration in the States will move forward in that kind of way as well. It's the way of the world. Otherwise, we don't gain public trust.

Trust and confidence in AI policy

Michael Krigsman: The issue of trust is very important here. Would you elaborate on that for us?

Lord Tim Clement-Jones: There are cultural issues here. One of the examples that we used in our original House of Lords report was GM foods. There's a big gulf, as you know, between the approach to GM foods in the States and in Europe.

In Europe, we sort of overreacted and said, "Oh, no, no, no, no, no. We don't like this new technology. We're not going to have it," and so on and so forth. Well, it was handled extremely badly because it looked as though it was just a major U.S. corporation that wanted a monopoly over seed production, and it wasn't even possible for farmers to grow seed from seed, and so on.

In a sense, all the messaging was wrong. There was no overarching ethical approach to the use of GM foods, and so on. We're determined not to get that wrong this time.

The reason why GM foods didn't take off in Europe was because, basically, the public didn't have any trust. They believed, if you like, an awful lot of (frankly) the myths that were surrounding GM foods.

It wasn't all myth. They weren't convinced of the benefit. Nobody really explained the societal benefits of GM foods.

Whether it would have been different, I don't know. Whether those benefits would have been seen to outweigh some of the dangers that people foresaw, I don't know. Certainly, we did not want this kind of approach to take place with artificial intelligence.

Of course, artificial intelligence is a much broader technology. A lot of people say, "Oh, you shouldn't talk about artificial intelligence. Talk about machine learning or probabilistic learning," or whatever it may be. But AI is a very useful, overall description in my view.

Michael Krigsman: How do you balance the competing interests, for example, the genetically modified food example you were just speaking about, the interest of consumers, the interest of seed producers, and so forth?

Lord Tim Clement-Jones: I think it's really interesting because I think you have to start with the data. You could have a set of principles. You could say that app developers need to look at the public benefit and so on and so forth. But the real acid test is the data that you're going to use to train the AI, the algorithm, whatever you may describe it as.

That's the point where there is this really difficult issue about what data is legitimate to extract from individuals. What data should be publicly valued and not sold by individual companies or the state (or whatever)? It is a really difficult issue.

In the States, you've had that brilliantly written book, The Age of Surveillance Capitalism by Shoshana Zuboff. Now that raises some really important issues. Should an individual's behavioral data, not just ordinary personal data but their behavioral data, be extractable and usable and treated as part of a data set?

That's why there is so much more discussion now about, well, what value do we attribute to personal data? How do we curate personal data sets? Can we find a way of not exactly owning but, certainly, controlling (to a greater extent) the data that we impart, and is there some way that we can extract more value from that in societal terms?

I do think we have to look a bit more at this. Certainly, in the UK, we've been very keen on what we call data trusts or social data foundations: institutions that hold data, public data; for instance, data from our national health service. Obviously, you have a different health service in the States, but data held by a national health service could be held in a data trust and, therefore, people would see what the framework for governance was. This would actually be very reassuring in many ways, for people to see that their data was simply going to be used back in the health service or, if it was exploited by third parties, that that was again for the benefit of the national health service: vaccinations, diagnosis of rare diseases, or whatever it may be.

It's really about seeing the value of that data and not just seeing it as a commercial commodity that is taken away by a social media platform, for instance, and exploited without any real accountability. Arguing that terms and conditions do the job doesn't wash. I'm a lawyer, but I still don't believe that terms and conditions are adequate in those circumstances.

Decision-making about AI policy and governance

Michael Krigsman: We have a very interesting question from Arsalan Khan, who is a regular listener and contributor to CXOTalk. Thank you, Arsalan, always, for all of your great questions. His question is very insightful, and I think also relates to the business people who watch this show. He says, "How do you bring together the expertise (both in policymaking as well as in technology) so that you can make the right decisions as you're evaluating this set of options, choices, and so forth that you've been talking about?"

Lord Tim Clement-Jones: Well, there's no substitute for government coordination, it seems to me. The White House under President Obama had somebody who really coordinated quite a lot of this aspect.

There was, and has been, an AI specialist in the Trump White House as well. I don't think they were quite given the license to get out there and coordinate the effort that was taking place, but I'm sure, under the new administration, there will be somebody specifically charged with creating policy on AI in all its forms.

The States belongs to the Global Partnership on AI, along with Canada, France, the UK, and so on. And so, I think there is a general recognition that governments have a duty to pull all this together.

Of course, it's a big web. You've got all those academic institutions, powerful academic institutions, who are not only researching into AI but also delivering solutions in terms of ethics, risk assessments, and so on. Then you've got all the international institutions: OECD, Council of Europe, G20.

Then at the national level, in the UK for instance, we've got regulators of data. We have an advisory body that advises on AI, data, and innovation. We have an office for AI in government.

We have The Alan Turing Institute, which pulls together a lot of the research that is being done in our universities. Now, unless somebody is sitting there at the center and saying, "How do we pull all this together?" it becomes extremely incoherent.

We've just had a paper from our competition authority on algorithms and the way that they may create consumer detriment in certain circumstances where they're misleading. For instance, on price comparison or whatever it may be.

Now, that is very welcome. But unless we actually fold that into what we're trying to do across government and internationally, we're going to find ourselves with one set of rules here and another set of rules there. Actually, trading across borders is difficult enough as it is, and we've got all the Privacy Shield and data adequacy issues at this very moment. Well, if we start having issues about inspection of the guts of an algorithm before an export can take place, because we're not sure that it's conforming to our particular set of rules in our country, then I think that's going to be quite tricky.

I'm a big fan of elevating this and making sure that, right across the board, we've got a common approach. That's why I'm such a big fan of this risk-based approach: it's common sense, basically, and it isn't one-size-fits-all. It also means that, culturally, I think we can all get together on it.

Michael Krigsman: Is there a risk of not capturing the nuances because this is so complex and, therefore, creating regulation or even policy frameworks that are just too broad-brushed?

Lord Tim Clement-Jones: There is a danger of that but, frankly, I think, at the end of the day, whatever you say about this, there are going to be tools. I think regulation is going to happen at a sector level, probably.

I think that it's fair enough to be relatively broad-brushed across the board in terms of risk assessment and the general principles to be adopted in terms of design and so on. You've got people like the IEEE who are doing ethically aligned design standards and so on.

It's when it gets down to the sector level that you then get more specific. I don't think most of us would have too much objection to that. After all, sectors already tend to align.

For instance, the rules relating to financial services in the States (for instance in mergers, takeovers, and such) aren't very different to those in the UK, but there is a sort of competitive drive towards aligning your regulation and your regulatory rules, so to speak. I'd be quite optimistic that, actually, if we saw that (or if you saw that) there was one type of regulation in a particular sector, you'd go for it.

Automated vehicles, actually, is a very good example where regulation can actually be a positive driver of growth because you've got a set of standards that everybody can buy into and, therefore, there's business certainty.

How to balance competing interests in AI policy

Michael Krigsman: Arsalan Khan comes back with another question, a very interesting point, talking about the balancing of competing goals and interests. If you force open those algorithmic black boxes then do you run the risk of infringing the intellectual property of the businesses that are doing whatever it is that they're doing?

Lord Tim Clement-Jones: Regulators are very used to dealing with these sorts of issues of inspection and audit. I think that it would be perfectly fine for them to do that, and they wouldn't be infringing intellectual property because they wouldn't be exploiting it. They'd be inspecting but not exploiting. I think, at the end of the day, that's fine.

Also, don't forget, we've now got this great concept of sandboxing, and the regulators are much more flexible than they used to be.

Michael Krigsman: How do you balance the interests of corporations against the public good, especially when it comes to AI? Maybe give us some specific examples.

Lord Tim Clement-Jones: For instance, we're seeing that in the online situation with social media. We've got this big debate happening, for instance, on whether or not it's legitimate for Twitter to delist somebody in terms of their account with them. No doubt, the same is true with Facebook and so on.

Now, maybe it isn't fair for a social media platform to have to make those decisions on its own but, because of all the freedom of speech issues, I'd much prefer to see a reasonably clear set of principles and regulations about when social media platforms actually ought to delist somebody.

We're developing that in the UK in terms of Online Harms, so that social media platforms will have certain duties of care towards certain parts of the community, particularly young people and the vulnerable. They will have a duty to act, whether by delisting, taking off content, or what has been called detoxing the algorithm. We're going to try and get a set of principles where people are protected and social media platforms have a duty, but it isn't a blanket approach, and it doesn't mean that social media have to make freedom of speech decisions in quite the same way.

Inevitably, public policy is a balance and the big problem is ignorance. It's ignorance on the part of the social media platforms as to why we would want to regulate them and it's ignorance on the part of politicians who actually don't understand the niceties of all of this when they're trying to regulate.

As you know, some of us are quite dedicated to joining it all up so people really do understand why we're doing these things and getting the right solutions. Getting the right solution in this online area is really tricky.

Of course, in the middle of it, and this is why it's relevant to AI, is the algorithm: the pushing of messages in particular directions, autonomously. We're back to this autonomy issue, Michael.

Sometimes, you need to say, "I'm sorry, but you need to be a lot more transparent about how this is working. It shouldn't be working in that way, and you're going to have to change it."

Now, I know that's a big, big change of culture in this area, but it's happening and I think that with the new administration, Congress, and so on, I think we'll all be on the same page very shortly.

Michael Krigsman: I have to ask you about the concentration of power that's taken place inside social media companies. Social media companies, many of them born in San Francisco, technology central, and so the culture of technology, historically, has been, "Well, you know, we create tools that are beneficial for everyone, and leave us alone," essentially.

Lord Tim Clement-Jones: Well, that's exactly where I'm coming from; that culture has to change now. And there is an acceptance of that: I think that if you talk to the senior people in the social media companies and the big platforms, they will now accept that the responsibility of having to make decisions about delisting people or about what content should be taken down is not something they feel very comfortable with, and they're getting quite a lot of heat as a result of it. Therefore, I think they will increasingly welcome regulation.

Now, obviously, I'm not predicating what kind of regulation is appropriate outside the UK or what would be accepted but, certainly, that is the way it's worked with us, and there's a huge consensus across parties that we need to have a framework for social media operations. It isn't just Section 230 which, as you know, more or less allows anything to happen so that, in that sense, you don't take responsibility as a platform. Not that we've ever accepted that in full in Europe or, certainly, in the UK.

Now, we think that it's time for social media platforms to take responsibility but recognizing the benefits. Good heavens. I tweet like the next person. I'm on LinkedIn. I'm no longer on Facebook. I took Jaron Lanier's advice.

There are platforms that are out there which are the Wild West. We've heard about Parler as well. We need to pull it together pretty quickly, actually.

Digital ethics: The House of Lords AI Report

Michael Krigsman: We have some questions from Twitter. Let's just start going through them. I love taking questions from Twitter. They tend to be great questions.

You created the House of Lords AI Report. Were there any outcomes that resulted from that? What did those outcomes look like?

Lord Tim Clement-Jones: Somebody asked me and said, "What was the least expected outcome?" I expected the government to listen to what we had to say and, by and large, they did.

They have moved only to a limited extent in terms of coordination, and they haven't moved nearly fast enough on skills.

They haven't moved fast enough on education and digital understanding, although, we've got a new kind of media literacy strategy coming down the track in the UK. Some of that is due to the pandemic but, actually, it's a question of energy and so on.

They've certainly done well in terms of the climate for research investment and the nearer-to-market kind of encouragement they've given. So, I would score their card at about six out of ten. They've done well there.

They sort of said, "Yes, we accept your ethical AI, your trustworthy AI message," which was at the core of what we were trying to say. They also accepted the diversity message. In fact, if I was going to say where they've performed best in terms of taking it on board, it's diversity in the AI workforce, which I think is the biggest plus.

The really big plus has been the way the private sector in the UK has taken on board the messages about trustworthy AI, ethical AI. techUK, our overarching trade body in the UK, now has a regular annual conference about ethics and AI, which is fantastic. They're genuinely engaged.

In a sense, the culture of the app developer, the AI app developer, really encompasses ethics now. We don't have a kind of Hippocratic oath for developers but, certainly, the expectation is that developers are much more plugged into the principles by which they are designing artificial intelligence. I think that will continue to grow.

The education role that techUK has played with its members has been fantastic, and there is now a general expectation across the board from our regulators. We've reinforced each other, I think, in that area, which has been very good because, let's face it, the people who are going to develop the apps are the private sector.

The public sector, by and large, procures these things. It now has sets of ethical principles in place for procurement: World Economic Forum principles, ethical data-sharing frameworks, and so on.

Generally, I think we've seen a fair bit of progress. But we did point out in our most recent report where they run the risk of being complacent, and we warned against that, basically.

Michael Krigsman: We have a really interesting question from Wayne Anderson. Wayne makes the point that it's difficult to define digital ethics at scale because of the competing interests across society that you've been describing. He said, "Who owns this decision-making, ultimately? Is it the government? Is it the people? How does it manifest? And who decides what AI is allowed to do?"

Lord Tim Clement-Jones: That's exactly my risk-based approach. It depends on what the application is. You do not want a Big Brother-type government approach to every application of AI. That would be quite stupid. They couldn't cope anyway, and it would just restrict innovation.

What you have to do—and this is back to my risk assessment approach—is say, "What are the areas where there's potential for detriment to citizens, to consumers, to society? What are those areas, what do we do about them, and what are the highest risks?"

I think that is a proportionate way of looking at dealing with AI. That is the way forward for me, and I think it's something we can agree on, basically, because risk is something that we understand. Now, we don't always get the language right, but that's something I think we can agree on.

Michael Krigsman: Wayne Anderson follows up with another very interesting question. He says, "When you talk about machine learning and statistical models, it's not soundbite friendly. To what degree are ignorance of the problem and of what's really going on, and the media, inflaming the challenges here?"

Lord Tim Clement-Jones: The narrative of AI is one of the most difficult and the biggest barriers to understanding: public understanding, understanding by developers, and so on.

Unfortunately, we're victims in the West of a sort of 3,000-year-old narrative. Homer wrote about robots. Jason and the Argonauts had to escape from a robot walking around the Isle of Crete. That was 3,000 years ago.

It's been in our myths. We've had Frankenstein, the Prague Golem, you name it. We are frightened, societally and existentially, by the "other," by alien creatures.

We think of AI as embedded in physical form, in robots, and this is the trouble. We've seen headlines about terminator robots.

For instance, when we launched our House of Lords report, we had headlines saying the House of Lords insists there must be an ethical code to prevent terminator robots. You can't get away from the narrative, so you have to double up, and keep doubling up, on public trust in terms of reassurance about the principles that are applied, about the benefits of AI applications, and so on.

This is why I raised the GM foods point because—let's face it—without much narrative about GM foods, they were still called Frankenfoods. They didn't have thousands of years of history about aliens, but we do with AI, so the job is bigger.

Impact of AI on society and employment

Michael Krigsman: Any conversation around AI ethics must include a discussion of the economic impacts of AI on society and the displacement, worker displacement, and economic displacements that are taking place. How do we bring that into the mix?

Lord Tim Clement-Jones: There are different forecasts and we have to accept the fact that some people are very pessimistic about the impact on the workforce of artificial intelligence and others who are much more sanguine about it. But there are choices to be made.

We have been here before. If you look at 5th Avenue in 1903, what do you see? You see all horses. If you look at 5th Avenue in 1913, you see all motor cars; I think you see one horse in the photograph.

This is something that society can adjust to but you have to get it right in terms of reskilling. One of the big problems is that we're not moving fast enough.

Not only is it about education in schools—which is not just scientific and technological education—it's about how we use AI creatively, how we use it to augment what we do, to add to what we do, not just simply substitute for what we do. There are creative ways of using AI that we need to learn about.

Then, of course, we have to recognize that we have to keep reinventing ourselves as adults. We can't just expect to have the same job for 30 years now. We have to keep adjusting to the technology as it comes along.

To do that, you can't just do it by yourself. You have to have support from government, like a lifelong learning account, as if you were getting a university loan or grant. You've got to have employers who actually make the effort to make sure that their workers' skills don't simply become obsolete. You've got to be on the case for that sort of thing. We don't want a kind of digital rustbelt in all of this.

We've got to be on the case and it's a mixture of educators, employers, government, and individuals, of course. Individuals have to have the understanding to know that they can't just simply take a job and be there forever.

Michael Krigsman: Again, it seems like there's this balancing that's taking place. For example, in the role of government in helping ease this set of economic transitions but, at the same time, recognizing that there will be pain and that individuals also have to take responsibility. Do I have that right, more or less?

Lord Tim Clement-Jones: Absolutely. I'm not a great fan of the government doing everything for us because they don't always know what they need to do. To expect government to simply solve all the problems with a wave of the financial wand, I think, is unreasonable.

But I do think this is a collaboration that needs to take place. We need to get our education establishment—particularly universities and further education in terms of pre-university colleges and, if you like, those developing different kinds of more practical skills—involved so that we actually have an idea about the kinds of skills we're going to need in the future. We need to continually be looking forward to that and adjusting our training and our education to that.

At the moment, I just don't feel we're moving nearly fast enough. We're going to wake up with a dreadful hangover (if we're not careful): people without the right skills, jobs that can't be filled and, at the same time, people who can't get jobs.

This is a real issue. I'm not one of the great pessimists. I just think that, at any rate, we have a big challenge.

Michael Krigsman: We also need to talk about COVID-19. Where are you in the UK in dealing with this issue? As somebody in the House of Lords, what is your role in helping manage this?

Lord Tim Clement-Jones: My job is to push and pull and kick and shove and try and move government on, but also be a bit of a hinge between the private sector, academia, and so on. We've got quite a community now of people who are really interested in artificial intelligence, the implications, how we further it to public benefit, and so on. I want to make sure that that community is retained and that government ministers actually listen to that community and are a part of that community.

Now, you know, I get frustrated sometimes because government doesn't move as fast as we all want it to. On algorithmic decision-making in government, our government hasn't yet woken up to the need for a fairly clear governance and compliance framework, but it'll come along. I'd love it if they were a bit faster, but I've still got enough energy to keep pushing them as fast as I can go.

Michael Krigsman: Any thoughts on what the post-pandemic work world will look like?

Lord Tim Clement-Jones: [Loud exhale] I mean, this is the existential fret because of the combination of COVID and the acceleration of remote working, particularly where lightbulbs have gone on in a lot of boardrooms about what is now possible with technology that wasn't there before. If we're not careful, and if people don't make the right decisions in those boardrooms, we're going to find substitution of people by technology taking place to quite a high degree, without thinking about how the best combination of technology and humans works. It's just going to be seen as, "Well, we can save costs and so on," without thinking about the human implications.

If I were going to issue any kind of gypsy's warning, it's that we're going to find ourselves with a double whammy after the pandemic because new technology is being accelerated. All those forecasts are going to come true quicker than we thought if we're not careful.

Michael Krigsman: Any final closing thoughts as we finish up?

Lord Tim Clement-Jones: I use the word "community" a fair bit, but what I really like about the world of AI (in all its forms) whatever we're interested in—skills, ethics, regulation, risk, development, benefit, and so on—is the fact that we're a tribe of people who like discussing these things, who want to see results, and it's international. I really do believe that the kind of conversation you and I have had today, Michael, is really important in all of this. We've got international institutions that are sharing all this.

The worst thing would be if we had a race to the bottom with AI and its principles. "Okay, no, we won't have that because that's going to damage our competitiveness," or something. I think I would want to see us collaborate very heavily, and they're used to that in academia. We've got to make sure that happens in every other sphere.

Michael Krigsman: All right. Well, a very fast-moving conversation. I want to say thank you to Lord Tim Clement-Jones, CBE, for taking time to be with us today. Thank you for coming back.

Lord Tim Clement-Jones: Pleasure. Absolute pleasure, Michael.

https://www.cxotalk.com/episode/house-lords-member-talks-ai-ethics-social-impact-governance


UK at risk without a national data strategy

Leading peers on the House of Lords Select Committee on Artificial Intelligence worry that the UK will not benefit from or control AI as the national data strategy is delayed.

By Mark Chillingworth

IDG Connect | MAR 21, 2021 11:30 PM PDT

The UK has no national data strategy, which places the businesses and citizens of the European country at risk, according to the chair of the House of Lords Select Committee on Artificial Intelligence (AI). A national data strategy was promised in the autumn of 2020, but the chair of the AI Select Committee says a government consultation programme that closed in December 2020 was too shallow to provide the UK with the framework needed to derive economic, societal and innovative benefit. 

“The National Data Strategy has been delayed and will report in small parts, which will not encourage debate,” says Lord William Wallace, a Cabinet Office spokesperson in the House of Lords, the second chamber of British politics. Lord Wallace and his fellow Liberal Democrat peer Lord Tim Clement-Jones are at the forefront of a campaign within the corridors of British political power to get the National Data Strategy debated properly by those it will impact - UK businesses and citizens - and then put into practice under the leadership of a UK government Chief Data Officer.

“The questions in the consultation were closed in nature and very much suggested the government already had a view and did not want to encourage debate,” Lord Wallace adds. The current government, which has been in place since 2010, has been incredibly vocal over the last decade about the importance of data to the UK. “They talk of nothing else and set up bodies like NHSX, and Dominic Cummings was a big fan of data,” Wallace says of the former advisor to Vote Leave, the Conservative Party and Prime Minister Boris Johnson. Lord Tim Clement-Jones worries that the attitudes of Cummings - who was forced out of the government in late 2020 - have coloured the government’s approach to a national data strategy. “He treated data as a commodity, and if data is in the hands of somebody that sees it as a commodity, it will not be protected, and that is not good for society. Palantir has a very similar view; the data is not about citizen empowerment,” Lord Clement-Jones says of the US data firm that was working on a UK Covid-19 data store.

“A small minority of politicians are following this issue, and the National Data Strategy is under the remit of the Department for Culture Media and Sport (DCMS), which is not the most powerful department in the Cabinet,” Lord Wallace says. 

In December, the House of Lords Select Committee on Artificial Intelligence published a report, AI in the UK: No Room for Complacency, which called for the establishment of a Cabinet Committee “to commission and approve a five-year strategy for AI…ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.”

Lord Clement-Jones says a Cabinet-level committee is vital due to the ad hoc status of the committee he chairs. In addition, the rate of AI growth requires the government to pay close attention to the detail and impact of AI. As the report revealed: “in 2015, the UK saw £245 million invested in AI. By 2018, this had increased to over £760 million. In 2019 this was £1.3 billion...It is being used to help tackle the COVID-19 pandemic, but is also being used to underpin facial recognition technology, deep fakes, and other ethically challenging uses.”

“One of the big issues for us is, where do you draw the line for public usage? AI raises lots of issues, and as a select committee, we are navigating the new world of converging technologies such as the Internet of Things, cloud computing and the issue of sovereignty. And we have seen in the last few months that this government will subordinate all sorts of issues to sovereignty,” Lord Clement-Jones says, adding that, as a result of the sovereignty debate, businesses on both sides of the Channel have lost vital mutual benefits.

“You have to look at these issues incredibly carefully. If people are too cavalier about things, like the Home Office has been over work permits, then it's very concerning...Take the recent trade deal with Japan, it is not at all clear that UK health data is part of this deal, and the government is walking blindly into this stuff,” Lord Clement-Jones says.

Data adequacy between the UK and Europe ends in June 2021 and a number of CIOs report concerns about the loss of existing data standards and protocols with the UK’s largest trading partner. “Relationships between government and business are very poor,” the Lord adds.

Despite the attitude of “F**k business” from British Prime Minister Boris Johnson, Lord William Wallace says there is a vibrant debate about data and ethics amongst the UK business and technology community, which has to be harnessed because, he says, data is not debated enough in politics or the mainstream media. “We only hear the lurid headlines about Cambridge Analytica and never the benefits this technology offers.”

Data did, momentarily, become mainstream during the worst periods of the pandemic, with local government and health agencies revealing that they were not being given full access to Covid-19 data by the central government. “The over-centralisation is very much part of the problem, we have not used public health authorities effectively, for example,” Lord Wallace says. He adds that how local and national governments collect and release data to one another needs to be discussed and addressed. “We have some really powerful combined authorities in the UK now, and their data is really granular,” he says, adding that, now that GPs and local health bodies are in charge of the UK Covid vaccination programme, successful results are being delivered. Centralisation of the initial pandemic response in the UK has led to the highest death toll in Europe and one of the highest mortality rates in the world.

Global standing

As the UK exited the European Union, there was a narrative from Boris Johnson that the UK’s trading future would be closely aligned with the USA, but with Johnson’s close ally Donald Trump losing the US presidential election in 2020, the two Lords wonder if Johnson can be so assured, especially when it comes to data, and they worry about the impact on British business. “The government don’t stop to look at where data flows,” Lord Clement-Jones says of the poor business relationship leading to a poor understanding. On the USA, they believe the new Biden administration will have to move towards greater data protection. On the flip side of this, Lord Wallace points out that the government has been championing the UK’s role in the Five Eyes security services pact, but it is not clear if the US security services are able to carry out mass data collection in the UK from the shared intelligence centre at Menwith Hill, claiming that there is no written agreement between the two nations.

It is for this reason the two Lords believe it is vital that the UK engages in a national debate about data’s benefits and public concerns. “The public are most scared about health data as it is the one they are most aware of, yet the debate about the government’s collection of data is absent from public debate,” Lord Wallace says. Lord Clement-Jones adds that he is concerned that there is a danger of public distrust growing. “So now it is about how do we create a debate so that we create a circular flow of data that benefits society and involves important and respected organisations like the Ada Lovelace Institute, Big Brother Watch and the Open Data Institute?”

“The UK remains an attractive place to learn, develop, and deploy AI. It has a strong legal system, coupled with world-leading academic institutions, and industry ready and willing to take advantage of the opportunities presented by AI,” concludes the Lords’ report AI in the UK: No Room for Complacency.

https://www.idgconnect.com/article/3611769/uk-at-risk-without-a-national-data-strategy.html


Lord Clement-Jones on protecting and valuing our healthcare data December 2020

Future Care Capital Guest Blog December 2020

https://futurecarecapital.org.uk/latest/lord-clement-jones/

With the EU/UK negotiations on a knife edge, the recent conclusion of a UK/Japan trade agreement, consultation on a National Data Strategy and the current passage of a Trade Bill through parliament, data issues are front and centre of policy making.

NHS data in particular is, of course, a precious commodity, especially given the many transactions between technology, telecoms and pharma companies that concern it. EY, in a recent report, estimated that the value of NHS data could be around £10 billion a year in the benefit delivered.

The Department for Health and Social Care is preparing to publish its National Health and Care Data Strategy in the New Year, in which it is expected to prioritise the “Safe, effective and ethical use of data-driven technologies, such as Artificial Intelligence, to deliver fairer health outcomes”. Health professionals have strongly argued that free trade deals risk compromising the safe storage and processing of NHS data.

The objective must be to ensure that it is the NHS, and not US big tech companies and drug giants, that reaps the benefit of all this data. Harnessing the value of healthcare data must be allied with ensuring that adequate protections are put in place in trade agreements if that value is not to be given or traded away.

There is also the need for data adequacy to ensure that personal data transfers to third countries outside the EU are protected, in line with the principles of the GDPR. Watering down the UK’s data protection legislation will only reduce the chances of receiving an adequacy decision.


There is also a concern that the proposed National Data Strategy will lead to the weakening of data protection legislation, just as it becomes ever more necessary for securing citizens’ rights. There should, however, be no conflict between good data governance on the one hand and, on the other, economic growth and better government through effective use of data.

The section of the Final Impact Assessment of the Comprehensive Economic Partnership Agreement between the UK and Japan, which deals with Digital trade provisions, says that the agreement “contains commitments to uphold world-leading standards of protection for individuals’ personal data, in line with the UK’s Data Protection Act 2018, when data is being transferred across borders. This ensures that both consumer and business data can flow across borders in a safe and secure manner.”

But the agreement contains Article 8.3(a), which appears to provide a general exception for data flows where this is “necessary to protect public security or public morals or to maintain public order or … to protect human, animal or plant life or health”. So the question has been raised whether this will override UK data protection law and give access to source code and algorithms.

To date there have been shortcomings in the sharing of data between various parts of the health service, care sector and civil service. The process of developing the COVID-19 app has not improved public trust in the Government’s approach to data use.

There is also a danger that the UK will fall behind Europe and the rest of the world unless it takes back control of its data and begins to invest in its own cloud capabilities.

Specifically, we need to ensure genuine sovereignty of NHS data and that it is monetized in a safe way focused on benefitting the NHS and our citizens.

With a new National Data Strategy in the offing there is now the opportunity for the government to maximize the opportunities afforded through the collection of data and position the UK as a leader in data capability and data protection. We can do this and restore credibility and trust through:

  • Guaranteeing greater transparency over how patient data is handled, where it is stored, with whom it is shared and what it is being used for, especially through vehicles such as data trusts and social data foundations
  • Appropriate and sufficient regulation that strikes the right balance between credibility, trust, ethics and innovation
  • Ensuring service providers that handle patient data operate within a tight ethical framework
  • Ensuring that the UK’s data protection regulation isn’t watered down as a consequence of Brexit or through trade agreements
  • Making the UK the safest place in the world to process and store data. In delivering this last objective there is a real opportunity for the government to lead by example, not just for the UK but for the rest of the world, by developing its own sovereign data capability.

Retention of control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator. That is why, as part of the new trade legislation being put in place, clear safeguards are needed to ensure that in trade deals our publicly held data is safe from exploitation, except as determined by our own government’s democratically taken decisions.


Lord Clement-Jones on Trustworthy Trade and Healthcare Data April 2021

Future Care Capital Guest Blog April 2021

I’m an enthusiast for the adoption of new technology in healthcare, but it is concerning when a body such as Axrem, which represents a number of health tech companies, has said that while there is much interest in pilots and proof-of-concept projects, the broad adoption of AI is still problematic for many providers, for reasons that include the fact that “some early healthcare AI projects have failed to manage patient data effectively, leading to scepticism and concern among professionals and the public.”

I share this concern, especially when we know that some big tech and big pharma companies seem to have a special relationship with the DHSC (Department for Health and Social Care), and in the light of the fact that one of the government’s 10 new priorities is:

“Championing free and fair digital trade: As an independent nation with a thriving digital economy, the UK will lead the way in a new age of digital trade. We will ensure our trade deals include cutting-edge digital provisions, as we did with Japan, and forge new digital partnerships and investment opportunities across the globe”

The question is what guarantee do we have that our health data will be used in an ethical manner, assigned its true value and used for the benefit of UK healthcare?

Back in April 2018, in our House of Lords AI Select Committee report ‘AI in the UK: Ready, Willing and Able?’, we identified the issue:

  1. The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped.

This received the bland government response:

“We will continue to work with ICO, NDG, regulatory bodies, the wider NHS and partners to ensure that appropriate regulatory frameworks, codes of conduct and guidance are available.”

Since then, of course, we have had a whole series of documents designed to reassure us about NHS data governance.

But all lack assurance on the mechanisms for oversight and compliance.

Then in July last year the CDEI (Centre for Data Ethics and Innovation) published “Addressing trust in public sector data use”, which gives the game away. It said:

“Efforts to address the issue of public trust directly will have only limited success if they rely on the well-trodden path of developing high-level governance principles and extolling the benefits of successful initiatives.

“While principles and promotion of the societal benefits are necessary, a trusted and trustworthy approach needs to be built on stronger foundations. Indeed, even in terms of communication there is a wider challenge around reflecting public acceptability and highlighting the potential value of data sharing in specific contexts.”

So the key question is, what is actually happening in practice?

We debated this during the passage of both the Trade Bill and the Medicines and Medical Devices Bill and the results were not reassuring. In both bills we tried to safeguard state control of policy-making and the use of publicly funded health and care data as a significant national asset.

As regards the Japan/UK Trade Agreement, for example, the Government Minister, when pressed at Report Stage, said it “removes unjustified barriers to data flows to ensure UK companies can access the Japanese market and provide digital services. It does this by limiting the ability for governments to put in place unjustified rules that prevent data from flowing and create barriers to trade.”

But as Lord Freyberg rightly said at the time, there is widespread recognition that the NHS uniquely controls nationwide longitudinal healthcare data, which has the potential to generate clinical, social and economic development as well as commercial value. He argued that the Government should take steps to protect and harness the value of that data and, in the context of the Trade Bill, ensure that the public can be satisfied that that value will be safeguarded and, where appropriate, ring-fenced and reinvested in the UK’s health and care system.

In a Medicines Bill debate in January, Lord Bethell employed an extraordinarily circular argument:

“It is important to highlight that we could only disclose information under this power where disclosure is required in order to give effect to an international agreement or arrangement concerning the regulation of human medicines, medical devices or veterinary medicines. In that regard, the clause already allows disclosure only for a particular purpose. As international co-operation in this area is important and a good, even necessary, thing, such agreements or arrangements would be in the public interest by default.”

So it is clear we still do not have adequate provisions regarding the international exploitation of health data, the value of which, according to a report by EY, could be around £10 billion a year in the benefit delivered.

We were promised the arrival of a National Health and Care Data Strategy last autumn. In the meantime, trade agreements are made, medicines bills are passed, and we have little transparency about what is happening as regards NHS data, especially in terms of contracts with companies like Palantir and Amazon.

The Government is seeking to champion the free flow of data almost as an ideology. This is clear from the replies we received during the passage of the Trade Bill and the Medicines and Medical Devices Bill, and indeed from a recent statement by John Whittingdale, the Minister for Media and Data. He talks about the:

“…UK’s new, bold approach to international data transfers”,

“Our international strategy will also explore ways in which we can use data as a strategic asset in the global arena and improve data sharing and innovation between our international partners.”

and finally…

“Our objective is for personal data to flow as freely and as safely as possible around the world, while maintaining high standards of data protection.”

What do I prescribe?

At the time when these issues were being debated, I received an excellent briefing from Future Care Capital which proposed that “Any proceeds from data collaborations that the Government agrees to, integral to any ‘replacement’ or ‘new’ trade deals, should be ring-fenced for reinvestment in the health and care system, pursuant with FCC’s long-standing call to establish a Sovereign Health Fund.”

This is an extremely attractive concept. Retaining control over our publicly generated data, particularly health data, for planning, research and innovation is vital if the UK is to maintain its position as a leading life science economy and innovator.

Furthermore, with a new National Data Strategy in the offing there is now the opportunity for the government to maximize the opportunities afforded through the collection of data and position the UK as a leader in data capability and data protection.

We can do this and restore credibility and trust through guaranteeing greater transparency over how patient data is handled, where it is stored, with whom it is shared and what it is being used for, especially through vehicles such as data trusts and social data foundations.

As the Understanding Patient Data and Ada Lovelace Institute report ‘Foundations of Fairness’, published in March 2020, said:

“Public accountability, good governance and transparency are critical to maintain public confidence. People care about NHS data and should be able to find out how it is used. Decisions about third party access to NHS data should go through a transparent process and be subject to external oversight.”

This needs to go together with ensuring:

  • that appropriate and sufficient regulation strikes the right balance between credibility, trust, ethics and innovation;
  • that service providers that handle patient data operate within a tight ethical framework;
  • that the UK’s data protection regulation is not watered down as a consequence of Brexit or through trade agreements; and
  • that the UK develops its own sovereign data capability to process and store data.

To conclude

As the report “NHS Data: Maximising its impact on the health and wealth of the United Kingdom”, published last February by Imperial College’s Institute of Health Innovation, said:

“Proving that NHS and other health data are being used to benefit the wider public is critical to retaining trust in this endeavour.”

At the moment that trust is being lost.

Lord Clement-Jones was made CBE for political services in 1988 and a life peer in 1998. He is the Liberal Democrat House of Lords spokesperson for Digital (2017-), previously spokesperson on the Creative Industries (2015-17). He is the former Chair of the House of Lords Select Committee on Artificial Intelligence which sat from 2017 to 2018 and Co-Chairs the All-Party Parliamentary Group on AI. Tim is a founding member of the OECD Parliamentary Group on AI and a member of the Council of Europe’s Ad-hoc Committee on AI (CAHAI). He is a former member of the House of Lords Select Committees on Communications and the Built Environment. Currently, he is a member of the House of Lords Select Committee on Risk Assessment and Risk Planning. He is a Consultant of global law firm DLA Piper where previous positions held included London Managing Partner (2011-16), Head of UK Government Affairs, Chairman of its China and Middle East Desks, International Business Relations Partner and Co-Chairman of Global Government Relations. He is Chair of Ombudsman Services Limited, the not for profit, independent ombudsman service providing dispute resolution for the communications, energy, property and copyright licensing industries. He is Chair of Council of Queen Mary University of London and Chairs the Advisory Council of the Institute for Ethical AI in Education, led by Sir Anthony Seldon. He is a Senior Fellow of the Atlantic Council’s GeoTech Center which focusses on technology, altruism, geopolitics and competition. He is President of Ambitious About Autism, an autism education charity and school.

https://futurecarecapital.org.uk/latest/guest-blog-lord-clement-jones/