New Surveillance Code Incompatible with Human Rights

The Government recently introduced a revised Surveillance Camera Code of Practice which it claims makes the police's use of live facial recognition compliant with the Bridges case. This is my speech on the regret motion I tabled in response, with very helpful support from Liberty.

That this House regrets the Surveillance Camera Code of Practice because (1) it does not constitute a legitimate legal or ethical framework for the police’s use of facial recognition technology, and (2) it is incompatible with human rights requirements surrounding such technology.


Artificial Intelligence and Intellectual Property: incentivise human innovation and creation

Christian Gordon-Pullar and I recently responded to the Government's Consultation Paper on Artificial Intelligence and Intellectual Property: Copyright and Patents.

This is what we said:

As Artificial Intelligence (AI) becomes embedded in people’s lives, the United Kingdom (UK) is at a pivotal inflection point. The UK’s National AI Strategy rightly recognises AI as the ‘fastest growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life’ and estimates that AI could deliver a 10% increase in UK GDP in 2030.

The UK is well positioned to become, over time, a world leader in AI: a genuine research and innovation powerhouse, a hub for global talent and a progressive regulatory and business environment. Achieving this will involve attracting, retaining and incentivising business to create, protect and locate investment efforts in the UK. The UK can build momentum from a position of strength in AI research, enterprise and ethical regulation, and, with its recent history of support for AI, it stands among the best in the world. To attract talent, incentivise investment in AI-powered or AI-focused innovation, influence global markets and shape global governance, the nature of the UK's Intellectual Property regime relating to AI will be crucial.

Specifically in relation to the three headline areas of focus in the Consultation Paper:

 

1. Copyright: Computer-Generated Works

The UK is one of only a handful of countries to protect works generated by a computer where there is no human creator. Under the Copyright, Designs and Patents Act 1988 (CDPA), the “author” of a “computer-generated work” (CGW) is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. Protection lasts for 50 years from the date the work is made.

In the same way, the owner of a literary work and of the copyright subsisting in it, if it were original, would be, alternatively:

  a) the operator of an AI system (aligning its inputs and selecting its datasets and data fields); or
  b) their employer, if the operator is an employee; or
  c) a third party, if the operator has a contract assigning such rights outside of an employment context.

To be original, a work must be an author’s or artist’s own intellectual creation, reflecting their personality (see the decisions of the EU Court of Justice in Infopaq, C-5/08, and Painer, C-145/10).  

At the other end of the scale, a human who simply provides training data to an AI system and presses “analyse” is unlikely to be considered the author of the resulting work.

In this way we believe that the existing copyright legislative framework under the CDPA adequately addresses the current needs of AI developers. New entrants and disruptors can, in our opinion, work within the existing framework, which adequately caters for existing and foreseeable future needs.

Indeed, realistic hypothetical future scenarios may well involve an AI system having access to content from global providers and creating derivative content (whether under licence or not), doing so at great speed with little or no investment or “sweat of the brow”. It can therefore be argued that the level of protection should in fact be reduced, so that it is proportionate to the time, effort and investment involved.

Further, we would also urge that copyright law be clarified to ensure that it is the operator of the AI system (or his or her employer) – that is, the person who guides the AI system to apply certain data or parameters and shapes the outcome – who is the copyright owner, and not the owner of the AI system.

One can see a future scenario in which “AI-as-a-Service” is offered, whereby a content user or hirer of the AI system is allowed to apply their own rules, parameters and data/inputs to a problem whilst ‘hiring’ or using the AI system as a service (just as SaaS exists today). The operator of the AI system (not the owner of the AI system) should in that case be the first owner of the copyright in the resulting work (subject to contractual rights that may be transferred, licensed or otherwise assigned thereafter).

Ranking Options in order:

  1. We would therefore urge the IPO to choose Option 2: a lesser term of copyright protection, e.g. 5-15 years, should apply to AI-generated copyright works (e.g. music, art, etc.) which, as described above, require little investment or “sweat of the brow”.
  2. Failing that, we would urge the IPO to choose Option 0: make no legal change.
  3. Option 1, removing protection, is not a viable or desirable option in our opinion.

2. Copyright: Text and Data Mining

The Government rightly believe that there is a need to promote and further enable AI development. This must, however, be balanced with a commensurate and proportionate recognition of the critical importance and value of data as raw material.

AI developers rely on high-quality data to develop reliable and innovative AI-driven inventions and applications. Licensing regimes under existing IP law are designed to cater for the needs of AI developers. 

By the same token, content and data-driven businesses themselves have seen a rapid increase in the use of AI technology and machine learning, whether for news summaries, data-gathering efforts, translations for research and journalistic purposes, or to help organisations save time by processing large amounts of text and other data at scale and speed. Digital technologies, including AI, are and will continue to be of critical importance to these industries, helping create content, new products and value-added services to deliver to a broad range of corporate and retail clients. Whether in news media or cross-industry research, publishers are themselves investing in AI; continued collaboration with start-ups and academia is creating tailored materials for wide populations of beneficiaries (students, academia, research organisations, and even marketers of consumer publishing products).

It is of paramount importance to balance the needs of future AI development with the legal, commercial and economic rights of data owners, and to balance the need to incentivise new AI adoption with recognition of the rights of existing content owners.

We have, however, seen no evidence that the existing copyright legislative framework fails to address the current needs of AI developers adequately. Moreover, it is particularly important, in our view, to ensure that the development of AI is not enabled at the expense of the underlying investment by copyright and data owners (see endnote 1).

If the owners of underlying data materials withhold the licensing of, or access to, such materials, or attempt to price them at a level that is unfair, the answer is for Government, via the Competition and Markets Authority and the new Digital Markets Unit (or indeed other regulators who form part of the Digital Regulation Cooperation Forum), to put in place competition measures ensuring there is clear legal recourse in such situations.

In summary, we do not believe that current copyright law creates a disparity between the interests of AI developers and investors and those of content owners. The existing copyright regime under the CDPA reflects a balance that fairly protects those investing in data creation without giving an unfair advantage to technology companies offering AI-enabled content creation services. In particular, the current framework provides a balanced regime for text and data mining, and we believe no changes are required at present. However, we recommend a watching brief: the IPO should consider and take account of changes to copyright laws in other countries that may make it more attractive for AI operators to base their operations in those locations, so that text and data mining activities, machine learning, etc. become more easily performed elsewhere or are permitted with incentives not offered in the UK.

Ranking Options in order:

  1. We would therefore urge the IPO to elect Option 0: make no legal change. No other option is currently justifiable given the lack of evidence of an adverse commercial environment preventing access to data or text by AI-enabled content creators. Should the Government or IPO consider that there needs to be increased access to data at lower cost, they should look at other policy levers to stimulate such uptake, such as providing tax incentives for content owners to license content, rather than reducing copyright protection.
  2. We also concur with industry leads who consider that forcing rightsholders to opt in to protection, as suggested in Option 3, would be complicated and costly for the many businesses and industries that own literally millions of works, when licensing is far simpler, and would be against the spirit of international treaties on copyright.

3. Patents

If UK patents were to protect AI-devised inventions, how should the inventor be identified, and who should be the patent owner? What effects does this have on incentivising and rewarding AI-devised inventions?

As we described above, the author and first owner of any AI-assisted or AI-created work will be the person who creates the work, or their employer if that person is an employee, or a third party if the operator has a contract assigning such rights outside of an employment context.

As the emphasis in copyright law suggests, creating a ‘work’ is in essence a human activity.  This is given additional support by the reference to the automatic transfer of copyright from employee to employer; an AI system cannot be said to be an employee.  

Similar principles, in our view, apply to patents as to copyright. For patentability, the applicant inventor must be a ‘person’.

Authoritative guidance on how AI-created inventions fit into this scheme, where no human inventor is mentioned, is given in the decision in Thaler v Comptroller General of Patents, Trade Marks and Designs (the ‘DABUS’ case) and in particular, in our view, in the statements by Lord Justice Birss (Birss LJ) in his dissenting opinion (see paragraphs 8, 58 and 78 et seq. of the DABUS case, and the Conclusion).

In summary, Birss LJ set out his views on what he saw as the lower courts’ erroneous interpretations of the law and in conclusion stated:

  • The inventor of an invention under the 1977 Act is the person who actually devised the invention.
  • Dr Thaler has complied with his obligations under s13(2) of the 1977 Act because he has given a statement identifying the person(s) he believes the inventor to be (s13(2)(a)) and indicating the derivation of his right to be granted the patent (s13(2)(b)).
  • It is no part of the Comptroller's functions under the 1977 Act to deem the applications as withdrawn simply because the applicant's statement under s13(2)(a) does not identify any person who is the inventor. Since the statement honestly reflects the applicant's belief, it satisfies s13(2)(a).
  •  It is no part of the Comptroller's functions under the 1977 Act to in any way be satisfied that the applicant's claim to the right to be granted the patent is good. In granting a patent to an applicant the Comptroller is not ratifying the applicant's claim to derivation. Dr Thaler's asserted claim, if correct, would mean he was entitled to the grant. Therefore the statement satisfies s13(2)(b).
  • The fact that the creator of the inventions in this case was a machine is no impediment to patents being granted to this applicant.

All three judges in Thaler agreed that under the Patents Act 1977 an inventor must be a person, and as a machine is not a person it therefore cannot be an "inventor" for the purposes of section 7(2) of the Act. Birss LJ, however, dissented on the crucial point of whether the fact that the creator of an invention was a machine was, as such, an impediment to the grant of an application. He stated that it was not: it is simply that a machine inventor cannot be treated as an inventor for the purpose of granting the application.

In Australia the court has taken a slightly different view, but there the law is different. As Birss LJ remarked in his judgment in Thaler:

“After the hearing the appellant sent the court a copy of the judgment of Beach J of 30th July 2021 in the Federal Court of Australia in Thaler v Commissioner of Patents [2021] FCA 879. The judgment deals with another parallel case about applications for the same inventions. Beach J decided the case in Dr Thaler's favour. However yet again the relevant legislation is quite distinct from that in the UK. The applications reached the Australian Patent Office via the Patent Cooperation Treaty (PCT), which meant that a local rule (reg 3.2C(2)(aa)) applied which requires the applicant to provide the name of the inventor. That rule is in different terms from s13(2) and the present case is not a PCT application (i.e. in Australia the name of the inventor must be provided, unlike under UK legislation). If it were then the operation of s13(2) would be affected by a deeming provision (s89B(1)(c)) which we do not have to consider”.

We believe that in principle Birss LJ is correct and that the patentability of inventions created by AI, or with the assistance of AI, has been established, provided the basic criteria under the relevant legislation are met. There is therefore absolutely no need for the patent system to identify AI as the inventor or to create entirely new rights.

If the IPO takes the view, or it is established on appeal, that the law has not been correctly expressed by Birss LJ, the law should be clarified to accord with his judgment. Failing that – for instance, if AI systems themselves are treated as inventors – the system of innovation and inventorship in the UK will, in our view, be eroded, the benefits and incentives for human inventors will be reduced, and ultimately firms could invest more in AI systems than in human innovation.

Without changes in taxes on AI inventorship and commensurate incentives to balance the negative impact, such a change would be detrimental to the ethos of the patent system and its focus on “a person” being the inventor mentioned in a patent application.

Whilst it is unclear at this stage exactly what the future regulation of AI and associated IP rights will look like in the UK, it is clear that an internationally harmonised approach to the protection and recognition accorded to AI-generated inventions would be desirable.

It is also, in our view, right in principle, to cite Birss LJ, that ‘there is no rule of law that a new intangible produced by existing tangible property is the property of the owner of the tangible property’, as Dr Thaler contended, ‘and certainly no rule that the property contemplated by section 7(2)(b) in an invention created by a machine is owned by the owner of the machine. Accordingly, the hearing officer and the judge were correct to hold that Dr Thaler is not entitled to apply for patents in respect of the inventions given the premise that DABUS made the inventions’.

In our view, as with AI creations for copyright purposes, the key is the operation and control of the machine/AI producing the invention, not ownership of the AI itself.

Ranking Options in order:

  1. We would therefore urge the IPO to elect Option 1, whereby it is clarified that “inventor” includes a human responsible for the inventive activity of the AI system that led to the invention, or which devises inventions (e.g. where that human operator selects or guides the AI with relevant data, parameters, datasets or programming logic for the AI’s function or purpose, which leads it to create an invention). This would also cater for the analogous scenario (to that mentioned above under 1) where AI becomes prevalent in the first instance as “AI-as-a-Service”, whereupon there should be a presumption of ownership by the AI operator (not the AI-system owner), with transfers of ownership and rights addressed contractually at the point of use where AI is used ‘…as-a-service’.
  2. As a second-best option, as requested – particularly if the opinion of Birss LJ is subsequently confirmed by the Supreme Court – we would advocate Option 0: no change.

Endnotes

  1. Reference: Authors Guild v. Google, 804 F.3d 202 (2d Cir. 2015), a copyright case heard in the United States District Court for the Southern District of New York and, on appeal, in the United States Court of Appeals for the Second Circuit between 2005 and 2015. The case concerned fair use in copyright law and the transformation of printed copyrighted books into an online searchable database through scanning and digitisation. It centred on the legality of the Google Book Search (originally named Google Print) Library Partner project that had been launched in 2003. Though Google's attempt to digitise books through scanning and computer-aided recognition for online searching was widely seen as a transformative step for libraries, many authors and publishers expressed concern that Google had not sought their permission to scan books still under copyright and offer them to users.
  2. Two separate lawsuits, one from three authors represented by the Authors Guild and another by the Association of American Publishers, were filed in 2005 charging Google with copyright infringement. Google worked with the litigants in both suits to develop a settlement agreement (the Google Book Search Settlement Agreement) that would have allowed it to continue the programme by paying for works it had previously scanned, creating a revenue programme for future books that were part of the search engine, and allowing authors and publishers to opt out. The settlement received much criticism, as it applied to all books worldwide, included works that may have been out of print but still under copyright, and raised antitrust concerns given Google's dominant position within the internet industry. A reworked proposal to address some of these concerns met with similar criticism, and the settlement was ultimately rejected in 2011, allowing the two lawsuits to be joined for a combined trial. In late 2013, after the class action status was challenged, the District Court granted summary judgment in favour of Google, dismissing the lawsuit and affirming that the Google Books project met all legal requirements for fair use. The Second Circuit Court of Appeals upheld the District Court's summary judgment in October 2015, ruling that Google's "project provides a public service without violating intellectual property law." The U.S. Supreme Court subsequently denied a petition to hear the case.

A big thank you to Christian for all his hard work on this response.

 


Lord C-J: Protect Pure Maths

During the Report Stage of the Advanced Research and Invention Agency Bill I spoke in favour of changes to the bill to ensure that pure maths research was included in the definition of scientific research.

This is the recording:

https://twitter.com/i/status/1470883981973463049

And this is what I said:

My Lords, I have signed and I support Amendments 12, 13 and 14. As someone immersed in issues relating to AI, machine learning and the application of algorithms to decision-making over the years, I, too, support Protect Pure Maths in its campaign to protect pure maths and advance the mathematical sciences in the UK—and these amendments, tabled by the noble and gallant Lord, Lord Craig, reflect that.

The campaign points out that pure maths has been a great British success story, with Alan Turing, Andrew Wiles and Roger Penrose, the Nobel Prize winner—and, of course, more recently Hannah Fry has popularised mathematics. Stephen Hawking was a great exemplar, too. However, despite its value to society, maths does not always receive the funding and support that it warrants. Giving new funding to AI, for instance, risks overlooking the fundamental importance of maths to technology.

As Protect Pure Maths says, the 2004 BEIS guidelines on research and development, updated in 2010, currently limit the definition of science and research and development for tax purposes to the systematic study of the nature and behaviour of the physical and material universe. We should ensure that the ARIA Bill does not make the same mistake, and that the focus and capacity of the Bill’s provisions also explicitly include the mathematical sciences, including pure maths. Maths needs to be explicitly included as a part of scientific knowledge and research, and I very much hope that the Government accept these amendments.


Lord C-J helps to launch Rolls-Royce Aletheia Framework version 2

The Aletheia Framework is a practical one-page toolkit that guides developers, executives and boards both prior to deploying an AI and during its use. A second version has been developed by Caroline Gorski and her team at R2 Data Labs, Rolls-Royce, to be applicable across a wide range of sectors.

This is how they describe it:

"It asks them to consider 32 facets of social impact, governance and trust and transparency and to provide evidence which can then be used to engage with approvers, stakeholders or auditors.

A new module added in December 2021 is a tried and tested way to identify and help mitigate the risk of bias in training data and AIs. This complements the existing five-step continuous automated checking process, which, if comprehensively applied, tracks the decisions the AI is making to detect bias in service or malfunction and allow human intervention to control and correct it."
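To make the idea of continuous automated checking concrete, here is a minimal sketch, in Python, of how an in-service bias monitor along these lines might work. It is illustrative only: the demographic-parity metric, the 0.8 alert threshold and the field names are my assumptions for the sketch, not Rolls-Royce's actual implementation.

```python
# Illustrative sketch only: monitors an AI's in-service decisions and
# flags possible bias for human review, in the spirit of the Aletheia
# Framework's automated checking step. The demographic-parity metric,
# threshold and field names are assumptions, not Rolls-Royce's method.
from collections import defaultdict

DISPARITY_THRESHOLD = 0.8  # assumed alert level (a "four-fifths"-style rule)

def check_recent_decisions(decisions):
    """decisions: iterable of dicts like {"group": "A", "outcome": 1},
    where outcome 1 means a favourable decision for that case.
    Returns alert strings for a human reviewer; empty list if none."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        favourable[d["group"]] += d["outcome"]

    # Favourable-outcome rate per group over the monitoring window
    rates = {g: favourable[g] / totals[g] for g in totals}
    if not rates:
        return []
    best = max(rates.values())

    # Flag any group whose rate falls well below the best-treated group
    return [
        f"Possible bias against group {g}: rate {r:.2f} vs best {best:.2f}"
        for g, r in rates.items()
        if best > 0 and r / best < DISPARITY_THRESHOLD
    ]

if __name__ == "__main__":
    window = ([{"group": "A", "outcome": 1}] * 80 +
              [{"group": "A", "outcome": 0}] * 20 +
              [{"group": "B", "outcome": 1}] * 50 +
              [{"group": "B", "outcome": 0}] * 50)
    for alert in check_recent_decisions(window):
        print(alert)  # hand off to human intervention and correction
```

Run continuously over a sliding window of live decisions, a check like this is what would allow the human intervention the framework describes to happen before, rather than after, harm is done.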


I commented on the original version of the Aletheia Framework, and it deals with many of the same areas in education as it does for Rolls-Royce in manufacturing – ethics, impact, compliance, data protection. So I saw an equivalence there, and the Institute for Ethical AI in Education adapted the Aletheia Framework for its needs.

Here are the two videos I made with Rolls-Royce to mark the new version:

First, on why practical ethics matters right now to build public trust:

https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx

Second, describing how we adapted the Aletheia Framework for education:

https://www.lordclementjones.org/wp-content/uploads/2021/12/Education-case-study.mp4

Launch of AI Landscape Overview: Lord C-J on AI Regulation

It was good to launch the new Artificial Intelligence Industry in the UK Landscape Overview 2021: Companies, Investors, Influencers and Trends with the authors from Deep Knowledge Analytics and Big Innovation Centre, my APPG AI Co-Chair Stephen Metcalfe MP, Professor Stuart Russell, the Reith Lecturer, Charles Kerrigan of CMS and Dr Scott Steedman of the BSI.

Here is the full report online:

https://mindmaps.innovationeye.com/reports/ai-in-uk

And here is what I said about AI Regulation at the launch:

A little under five years ago we started work on the AI Select Committee inquiry that led to our report AI in the UK: Ready, Willing and Able? The Hall/Pesenti Review of 2017 came at around the same time.

Since then many great institutions have played a positive role in the development of ethical AI. Some are newish, like the Centre for Data Ethics and Innovation, the AI Council and the Office for AI; others are established regulators such as the ICO, Ofcom, the Financial Conduct Authority and the CMA, which have put together a new Digital Regulation Cooperation Forum to pool expertise in this field. This role includes sandboxing and input from a variety of expert institutes – such as the Turing Institute, the Open Data Institute, the Ada Lovelace Institute, the OII and the British Standards Institution – on areas such as risk assessment, audit, data trusts and standards. Our Intellectual Property Office, too, is currently grappling with issues relating to IP created by AI.

The publication of the National AI Strategy this autumn is a good time to take stock of where we are heading on regulation. We need to be clear above all, as organisations such as techUK are, that regulation is not the enemy of innovation; it can in fact be the stimulus, and the key to gaining and retaining public trust around AI and its adoption, so that we can realise the benefits and minimise the risks.

I have personally just completed a very intense examination of the Government’s proposals on online safety, where many of the concerns derive from the power of the algorithm in targeting messages and amplifying them. The essence of our recommendations revolves around safety by design and risk assessment.

As is evident from the work internationally by the Council of Europe, the OECD, UNESCO, the Global Partnership on AI and the EU with its proposal for an AI Act, in the UK we need to move forward with proposals for a risk-based regulatory framework, which I hope will be contained in the forthcoming AI Governance White Paper.

Some of the signs are good. The National AI Strategy accepts that we need to prepare for AGI, and it also talks about:

  • public trust and the need for trustworthy AI,
  • that Government should set an example,
  • the need for international standards and an ecosystem of AI assurance tools

and in fact the Government have recently produced a set of Transparency Standards for AI in the public sector.

On the other hand:

  • Despite little appetite in the business or research communities, the Government are consulting on major changes to the GDPR post-Brexit, in particular the suggestion that we get rid of Article 22 – the one part of the GDPR dealing with a human in the loop – and that firms no longer be required to have a DPO or to carry out DPIAs.
  • Most recently, after a year’s work by the Council of Europe’s Ad Hoc Committee on the elements of a legal framework on AI, at the very last minute the Government put in a reservation saying they could not yet support the document going to the Council because more gap analysis was needed, despite extensive work on this in the feasibility study.
  • We also have no settled regulation for intrusive AI technology such as live facial recognition.
  • Above all, it is not yet clear whether they are still wedded to sectoral rather than horizontal regulation.

So I hope that when the White Paper does emerge there is recognition that we need a considerable degree of convergence between ourselves, the EU and members of the Council of Europe in particular, for the benefit of our developers and cross-border business, and that a risk-based form of horizontal regulation is required which operationalises the common ethical values we have all come to accept, such as the OECD principles.

Above all, this means agreeing on standards for risk and impact assessments, alongside tools for audit and continuous monitoring for higher-risk applications. That way, I believe, we can draw the US into the fold as well.

This is of course not to mention the whole defence and lethal autonomous systems space, the subject of Stuart Russell’s second Reith Lecture, which, despite the promise of a Defence AI Strategy, is another and much more depressing story!


Lord C-J: Lords Diary in the House Magazine

From the House Magazine

https://www.politicshome.com/thehouse/article/lords-diary-lord-clement-jones

Down the corridor in the Commons, early November was dominated by No 10’s Owen Paterson U-turn and in Glasgow by events at COP26 – but for me it was an opportunity to raise a number of issues relevant to innovation and the development of technology and the future of our creative industries. It started with the great news that Queen Mary University of London, whose council I chair, after many trials and tribulations, has agreed a property deal next to our Whitechapel campus with the Department of Health and Social Care. It paves the way for the development, with Barts Life Sciences, of a major research centre within a new UK Whitechapel Life Sciences Cluster.

And tied in with the theme of tech innovation, a couple of welcome in-person industry events too – the techUK dinner and the Institute of Chartered Accountants in England and Wales’ annual reception – where a new initiative, to increase investment in the UK’s high-growth companies, was launched. Guest speaker Lord Willetts made the point about the UK’s great R&D but relatively poor track record in commercialisation.

All highly relevant to a late session on the second reading of the bill setting up the new ARIA (Advanced Research and Invention Agency). A cautiously positive reception, though most speakers were uncertain where it fits in our R&D and innovation landscape. It may be designed to be free of bureaucracy, but the question arises: how does that reflect on the relatively recent UK Research and Innovation infrastructure?

The risks posed by some new technology were highlighted in my oral question the day before on whether the UK will join in steps to limit Lethal Autonomous Weapons. The MoD still seems to be hiding behind the lack of an agreed international definition whilst insisting, not very reassuringly, that we do not use systems that employ lethal force without “context-appropriate human involvement”.

Other digital harms were to the fore with successive sessions of the Joint Committee on the Draft Online Safety Bill. The first, taking evidence from Ofcom CEO Melanie Dawes; in command of her material, but questions still remain over whether Ofcom will have the powers and independence it needs. Then a round table looking at how and whether the bill protects press freedom, and our final evidence session with Nadine Dorries, the new secretary of state, in listening mode and hugely committed to effectively eradicating harms online. There are many improvements to the bill needed but we are up against the necessity for early implementation.

"The news broke that facial recognition software in cashless payment systems had been adopted in nine Ayrshire schools"

And, so, the Lib Dem debate day. The first debate, on government policy and spending on the creative sector in the United Kingdom, was superbly introduced by my colleague Lynne Featherstone. I focused on a number of sectors under threat: independent TV and film producers; publishers, from potential changes to exhaustion of copyright; authors, from the closure of libraries and the aftermath of Covid; and the music industry, from lockdowns and the post-Brexit inability to tour in the EU. The overarching theme of the debate, so relevant to innovation and our tech sector, was that creativity is important not just in the cultural sector but across the whole economy – a point, it seems, well taken by the minister, Stephen Parkinson, in his new role, with a meeting with his (also new) ministerial counterpart in the Department for Education in the offing.

Back to technology risk, in the next debate, about the use of facial and other biometric recognition technologies in schools. A little over two weeks before, the news broke that facial recognition software in cashless payment systems had been adopted in nine Ayrshire schools. Its introduction has been temporarily paused, and the Information Commissioner’s Office is now producing a report, but it is clear that current regulation is inadequate.

Quite a week. It ends with a welcome unwinding over a Friday evening negroni and excellent Italian meal, with close family, some not seen in person for two years!


Government refuses to rule out development of lethal autonomous weapons

1 November 2021

 

Here is UNA-UK's write-up of a recent oral question I asked about the status of UK negotiations:

https://una.org.uk/news/government-refuses-rule-out-development-lethal-autonomous-weapons-parliamentary-debate

During a parliamentary debate in the House of Lords, the UK Government repeatedly refused to rule out the possibility that the UK may deploy lethal autonomous weapons in the future.

At the dedicated discussion on autonomous weapons (also known as “killer robots”), members of the Liberal Democrats, Labour and Conservative parties all expressed concern around the development of autonomous weapons. Several parliamentarians, including Lord Coaker speaking for Labour and Baroness Smith speaking for the Liberal Democrats, asked the UK to unequivocally state that there will always be a human in the loop when decisions over the use of lethal force are taken. Responding to these calls, defence minister Baroness Goldie simply stated that the “UK does not use systems that employ lethal force without context appropriate human involvement”.

This formulation, which was used twice by the Minister, offers less reassurance over the UK’s possible future use of these weapons than the UK’s previous position that Britain “does not possess fully autonomous weapon systems and has no intention of developing them”.

The debate was triggered by Lord Clement-Jones, former chair of the Lords Artificial Intelligence Committee and member of the UK Campaign to Stop Killer Robots’s parliamentary network, who described the Minister’s unwillingness to rule out lethal autonomous weapons as “disappointing”. He went on to describe the UK’s refusal to support calls for a legally binding treaty on this issue as “at odds with almost 70 countries, thousands of scientists [...] and the UN Secretary-General”.

Former First Sea Lord, Lord West, raised the alarm that “nations are sleepwalking into disaster... engineers are making autonomous drones the size of my hand that have facial recognition and can carry a small, shaped charge and they will kill a person”, mentioning that thousands of such weapons could be unleashed on a city to “horrifying” effect.

Speaking from the Shadow Front Bench, Lord Coaker asked for an unequivocal commitment from the Government that there will “always be human oversight when it comes to AI [...] involvement in any decision about the lethal use of force”. This builds on Labour’s position articulated by Shadow Defence Secretary, John Healey MP, in his September 2021 conference speech where he stated that a Labour Government would “lead moves in the UN to negotiate new multilateral arms controls and rules of conflict for space, cyber and AI.”

Former Defence Secretary, Lord Browne, asked for the UK to publicly reaffirm its commitment to ethical AI and questioned the Minister on the MOD’s forthcoming Strategy on AI. Minister Goldie explained that the MOD’s planned Defence AI Strategy was slated for Autumn while conceding that Autumn “had pretty well come and gone” adding that the Strategy would be published “in early course”. Conservative MP Lord Holmes asked a related question on the need for public engagement and consultation on the UK’s approach to AI across sectors. UNA-UK agrees with this imperative for public consultation, and was concerned to learn through an FOI request that “the MOD has carried out no formal public consultations or calls for evidence on the subject of military AI ethics since 2019”.

Today’s activity in the House of Lords follows similar calls made in December 2020 in the House of Commons, when another member of our parliamentary network, the SNP’s spokesperson on foreign affairs Alyn Smith MP, made a powerful call for the UK to support a ban on lethal autonomous weapons.

Next steps

On 2 December 2021, the Group of Governmental Experts to the Convention on Conventional Weapons (CCW) will decide on whether to proceed with negotiations to establish a new international treaty to regulate and establish limitations on autonomous weapons systems. So far nearly 70 states have called for a new, legally binding framework to address autonomy in weapons systems. However, following almost 8 years of talks at the UN, progress has been stalled largely due to the stance of a small number of states, including the United Kingdom, who regard the creation of a new international law as premature.

Growing momentum in the UK on this issue is not confined to parliament. Ahead of the UN talks, the UK campaign will release a paper voicing concerns identified by members of the tech community, as well as a compendium of research by our junior fellows showing that the ethical framework around which research in UK universities is proceeding is not fit for purpose (early findings from the research have been published here and related research from the Cambridge Tech and Society group can be found here).

The UK Campaign to Stop Killer Robots, a coalition of NGOs, academics and tech experts, believes that UK plans for significant investments in military AI and autonomy – as announced in the UK Integrated Review of Security, Defence, Development and Foreign Policy – must be accompanied by a commitment to work internationally to upgrade arms control treaties to ensure that human rights and ethical and moral standards are retained.

While we welcome the assertion made in the Integrated Review that the “UK remains at the forefront of the rapidly-evolving debate on responsible development and use of AI and Autonomy, working with liberal-democratic partners to shape international legal, ethical & regulatory norms & standards", we are concerned that urgency is missing from the UK’s response to this issue. We hope the growing chorus of voices calling for action will help convince the UK to support the UN Secretary-General’s appeal for states to develop a new, binding treaty to address the urgent threats posed by lethal autonomous weapons systems at the critical UN meeting in December 2021.


Lord C-J: We urgently need to tighten data protection laws to protect children from facial recognition in schools

From the House Live November 2021

https://www.politicshome.com/thehouse/article/we-urgently-need-to-tighten-data-protection-laws-to-protect-children-from-facial-recognition-in-schools

A little over two weeks ago, the news broke that facial recognition software in cashless payment systems – piloted in a Gateshead school last summer – had been adopted in nine Ayrshire schools. It is now clear that this software is becoming widely adopted on both sides of the border, with 27 schools already using it in England and another 35 or so in the pipeline.

Its use has been “temporarily paused” by North Ayrshire council after objections from privacy campaigners and an intervention from the Information Commissioner’s Office, but this is an extraordinary use of children’s biometric data for this purpose when there are so many other alternatives to cashless payment available.

"We seem to be conditioning society to accept biometric technologies in areas that have nothing to do with national security or crime prevention"

It is clear from the surveys and evidence to the Ada Lovelace Institute, which has an ongoing Ryder review of the governance of biometric data, that the public already has strong concerns about the use of this technology. But we seem to be conditioning society to accept biometric and surveillance technologies in areas that have nothing to do with national security or crime prevention and detection.

The Department for Education (DfE) issued guidance in 2018 on the Protection of Freedoms Act 2012, which makes provision for the protection of children's biometric information in schools and the rights of parents and children as regards participation. But it seems that the DfE has no data on the use of biometrics in schools, and there appear to be no compliance mechanisms to ensure schools observe the Act.

There is also the broader question, under General Data Protection Regulation (GDPR), as to whether biometrics can be used at all, given the age groups involved. The digital rights group Defend Digital Me contends, “no consent can be freely given when the power imbalance with the authority is such that it makes it hard to refuse”. It seems that children as young as 14 may have been asked for their consent.

The Scottish First Minister, despite saying that “facial recognition technologies in schools don’t appear to me to be proportionate or necessary”, went on to say that schools should carry out a privacy impact assessment and consult pupils and parents.

But this does not go far enough. We should be firmly drawing a line against this. It is totally disproportionate and unnecessary. In some jurisdictions – New York, France and Sweden – its use in schools has already been banned or severely limited.

This is however a particularly worrying example of the way that public authorities are combining the use of biometric data with AI systems without proper regard for ethical principles.

Despite the R (Bridges) v Chief Constable of South Wales Police & Information Commissioner case (2020), the Home Office and the police have driven ahead with the adoption of live facial recognition (LFR) technology. But as the Ada Lovelace Institute and Big Brother Watch have urged – and the Commons Science and Technology Committee in 2019 recommended – there should be a voluntary pause on the sale and use of live facial recognition technology to allow public engagement and consultation to take place.

In their response to the Select Committee’s call, the government insisted that there is already a comprehensive legal framework in place – which they are taking measures to improve. Given the increasing danger of damage to public trust, the government should rethink its complacent response.

The capture of biometric data and use of LFR in schools is a highly sensitive area. We should not be using children as guinea pigs.

I hope the Information Commissioner's Office’s (ICO’s) report will be completed as a matter of urgency. But we urgently need to tighten our data protection laws to ensure that children under the age of 18 are much more comprehensively protected from the use of facial recognition technology than they are at present.

 


Lord C-J Comments on National AI Strategy

The UK Government has recently unveiled its National AI Strategy and promised a White Paper on AI Governance and regulation in the near future.

Melissa Heikkilä at Politico's AI: Decoded newsletter covered my comments:

https://www.politico.eu/article/uk-charts-post-brexit-path-with-ai-strategy/

UK charts post-Brexit path with AI strategy

The U.K. wants the world to know that unlike in Brussels, it will not bother AI innovators with regulatory drama.

While the EU frets about risky AI and product safety, the U.K.'s AI strategy, unveiled on Wednesday, promised to create a “pro-innovation” environment that will make the country an attractive place to develop and deploy artificial intelligence technologies, all the while keeping regulation “to a minimum,” according to digital minister Chris Philp.

The U.K.'s strategy, which contrasts markedly with the EU's own proposed AI rules, indicates that it's embracing the freedom that comes from not being tied to Brussels, and that it's keen to ensure that freedom delivers it an economic boost.

In the strategy, the country sets out how the U.K. will invest in AI applications and help other industries integrate artificial intelligence into their operations. Absent from the strategy are its plans on how to regulate the tech, which has already demonstrated potential harms, like the exams scandal last year in which an algorithm downgraded students' predicted grades.

The government will present its plans to regulate AI early next year.

Speaking at an event in London, Philp said the U.K. government wants to take a “pro-innovation” approach to regulation, with a light-touch approach from the government.

“We intend to keep any form of regulatory intervention to a minimum," the minister said. "We will seek to use existing structures rather than setting up new ones, and we will approach the issue with a permissive mindset, aiming to make innovation easy and straightforward, while avoiding any public harm while there is clear evidence that exists.”

Despite the strategy's ambition, the lack of specific policy proposals in it means industry will be watching out for what exactly a "pro-innovation" policy will look like, according to tech lobby techUK’s Katherine Holden.

“The U.K. government is trying to strike some kind of healthy balance within the middle, recognizing that there's the need for appropriate governance and regulatory structures to be put in place… but make sure it's not at the detriment of innovation,” said Holden.

Break out, or fit in?

The U.K.'s potential revisions to its data protection rules are one sign of what its strategy entails. The country is considering scrapping a rule that prohibits automated decision-making without human oversight, arguing that it stifles innovation.

Like AI, the government considers its data policy as an instrument to boost growth, even as a crucial data flows agreement with the EU relies on the U.K. keeping its own data laws equivalent to the EU's.

A divergent approach to AI regulation could make it harder for U.K.-based AI developers to operate in the EU, which will likely finalize its own AI laws next year.

“If this is tending in a direction which is diverging substantially from EU proposals on AI, and indeed the GDPR [the EU's data protection rules] itself, which is so closely linked to AI, then we would have a problem,” said Timothy Clement-Jones, a former chair of the House of Lords’ artificial intelligence liaison committee.

Clement-Jones was also skeptical of the U.K.'s stated ambition to become a global AI standard-setter. “I don't think the U.K. has got the clout to determine the global standard,” Clement-Jones said. “We have to make sure that we fit in with the standards,” he said, adding that the U.K. needs to continue to work its AI diplomacy in international fora, such as at the OECD and the Council of Europe.

In its AI Act, the European Commission also sets out a plan to boost AI innovation, but it will strictly regulate applications that could impinge on fundamental rights and product safety, and includes bans for some “unacceptable” uses of artificial intelligence such as government-conducted social scoring. The EU institutions will be far along in their legislative work on AI by the time the U.K. comes out with its own proposal.

“Depending on the timeline for AI regulation pursued by Brussels, the U.K. runs the risk of having to harmonize with EU AI rules if it doesn't articulate its own approach soon,” said Carly Kind, the director of the Ada Lovelace Institute, which researches AI and data policy.

“If the UK wants to live up to its ambition of becoming an AI superpower, the development of a clear approach to AI regulation needs to be a priority,” Kind said.


SCL /Queen Mary Global Policy Institute Discussion: AI Ethics and Regulations

I recently took part in a panel with David Satola, Lead ICT Counsel of the World Bank; Patricia Shaw, SCL Chair and CEO, Beyond Reach Consulting; Jacob Turner, barrister and author of ‘Robot Rules: Regulating Artificial Intelligence’; and Dr Julia Ive, Lecturer in Natural Language Processing at Queen Mary. It was chaired by Fernando Barrio, SCL Trustee, Senior Lecturer in Business Law, School of Business and Management at Queen Mary, and Academic Lead for Resilience and Sustainability, Queen Mary Global Policy Institute.

The regulatory and policy environment of Artificial Intelligence is in a state of flux and attracting increasing attention from policymakers and business leaders at a global scale. AI is having an impact on most aspects of government activities, business operations and people's lives, and proposals for letting the technology advance unregulated are under question, with increasing realisation of the potential pitfalls of such unhindered development.

Our chair asked the following questions:

1. There has been much talk and debate about the need to set up ethical rules for the use of AI in different sectors of society and the economy; what are your views on the establishment of such rules, vis-à-vis the possibility of enacting a regulatory framework for AI?

2. Assuming that there is a consensus about the need to regulate AI development and deployment, what should be the basis of such a regulatory framework (e.g. risk-based, principle-based, etc.)?

3. As with almost every aspect of ICT, AI poses challenges to the domestic regulation of its activities; what role do you envisage for different levels of potential regulatory bodies – local, national, regional and/or global?

Here is the podcast of the session.

https://www.scl.org/podcasts/12367-ai-ethics-and-regulations