Have you ever considered where this information goes, aside from unlocking our phones?

14 February 2022

Interview with

Gareth Mitchell, BBC & Stephanie Hare, Author & Lord Clement-Jones

 

When we think of our personal data, we often consider information like our phone number, bank details, or email address. But what about our eyes, ears, mouth, and nose? Facial recognition is increasingly being used to tag and track our individual activities, and while it is commonplace for unlocking personal devices like laptops and phones, certain institutions are keen to use our features for much more than mugshots. This includes the US Treasury, which last week backtracked on plans for mandatory facial verification for people filing their tax returns. So why are some people wary of firms having their faces on file? Robert Spencer finds out more…

Robert – It’s a question that appears time and time again. How comfortable are we as a society with facial recognition? As unlocking your phone shows, in some respects the answer is clear, but when it comes to having your face scanned as you walk down the street, the issue becomes murkier.

Gareth – It’s a biometric identifier. That means using aspects of your body for identification. The issue is that all of us are walking around in public showing our faces, meaning that anybody who wants to can mount a camera and use an algorithm to identify us. We don’t have any control over who is using our face as the identifier.

Robert – That’s Gareth Mitchell, who presents Digital Planet on the BBC World Service. This lack of control and consent is key to one of the central paradoxes in the discussion around facial recognition. It speaks to the differences in the technologies involved, as Stephanie Hare explains in her new book, Technology Is Not Neutral: A Short Guide to Technology Ethics.

Stephanie – There are different types of facial recognition technology. So let’s start with facial verification. That’s the kind that you would use to unlock your own smartphone. That’s not a very high risk use of facial recognition technology because the biometric never leaves your phone. A higher risk example is going to be when the police are using live facial recognition technology to identify people in a crowd. This might be high risk because it can have a chilling effect on free speech, if people fear that when they’re going to these protests, they’re being scanned by the police.

Robert – But it’s not just about giving consent and having control of your biometrics. The algorithms themselves are large complex computer programs, often hidden behind company secrets. And it turns out, they aren’t always as accurate as we’d like.

Stephanie – It doesn’t work as well on people with darker skin. It works particularly poorly on women with darker skin, but it can also be a problem with children, with trans people and with elderly people.

Robert – The fix though might not be as simple as it seems.

Gareth – In order for the algorithms to get better at recognizing a whole diversity of faces, that would mean training those algorithms on more and more faces. And so opponents would say, well, that just adds to the problem. One problem is the algorithms are not very good at identifying a particular group of people. So let’s just go and get loads of profiles of these kinds of people and put them into our databases. Well, then you’ve scanned even more faces, you’ve potentially compromised more people’s privacy, and that’s made the problem even worse.

Robert – Police forces around the UK also disagree on the use of the technology known as live facial recognition. The Met uses facial recognition to find offenders on watchlists, but Scottish police have halted its use.

Stephanie – Right now, our experience of this technology, who’s using it, and how it’s even discussed in law differs depending on your postcode.

Gareth – And another reason why facial ID has been so controversial is that some of these police forces have been rolling it out before there was a regulatory framework in effect to protect us and, if necessary, them.

Robert – This lack of a legal framework also concerns Lord Clement-Jones, who debated the issue last week in the House of Lords.

Lord Clement-Jones – And the general conclusion was that there was no single piece of legislation that really covered the use of live facial recognition. It’s very easy to say we need to ban this technology, and I’m not quite in that camp. What I want to see, and this was the common ground, is a review. We want to see what basis there should be for legislation, we want to see how the technology performs, and then we want to be able to decide whether we should ban it or whether there are some uses to which it could be put with the right framework.

Robert – It’s hard to ignore the distinct advantages facial recognition carries. It’s fast and hands-free. The ability to accurately and instantly identify a fugitive in a crowd would make the world a safer place.

Gareth – There is bound to be a trade-off between our liberties and our security. We should be having conversations that are diverse, where a wide range of people are coming to the table with their views and their issues.

Stephanie – I would want to be hearing from scientists, the people who manufactured this tech, from the military, from the police, from medical professionals, from civil liberties groups. And I think it’s the first step on a long journey that we have to have in the United Kingdom.

Robert – Lord Clement-Jones is optimistic.

Lord Clement-Jones – The public ought to take away from this debate that there are a great many parliamentarians concerned about the use of new technology without proper oversight. But they should put pressure on their own MPs to ask, much more seriously, what is happening.

Robert – It’s clear then that we need to have this discussion sooner rather than later. In the meantime, though, I’m going to keep using my face to unlock my phone. I’m not sure where the line in the sand is, but for me, it’s a bit past this level of convenience.