Sunday, January 19, 2020
The new threat posed by face recognition technology
Back in May 2019, a former Liberal Democrat councillor in Cardiff went to court to claim that the suspected use of facial recognition technology on him by South Wales Police was an unlawful violation of privacy. Ed Bridges alleged that his image was captured by facial recognition cameras when he popped out for a sandwich in his lunch break and, quite rightly, objected to what he saw as a further extension of the surveillance state.
Facial recognition technology maps faces in a crowd and then compares them to a watchlist of images, which can include suspects, missing people and persons of interest to the police. The cameras scan faces in large crowds in public places such as streets, shopping centres, football matches and music events such as the Notting Hill carnival:
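The watchlist comparison described above is typically done on numerical "embeddings" of faces rather than raw images: each face is reduced to a vector, and two vectors that are close together are treated as the same person. The sketch below illustrates the idea only; the tiny 4-dimensional vectors, the names, and the 0.6 threshold are all made up for illustration (real systems use embeddings of 128 or more dimensions and carefully tuned thresholds).

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return the best watchlist match scoring above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, listed_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, listed_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional "embeddings" standing in for real face vectors.
watchlist = {
    "suspect_a": np.array([0.9, 0.1, 0.2, 0.1]),
    "missing_b": np.array([0.1, 0.8, 0.3, 0.2]),
}
camera_face = np.array([0.88, 0.12, 0.21, 0.09])
print(match_against_watchlist(camera_face, watchlist))  # suspect_a
```

The key design point is that the match is never exact: it is a similarity score against a threshold, which is exactly why false matches are possible and why the choice of threshold matters so much.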
Dan Squires QC, representing Bridges, said: “What AFR [automated facial recognition] enables the police to do is to monitor people’s activity in public in a way they have never done before.”
He said: “The reason AFR represents such a step change is you are able to capture almost instantaneously the biometric data of thousands of people.
“It has profound consequences for privacy and data protection rights, and the legal framework which currently applies to the use of AFR by the police does not ensure those rights are sufficiently protected.”
Just how big a threat this poses to individual privacy and freedom is now becoming apparent, with this article in the New York Times outlining some frightening developments that leave the technology open to wide-scale abuse.
The paper reports on a company called Clearview AI, which has devised a ground-breaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
According to the paper, Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases:
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analysed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”
As the New York Times points out, facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.
Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.
The paper says that police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver’s license photos. In recent years, facial recognition algorithms have improved in accuracy, and companies like Amazon offer products that can create a facial recognition program for any database of images.
This goes far beyond anything South Wales Police have used, and underlines the frightening potential of technology that has now fallen into the hands of the private sector, with no guaranteed security for its databases. You could potentially be scanned anywhere, and the person in control of that technology would instantly have access to every piece of data you have put online.
But before anybody starts to think this is an effective law enforcement tool, even Clearview admits that it is not always accurate and that mismatches can occur. The company says its tool finds matches up to 75 percent of the time. But the paper says it is unclear how often the tool delivers false matches, because it has not been tested by an independent party such as the National Institute of Standards and Technology, a federal agency that rates the performance of facial recognition algorithms.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
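Garvie's point about database size can be made concrete with a little arithmetic. If each individual comparison carries even a tiny chance of a false match, the chance of at least one false match for a given probe face grows rapidly with the number of entries compared. The per-comparison rate below is a hypothetical figure chosen purely for illustration, and the calculation assumes independent comparisons, which is a simplification.

```python
def prob_at_least_one_false_match(per_comparison_rate, database_size):
    # Probability that a probe face falsely matches at least one database
    # entry, assuming each comparison fails independently.
    return 1 - (1 - per_comparison_rate) ** database_size

# Hypothetical per-comparison false-match rate of 1 in 10 million.
rate = 1e-7
for n in (1_000_000, 100_000_000, 3_000_000_000):
    print(f"{n:>13,} images: {prob_at_least_one_false_match(rate, n):.1%}")
```

Even with a seemingly excellent one-in-ten-million error rate per comparison, a search against three billion scraped images makes at least one false match a near certainty — which is the doppelgänger effect in numbers.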
The article's conclusion is chilling: Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.
Surely it is time for legislation to protect people's privacy before this technology hits the UK.