Tech companies like Google have refrained from developing some systems because of fears they could be used “in a very bad way”, writes Angus Robertson.
Surveillance technology has become an everyday reality that most people have come to expect and accept. In the UK, there are more than 100,000 publicly operated CCTV cameras and millions of private surveillance units. The use of number plate recognition systems, body-worn video cameras and drones is increasingly standard. Only occasionally have civil liberty concerns been raised, including interventions from the surveillance commissioner Tony Porter, who has previously warned that the public is complacent about the encroachment of such technology.
Recent television coverage of civil unrest in Hong Kong has highlighted the growing role of the surveillance state in a territory which is not fully democratic. Rioters in the Chinese territory actively targeted ‘smart’ lampposts fitted with sensors and cameras. They believed them to be networked for the use of security forces, so they carried umbrellas to hide their identities from facial recognition technology and used lasers to target CCTV cameras and render them inoperable.
The United States has now reached a new privacy milestone which raises all kinds of questions about whether it will be adopted on this side of the Atlantic.
Billions of online images
A small tech start-up has developed a facial recognition app which matches photographs with more than three billion “scraped” images from Facebook, YouTube and millions of other websites.
Very few people out there have avoided their picture being taken and shared online, one way or the other. Chances are your picture is in the database of Clearview AI, whose records are being offered to US law enforcement agencies from local and state police forces to the FBI and Department of Homeland Security.
According to the company: “Clearview is a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes.
“Clearview’s technology has helped law enforcement track down hundreds of at-large criminals, including paedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud.”
The company says it is hugely effective and publicises testimonials from law enforcement professionals, like an unnamed detective constable in a Canadian sex crimes unit who said: “Clearview is hands-down the best thing that has happened to victim identification in the last ten years. Within a week and a half of using Clearview, [we] made eight identifications of either victims or offenders through the use of this new tool.”
‘A very bad way’
The growing use of Clearview has just been reported in great detail by Kashmir Hill, a technology reporter at The New York Times. In a ground-breaking investigation, she highlights that leading tech companies like Google have refrained from developing similar technology because, in the words of a past chairman of Google, it can be used “in a very bad way”.
Consider, for example, what would happen if the app were paired with augmented-reality glasses, allowing the wearer to identify everyone in view. The computer code used by Clearview makes that possible.
You don’t need to be a civil liberties or privacy campaigner to understand that this is an altogether new level of potential intrusion into the lives of innocent, law-abiding citizens.
Laws exist to safeguard citizens’ rights and limit the ability of people, agencies or governments to abuse technology like this.
Given how effective it is proving in identifying criminal perpetrators in North America, there will be calls for its use here. The least we should do is have an informed debate about it first.