Facial Recognition: In recent years, facial recognition technology has advanced considerably and, as we are discovering, so have the privacy and security issues that threaten its use. The COVID-19 pandemic has also made some of its operators anxious, since most government authorities now require people to wear masks. As part of the BlueLeaks breach, a May 2020 memo from the Department of Homeland Security revealed concerns that “face recognition systems used to support security operations in public facilities will be less relevant” while masks are worn.
This is one example of the escalating tension over facial recognition technology, whose use has surged over the last few years. As both the private and public sectors deploy it, experts are pushing back, raising concerns about its impact on human rights.
Facial recognition isn’t a recent technology, but the barriers to its use have fallen over the last 10 years. More powerful computers have made it easier to train deep learning algorithms and to improve their accuracy.
Systems that match faces against a single record for verification purposes – like Apple’s Face ID or Microsoft’s Windows Hello – aren’t what’s drawing concern. It’s the one-to-many matching systems, which claim to pick one face out of millions, that worry activists.
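To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the two matching modes. It assumes faces have already been reduced to embedding vectors and uses an invented similarity threshold; real systems rely on learned embeddings and carefully calibrated thresholds, so treat this only as a sketch of the concept.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """One-to-one verification (the Face ID / Windows Hello pattern):
    compare the probe against a single enrolled template."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many identification: search an entire gallery and return
    the best-scoring identity, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The privacy asymmetry is visible in the signatures: verification needs only the one template its owner enrolled, while identification requires a gallery of many people’s biometric data, most of whom never asked to be searched.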
Facial Recognition: Productive Uses
There have been some productive uses for this technology. Child protection charity Thorn worked with Amazon’s Rekognition system to create Spotlight, a tool that uses facial recognition and text analysis to help track down trafficked children. Investigators have used the tool to match images of missing children on Facebook to online sex ads, recovering children who had been sold online.
In other cases, authorities have used facial recognition to identify fraudulent behavior. In one example, Kansas officials identified the ringleader of a forced labor trafficking ring that had brought dozens of immigrants into the country illegally under false pretenses.
Despite these successes, privacy groups worry about the potential effects of facial recognition on human rights. “Like biological or nuclear weapons, facial recognition poses such a profound threat to the future of humanity and our fundamental rights that any potential benefits are far outweighed by the inevitable harms,” argues Caitlin Seeley George, campaign director at privacy advocacy group Fight for the Future.
Concerns span several categories. One of the biggest involves privacy. “For facial recognition to work, people need to hand over their biometric information, which puts everyone in the system in danger of potential abuse and security breaches,” Seeley George says.
Like biological or nuclear weapons, facial recognition poses such a profound threat to the future of humanity and our fundamental rights
Facial Recognition: Danger Of Over-Reach
The privacy concerns center on situations of over-reach, where facial recognition systems are used without the subject’s consent. Clearview.ai has drawn attention from privacy regulators after it reportedly scraped billions of facial images from social media sites without permission and folded them into its facial recognition database.
A breach of its client list in February 2020 revealed that it had sold access to a large number of organizations, often to individuals without proper oversight. The Georgetown Law Center on Privacy & Technology also found that the FBI and ICE had been mining driver’s license photos for facial recognition searches, allegedly without the license holders’ consent.
Government over-reach is bad enough, but the private sector presents more worries. “The federal government is actually bound by regulations that make it slightly more transparent,” says Brenda Leong, senior counsel at the Future of Privacy Forum (FPF).
However, the use of the technology in the private sector is more opaque, which can make people nervous. “I think people have that feeling just about the technology in general because it’s also moving very quickly in commercial applications,” she says.
The statistics bear this out. In a survey of nearly 500 consumers, software recommendation company GetApp found that only 32% were comfortable having their face scanned by a private company.
The security breaches that worry Seeley George are already happening. In August 2019, the owner of biometrics system BioStar 2 exposed over 27.8 million records, including over one million fingerprint records and facial recognition images, in a misconfigured Elasticsearch database.
Facial Recognition: Reliability Issues
Another worry for facial recognition skeptics concerns the accuracy of the technology. False positive rates (matching the wrong person) and false negative rates (failing to match the right person) both have potentially disastrous outcomes.
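To show how these two error rates are defined, here is a small Python sketch that computes them from a batch of match decisions. The function name and the toy data are invented for this example; real evaluations such as NIST’s run over millions of image pairs.

```python
def error_rates(predictions, ground_truth):
    """Compute false positive and false negative rates.

    predictions  -- True where the system declared a match
    ground_truth -- True where the pair really is the same person
    """
    fp = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
    fn = sum(1 for p, t in zip(predictions, ground_truth) if not p and t)
    negatives = sum(1 for t in ground_truth if not t)
    positives = sum(1 for t in ground_truth if t)
    # False positive rate: wrong matches as a share of true non-matches.
    fpr = fp / negatives if negatives else 0.0
    # False negative rate: missed matches as a share of true matches.
    fnr = fn / positives if positives else 0.0
    return fpr, fnr
```

The two rates trade off against each other: lowering the match threshold catches more true matches (fewer false negatives) at the cost of more wrong matches (more false positives), which is why a single “accuracy” figure can hide the failures that matter.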
Opponents of the technology point to a study by the National Institute of Standards and Technology (NIST), which found false positive rates up to 100 times greater for Asian and African American faces than for Caucasian faces. This doesn’t impress Jake Parker, senior director of government relations for the Security Industry Association (SIA), an industry group representing electronic and physical security companies.
“The lowest-performing (algorithms) received a lot of media attention, but as far as US government programs that use the technology go, we’re actually already using the highest-performing algorithms, which have literally no difference across demographics,” he says. “Many of the lower-performing algorithms are experimental in nature, so not all of them are available in products sold.”
Nevertheless, other studies have also uncovered problems with commercial products. MIT computer scientist Joy Adowaa Buolamwini discovered gender bias in facial recognition systems from IBM, Microsoft and Amazon. Error rates soared from less than 1% for lighter-skinned men to 35% for darker-skinned women, she found.
Facial Recognition: Best Practices
These concerns have provoked a vigorous reaction against facial recognition among both regulators and activist groups, along with debates around what constitutes responsible use of the technology. The Electronic Frontier Foundation (EFF) has called for a moratorium on any use of face surveillance using federal funds.
In 2018, the EFF released a set of seven principles for facial recognition technology in consumer applications. These were: consent; respect for the context in which the technology is used; transparency over what the data are used for; proper data security; privacy by design; proper access to the data; and accountability. These closely reflect the privacy principles we’ve seen in other broader-ranging laws like the GDPR and the California Consumer Privacy Act.
The technology industry has also addressed concerns by suspending or moderating its use of the technology. Microsoft said in June that it will not sell facial recognition to police departments until there is a federal law to regulate it, following similar moves by Amazon (which committed in June not to sell its Rekognition system to police for a year) and IBM, which pulled out of facial recognition research altogether in a letter to Congress a few days later.
“The danger of addressing facial recognition piecemeal across the country is that it exposes vulnerable populations to corporate and police abuse”
Facial Recognition: Municipal Bans
Faced with a legislative vacuum at the federal level, cities have taken to banning the technology. San Francisco kicked it off with a ban in May 2019, followed by others including Boston in June 2020. At the state level, Washington passed legislation in March 2020 permitting the technology’s use, with restrictions.
Jake Parker points out that all this activity has centered on a small number of flashpoints in California and Massachusetts. Nevertheless, Seeley George worries that it creates an uneven legislative patchwork that leaves many people at risk.
“The danger of addressing facial recognition piecemeal across the country is that it exposes vulnerable populations to corporate and police abuse of their biometric data, in places where officials that could regulate the technology may either be ignorant of its harms or complicit with bad actors who prioritize control of citizens over the rights of citizens,” she says.
Facial Recognition: Broader Regulation
Lawmakers are already working on federal legislation to address the issue more broadly. In late June, Democratic senators introduced the Facial Recognition and Biometric Technology Moratorium Act of 2020. It would ban the use of facial recognition technology and other biometric technologies by federal entities and would also make some public safety grants conditional on suspending biometric surveillance.
Some privacy groups are in full support, but the SIA is not. “That would basically halt some of the longest-standing and most effective uses of the technology in ways that really benefit citizens,” Parker says.
The SIA supports a more permissive approach to regulation, identifying and banning specific uses of the technology deemed irresponsible while allowing everything else. The Act and its supporters adopt a more restrictive approach, forcing a broader halt until lawmakers have had the chance to examine things.
There are other potential ways to regulate the technology. The Algorithmic Justice League (AJL), which Buolamwini founded in 2016, has called for an FDA-style organization that would regulate the use of facial recognition technology.
The danger of focusing too narrowly on this biometric technology is that it could crowd out regulatory instruments covering broader issues like AI, warns Leong. “The FPF definitely prefers having a broader general privacy law as opposed to a technology-specific privacy law,” she says.
It makes more sense to address the broader issue of the data we’re trying to protect, or whose use we’re trying to limit, as opposed to just what comes in through one technology channel
The worry is that facial recognition could help authorities erode human rights. In China, for example, the authorities have introduced mandatory facial recognition for mobile-phone users and now routinely scan people without consent on public transport. China is also selling its facial surveillance technology around the world. Oh, and it can recognize faces under masks, too.
As activists worry about the use of the technology to undermine human rights, Leong worries more broadly about maintaining a society that prioritizes those rights in the first place.
“We weren’t like China before the introduction of facial recognition technology, and facial recognition alone can’t make us like China,” she says. “The only way we’re going to end up in a terrible place is if we undermine those rights, and facial recognition alone can’t do that.”
No technology can threaten liberty on its own, but a dystopian mix of authoritarian leadership and inadequate oversight could allow or even encourage its abuse.