Above: a company claims its face recognition system can identify a face amidst 36 million images in mere seconds.
You are walking down the street minding your own business when a police officer taps you on the shoulder from behind. “Excuse me, ma’am,” he says, “Could I see some identification?” You know your rights and you say no. “You don’t really have a choice, ma’am. We have a positive identification of you from a camera that says you are Jill Stokes. Are you Ms. Stokes?” You nod, confused. “You owe the city $500 in parking tickets, ma’am.”
That’s a benign but entirely plausible example of how the eradication of anonymity in public via unchecked identification and surveillance technologies will fundamentally change our lives. For even more troubling examples, think about how easily a government agency (or corrupt cop) could identify and track a political activist, whistleblower, or domestic violence survivor in the once-anonymous, bustling streets of our now surveillance-camera-laden cities. The agent or cop doesn't even need to be in the same city as his target. He could be reclining at home with his iPad hundreds of miles away, watching you move through your daily routine as you pass by hundreds of cameras, the eyes that never blink.
Just imagine it.
Imagine a world in which every face is catalogued and indexed in networked government databases, available to check in real time against the never-ending stream of images captured by the ubiquitous surveillance cameras that dot our urban (and even rural) landscapes like manhole covers. That world is coming, fast. Unless, of course, we stop it.
Why stop it?
Sure, cop shows on television have taught us to think of face recognition as a magic bullet that catches the bad guy every time. But those shows leave out a crucial element of the technology's success. Face recognition can only work like it does on CSI if every single one of our faces is in a database accessible to the FBI or to local police. After all, plenty of people are first-time criminals. So to catch every criminal using these tools, you'd need a database with every face in it -- whether or not someone has an arrest record, no matter what that record looks like. For many of us, this kind of “guilty until proven innocent,” “everyone is a suspect” policing methodology seems Orwellian and distasteful. It’s hard to deny that it would reduce some crime, but do we want to live in such a world?
Unsurprisingly, companies that profit from government acquisitions of these advanced tools have a habit of saying the same thing the police and the FBI say when the subject of Big Brother arises. The rejoinder to “But what about my privacy?” is usually one of two things: either that participation in the program is voluntary (in the corporate context) or that it is only deployed to track, target and otherwise monitor “those other people” -- criminals, sex offenders, terrorists, gang members and the like.
One company set to profit handsomely from expanded face recognition efforts in San Diego, FaceFirst, told a local news outlet there that ordinary people shouldn’t be worried about the government’s deployment of the powerful tool. Why? The government’s databases “usually are filled with criminals and suspects,” Channel 10 News wrote. That may be largely true for now (though it isn’t at all certain), but if the FBI gets what it wants, it won’t be the case for long.
A face recognition program is only as good as its face database. After all, the whole point is to identify people. So the government needs millions of faces to check images against -- whether those images come from surveillance drones, CCTV in public transportation or on city streets, surreptitiously snapped photographs of political protesters, or your Facebook profile. But a database alone won’t do anything for police or investigators; they also need face recognition algorithms to positively match the face in one photo against the face in another.
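The mechanics are simpler than the CSI version suggests. Modern systems reduce each face photo to a numeric "embedding" vector, then search the database for the enrolled vector closest to the probe. Here is a minimal sketch of that search step in Python; the names, the 128-dimension size, the threshold, and the random stand-in vectors are all illustrative assumptions, not any vendor's actual implementation, and a real system would produce the embeddings with a trained face recognition model rather than a random generator.

```python
import numpy as np

def identify(probe, database, threshold=1.0):
    """Return the name of the closest enrolled face, or None.

    probe: 1-D embedding vector for the unknown face.
    database: dict mapping name -> enrolled embedding vector.
    A match is declared only if the Euclidean distance to the
    nearest enrolled face falls below the threshold; otherwise
    the face stays unidentified.
    """
    best_name, best_dist = None, float("inf")
    for name, enrolled in database.items():
        dist = np.linalg.norm(probe - enrolled)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Toy database: in a real system each vector would come from a
# model applied to a driver's-license or mug-shot photo.
rng = np.random.default_rng(0)
db = {name: rng.normal(size=128) for name in ["stokes", "doe", "roe"]}

# A probe of "stokes" with slight noise (different angle, lighting).
probe = db["stokes"] + rng.normal(scale=0.02, size=128)
print(identify(probe, db))  # prints "stokes"
```

The threshold is the whole game: set it loose and strangers get flagged as matches; set it tight and disguises, bad lighting, or odd angles defeat the search.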
Fortunately for law enforcement, the feds and the locals have each other's backs. Where the federal government has resources to expend on the underlying technology (the algorithms), the states have lots of data. And they are starting to share in a very big, albeit quiet, way. State and local officials are increasingly granting the federal government access to their registry of motor vehicles databases and, thanks to an FBI program announced to the public in August 2012, deploying their own powerful face recognition tools.
On August 16, 2012, Nextgov.com wrote:
Within weeks, police nationwide should be able to obtain free software for matching photos of unidentified suspects against the FBI’s biometric database of 12 million mug shots, according to an Office of the Director of National Intelligence agency.
The FBI and Homeland Security Department are experimenting with facial recognition to determine the real names of illegal immigrants, identify persons of interest in candid photos, and fulfill other law enforcement responsibilities. To make that happen, however, law enforcement agencies at every level of government must share images with compatible technology that they can afford, former FBI officials say.
So, the bureau is offering agencies some of the equipment at no cost.
The application accepts scanned images, photos from digital cameras or pictures saved as digital files. The tool then translates each copy into a new file that can be matched against images in NGI, or deposited there for others to search.
The quid pro quo for the feds' generosity may not explicitly go beyond the FBI being able to hold on to the images states and localities send to search against the big federal database. But there's another reason the FBI may be feeling generous with state and local law enforcement: states control the holy grail of face databases at their registries of motor vehicles.
The federal government appears to have already laid the groundwork for the face recognition revolution at our friendly motor vehicle registration departments.
Thanks to federal grant programs from the Department of Transportation, at least 35 states have active face recognition programs at their registries of motor vehicles. The states say they use the software to detect fraud and abuse, for example to catch someone applying for a driver’s license under a false name when they already have an ID under their real name. But numerous states are increasingly using the face recognition programs for law enforcement purposes, too. And a new FBI program, disclosed to the public in documents submitted to the Senate Judiciary Committee during the July 2012 hearing on face recognition, makes good use of that federally funded face recognition technology at motor vehicle registries nationwide for purposes quite apart from fraud detection.
From the FBI’s written testimony:
"Project Facemask" was initiated in 2007 as a collaborative effort by the FBI and the North Carolina (NC) Department of Motor Vehicles (DMV) to use the NC DMV's facial recognition program as a means of locating fugitives and missing persons. [snip] Upon the successful conclusion of the pilot in 2010, the capabilities were evaluated to assess the operational value of creating an FBI facial searching service. Based on this evaluation, the FBI created a Facial Analysis Comparison and Evaluation (FACE) Services Unit. The FACE Services Unit has begun establishing Memoranda of Understanding (MOUs) with the DMVs of states whose laws allow them to share DMV information for law enforcement purposes, as permitted by Federal law regarding the use of state motor vehicle records. [snip] The FACE team will compare the facial images of subjects of active FBI investigations with images housed in select FBI databases and other databases to which the FBI has access for law enforcement purposes. In addition, for states with which we have established MOUs, FBI fugitives' and subjects' identities will also be queried in the DMV records, with the results returned to the FACE team for comparison analysis.
The FBI is working with a number of states to bolster the locals’ face recognition capabilities, as well:
In February 2012, the state of Michigan successfully completed an end-to-end Facial Recognition Pilot transaction and is currently submitting facial recognition searches to CJIS. MOUs have also been executed with Hawaii and Maryland, and South Carolina, Ohio, and New Mexico are engaged in the MOU review process for Facial Recognition Pilot participation. Kansas, Arizona, Tennessee, Nebraska, and Missouri are also interested in Facial Recognition Pilot participation.
Suffice it to say that the feds have their hands full with projects working with state and local governments to augment their respective face recognition capabilities. Additionally, there's quite a bit of regional activity happening at the state level, with fusion centers at the center of localized information sharing efforts. There are plenty of examples of such regional programs; I'll explore some of them in later blog posts.
Zooming out to the bigger picture we find the mother of all biometrics programs, the FBI’s Next Generation Identification (NGI) -- the biometrics databank to beat all biometrics databanks. It will contain fingerprints, iris scans, face recognition-ready photographs, palm prints, DNA and innumerable other biometric data points on tens if not hundreds of millions of people, both US citizens and foreigners.
Where does the FBI get this data? It comes from the Department of Defense, the Department of Homeland Security, the State Department, foreign nations, state and local police who arrest and book suspects, and from state and local civil authorities, who capture fingerprints (and soon photographs?) for civil license applications and background checks.
EFF’s Jennifer Lynch, the civil liberties community’s foremost expert on the nascent NGI and biometric technologies, testified at a 2012 Senate hearing on face recognition about the government’s plans for its all-encompassing databases:
The biggest and perhaps most controversial change brought about by NGI will be the addition of face-recognition ready photographs. The FBI has already started collecting such photographs through a pilot program with a handful of states. Unlike traditional mug shots, the new NGI photos may be taken from any angle and may include close-ups of scars, marks and tattoos. They may come from public and private sources, including from private security cameras, and may or may not be linked to a specific person’s record (for example, NGI may include crowd photos in which many subjects may not be identified). NGI will allow law enforcement, correctional facilities, and criminal justice agencies at the local, state, federal, and international level to submit and access photos, and will allow them to submit photos in bulk.
The FBI has stated that a future goal of NGI is to allow law-enforcement agencies to identify subjects in “public datasets,” which could include publicly available photographs, such as those posted on Facebook or elsewhere on the Internet. Although a 2008 FBI Privacy Impact Assessment (PIA) stated that the NGI/IAFIS photo database does not collect information from “commercial data aggregators,” the PIA acknowledged this information could be collected and added to the database by other NGI users such as state and local law-enforcement agencies. The FBI has also stated that it hopes to be able to use NGI to track people as they move from one location to another.
When arguing against face recognition or for strict limitations on its use, we privacy advocates usually deploy arguments about efficacy first and then about values. I prefer the latter argument, both because the former is quickly receding into the ether (along with such relics as the Compact Disc and the flip phone) and because principles are more important. (Nevertheless, the efficacy argument still holds water, practically speaking. As late as 2011, the Massachusetts registry of motor vehicles face recognition program encountered at least 1,000 false reads per year, causing substantial inconvenience to people accused of fraud by an imperfect computer program, and mucking up the system for everyone else.)
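The efficacy problem is partly just arithmetic: even a very accurate matcher generates false hits when every probe is compared against millions of enrolled faces. The back-of-envelope calculation below uses an illustrative, assumed per-comparison false match rate (not a published vendor figure) against a database the size of the 36-million-image claim cited at the top of this post.

```python
# Back-of-envelope: expected false hits per search.
# The false match rate here is an illustrative assumption,
# not a measured or vendor-published figure.
false_match_rate = 1e-6      # one false match per million comparisons (assumed)
database_size = 36_000_000   # enrolled faces (the vendor claim cited above)

expected_false_hits = false_match_rate * database_size
print(expected_false_hits)   # 36.0 false candidates per single search
```

In other words, under these assumptions every single probe photo surfaces dozens of innocent "candidates" for a human analyst to wade through, and dragnet use multiplies that by every face passing every camera.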
The values arguments against omnipresent face recognition technology are clear and important, and we’ve got to start talking about them more seriously -- and about the real threats that various identification and surveillance technologies pose to our anonymity and our liberty. We theoretically should have rights, after all.
For one thing, we are guaranteed a constitutional right to anonymous speech and association in the United States, but the unchecked spread of these technologies without attendant privacy laws limiting their power makes it increasingly unlikely that we’ll be able to meaningfully exercise these rights in the future. Once face recognition tools infect the backbone systems of the surveillance cameras that dot our urban landscapes, anonymously speaking our minds in public may very well become impossible. (Arguably, it already is: all a police officer has to do to identify anyone in a crowd is snap a digital photograph and send it off to the registry of motor vehicles or to the FBI for a face check.)
The extreme contractions in private or anonymized space we are currently witnessing in what computer geeks call "meatspace" are particularly meaningful given the landscape in our digital world, the internet. As you likely know, our options for anonymous speech online don’t look so hot, either.
In 2012, the ACLU of Massachusetts tried to stop a state prosecutor from obtaining the Twitter user data of our client, a political activist. We failed because the judge said that our client lacked standing to defend his Twitter information; only the company could challenge the subpoena since it legally owned the users’ data, he ruled. Nonetheless the judge appeared to be troubled by the larger set of issues raised in the case, namely the fact that our client wasn’t able to speak anonymously online, through a widely used communication platform like Twitter. He asked the state prosecutor how anyone in the 21st century might speak anonymously under such circumstances. The prosecutor replied in open court that if our client wanted to speak without being identified, he could put on a mask and pass out flyers in Dewey Square -- then the site of Occupy Boston.
We now know that the FBI was monitoring the Occupy movement, and any activist could tell you that wearing a mask makes you an obvious target of government interest. With access to face recognition-enabled binoculars and the Massachusetts registry of motor vehicles database, it wouldn’t be hard for either the feds or the local police to identify someone simply for exercising their First Amendment rights at Dewey Square. You can’t wear a mask forever, after all.
So how can we speak anonymously when courts and prosecutors tell us we do not own our speech online, and when the state saturates our cities with facial recognition-enabled surveillance cameras?
Putting on a mask to go about our daily lives or to attend a protest isn’t something many people are likely to do. But it may very well become necessary if we want to avoid being cataloged by an aggressively metastasizing surveillance state -- whether we are political activists or not.
A Terminator-like future is approaching, and fast. These technologies threaten us because absolute power corrupts. Besides, sometimes we all need a place to hide from prying eyes -- not because we've done something illegal or hurt someone else, but simply because we are human and we want to be left alone.
Luckily, there’s still time for us to act before it’s too late. Find out how you can get involved at the local level to learn about what kinds of technologies your local and state police are using, and how to push back if you don’t like what you uncover.