Face Surveillance Was Always Flawed

The mugshot was invented in the 1880s. A century later, face surveillance has gone digital but remains as flawed as ever.

In 1879, Alphonse Bertillon joined the Parisian police department. At the time, people who committed multiple crimes were having a good run of it—branding had been banned and fingerprinting had not yet been widely adopted. Officers’ memories, as Richard Farebrother and Julian Champkin have detailed, were the main remedy against repeat offenders, who gave false names, claimed it was their first offense, and were able to avoid serious consequences. The only alternative for 19th-century law enforcement: sifting through stacks of notecards cataloging prior arrests, which were completed idiosyncratically and organized haphazardly.

Struck by the system’s inefficiency, Bertillon created a cunning new method to identify alleged offenders. He envisioned “giving every human being an identity, an individuality that is certain, durable, invariable, always recognizable, and which can be established with ease.” Photography had just begun to arrive on the scene, and Bertillon recognized its potential for cataloging people who had been arrested. He pioneered the idea of seating accused people in the same chair, set at a standard distance from the camera, and photographing them at standard angles. In so doing, Bertillon invented the mugshot.

More than a century later, face surveillance has gone digital. In January 2020, journalist Kashmir Hill revealed that a secretive start-up company called Clearview AI had scraped photographs from untold numbers of websites, collecting more than three billion pictures (now closer to ten billion) to fuel face recognition technology used by law enforcement. The public was horrified. But the massive consentless collection of face data for carceral purposes is a foundational approach to face surveillance that originated with the two men who developed it, first as an analog technique and later as an automated technology.

As Bertillon’s story makes clear, the earliest applications of face surveillance were also not rooted in consent. Nearly 150 years ago, using images of faces to identify people who’d committed crimes was done without permission of those portrayed. Law enforcement officers were the primary users. And that tradition continued when Woodrow Wilson “Woody” Bledsoe automated face surveillance in the 1960s.

These essential flaws cannot be corrected with greater accuracy or oversight. Despite more than a century of time for potential introspection about face surveillance, it seems that law enforcement has not truly grappled with the collateral consequence of its expansion: the elimination of privacy for everyone. Instead, more law enforcement agencies are adopting face surveillance as you read this article. Face surveillance began as a carceral technology, and that application continues to ensure that the technology will be available, attractive, and financially advantageous to law enforcement and the companies who serve it.

Bertillon and Bledsoe’s work assumed that law enforcement personnel should have the power to surveil the faces of the people they were supposed to protect and serve. Today, law enforcement continues to buy into that belief. But Bertillon’s and Bledsoe’s stories reveal that face surveillance was flawed from the start and illustrate why those flaws persist.

Bertillon believed that “every measurement slowly reveals the workings of the criminal. Careful observation and patience will reveal the truth.” His approach, as Kelly Gates details, was not actually intended to reveal the inner workings of the individual—unlike physiognomy, an old, racist pseudoscience that postulated that certain facial features predicted criminality. Bertillon’s use of scientific measurement seemed a stunning development. He coupled his mugshots with intrusive measurements of people who had been arrested, such as their arm length and head circumference, to create measurements collectively known as a Bertillonage. He was assisted by Amelie Notar, a woman with impeccable handwriting whom he met crossing the road and who later became his wife.

It seemed improbable that an arrested person would share the same Bertillonage with any other person (though that assumption was ultimately proven wrong). Bertillon’s mathematical approach to identifying people who had allegedly committed crimes was popular and effective. Ultimately, Bertillonage identified more than 3,500 repeat offenders in its inaugural decade.

Bertillon himself became quite famous. Sir Arthur Conan Doyle referred to his own famed detective, Sherlock Holmes, as the “second highest expert in Europe”—after Bertillon, of course. But his methods proved fallible.

In 1903, a Black man named Will West arrived at the U.S. Penitentiary in Leavenworth, Kansas. As was the standard procedure, identification clerks took his Bertillonage and matched the measurements with one William West. Police expressed no surprise that Will West was a repeat offender, nor that he denied being one. But upon investigation, Will West was discovered to be an entirely different man serving a life sentence for murder in the same penitentiary. The two men had identical measurements, something Bertillon and other police had believed impossible.

The fallibility of Bertillonage is among the least problematic aspects of Bertillon’s legacy. When he was not photographing people who had been arrested, Bertillon was writing a book called Ethnographie Moderne: Les Races Sauvages—translated, Modern Ethnography: The Savage Races. Bertillon also concocted a handwriting test used in the Dreyfus Affair, a now infamous case of government anti-Semitism. His participation in the trial spurred his brother, married to a Jewish woman, not to speak to him for years.

By the turn of the century, Bertillon’s “science” had been discredited. But the mugshot lived on. And it fueled generations of corporations, governments, and researchers curious about mathematically measuring the faces of people accused of committing crimes.

In 1953, Woody Bledsoe, a mathematician with a newly minted doctorate, turned down several offers to teach at elite institutions in order to join the Sandia Corporation. Along with his colleague Iben Browning, Bledsoe began developing pattern recognition technology that could identify printed and handwritten letters—a precursor of the algorithms used by the modern United States Postal Service to decipher addresses. Not long after, Bledsoe, Browning, and another colleague went on to start their own company focused on contracts with the U.S. Department of Defense and other intelligence agencies. As Bledsoe put it, the organization “was one of the first AI groups before the term ‘Artificial Intelligence’ came into use.” And there was a particular application of AI that intrigued Bledsoe: face recognition.

In the mid-’60s, as Shaun Raviv details in his stunning telling of Bledsoe’s life and work, Bledsoe took funding from a Central Intelligence Agency shell company and, with his colleague Helen Chan, turned to the challenge of teaching a computer to recognize human faces. Bledsoe’s son and a friend sifted through a pile of photographs of faces and took contemporary versions of Bertillonage, documenting 22 measurements for each face. Chan wrote an algorithm that could interpret the data. Their man-machine approach matched every set of measurements with the correct photograph. Bertillonage worked after all.

Funding remained inconsistent, however, and Bledsoe left the company he cofounded for one of those elite teaching jobs he’d turned down years before. But he briefly returned to face recognition research for a singular purpose: to assist law enforcement in sifting through mugshots to match to new photographs. Two years prior, Bledsoe had written to the Advanced Research Projects Agency (ARPA) that “there exists a very large number of anthropological measurements which have been made on people throughout the world from a variety of racial and environmental backgrounds. This extensive and valuable store of data, collected over the years at considerable expense and effort, has not been properly exploited.” Raviv documents how Bledsoe used the mugshots of 400 adult white men to annotate 46 measurements for each face, resizing the images to standardize the scale. (Bledsoe’s research mentions no women or people of color.) The computer demolished humans at the task. The fastest person took six hours to match subsets of 100 faces; the computer took roughly three minutes. Bledsoe’s methodology revolutionized face recognition, but, “for government reasons,” a colleague told Raviv, Bledsoe’s groundbreaking research was never published.

Bledsoe’s belief in the promise of face recognition cannot be overstated. In 1985, he opened an American Association for Artificial Intelligence presidential address with his own version of an “I Have a Dream” speech, stating, “I had a dream, a daydream, if you will … I dreamed of a special kind of computer, which had eyes and ears and arms and legs, in addition to its ‘brain.’ … My computer friend had the ability to recognize faces.” In law enforcement agencies across the globe, parts of Bledsoe’s dream have become a reality.



Face surveillance has come a long way since the days of Bertillon and Bledsoe, but their work lives on. The pair pioneered and popularized mugshots as sources of data. Mugshots continue to be captured without consent and aren’t limited to people convicted of crimes: these photos are taken at the moment of arrest, not conviction, and the images remain easily accessible on the Internet even if the individual is never charged. (Mugshots also pose no real risk of copyright enforcement, a hurdle for many types of training data.)

The use of mugshots has expanded beyond law enforcement. Corporations use them as training data—Clearview AI announced its mission to acquire access to every mugshot from the last fifteen years. The government curates a data set of mugshots for training “automated mugshot recognition systems.” Researchers rely on mugshots for face recognition scholarship. Even the American Civil Liberties Union used mugshots to reveal demographic biases in Amazon Rekognition’s face recognition technology, demonstrating that the algorithm falsely matched 28 members of Congress to mugshots of other people.

Mugshots paved the way for normalizing other types of consentless collection of faces, such as the systemic scraping by Clearview AI and even companies like Microsoft and IBM. Bertillon’s and Bledsoe’s entitlement to using photographs of people’s faces for surveillance continues to be baked into the technology.

Face surveillance is also still used to surveil and imprison. Activist and poet Malkia Devich-Cyril warned that “facial-recognition [will] be used to supercharge police abuses of power and worsen racial discrimination.” They have been proven right. In 2019, Georgetown’s Center on Privacy & Technology discovered that Immigration and Customs Enforcement (ICE) worked with state officials to run face recognition searches using driver’s license databases in states that provide licenses to undocumented immigrants. Undoubtedly, these immigrants would not have participated in the program if they had known the state would betray their privacy to ICE. In 2020, the New York City Police Department revealed that it used face recognition to surveil Black Lives Matter activist Derrick Ingram, despite public assurances that the department “does not use facial recognition technology to monitor and identify people in crowds or political rallies.” And to date, three Black men have been misidentified by face recognition algorithms, arrested, and imprisoned before detectives realized that “the computer got it wrong.”

Since the days of Bertillon and Bledsoe, we’ve had ample opportunity to interrogate whether it is ethical to use photographs of people without their consent to train face surveillance systems used by law enforcement, and whether law enforcement should be permitted to match names to faces in a way that effectively destroys privacy. Yet even as these fundamental flaws of face surveillance persist in contemporary technology, the answer to both questions has yet to be a resounding “no.” We remain stuck asking the same questions, and only recently offering answers that reflect resistance. As it stands, the most reliable applications of face surveillance are eroding privacy and entrenching power. And those final flaws were present all along.


This article was commissioned by Mona Sloane.

Featured Image: Anthropometric data sheet of Alphonse Bertillon / Service Regional d'Identité Judiciaire, Préfecture de Police, Paris. Wikimedia. (CC0 1.0)