INSIGHTS
B. Scott Swann,
IDEMIA NSS President & CEO
Face Recognition - A Critical Tool if Deployed Responsibly
Recently, San Francisco became the first major American city to ban local government agencies’ use of Face Recognition technology (FRT). Broader anti-police sentiment has contributed to a dozen U.S. cities passing similar bans, with roughly 20 others weighing comparable legislation. While most consumers have willingly traded digital privacy for convenience, many staunchly oppose any attempt by law enforcement agencies to use biometrics. It would be dismissive to call these concerns unfounded, but bad actors may use public pushback on FRT to their advantage.
FRT first entered the public consciousness in the early 2000s through films such as Minority Report and Enemy of the State. The dramatic on-screen portrayal of the technology’s capabilities has distorted public perception. Many now believe that cameras in public spaces record our every movement, and that video of past activities can be recovered and matched instantly. Understandably, the privacy community has become greatly concerned, especially where video surveillance is married to face recognition technology.
At the same time, however, consumers have embraced FRT. With Apple’s iPhone X, a face scan unlocks the smartphone in lieu of a passcode. The convenience, usability, and security benefits proved so popular that Apple has extended the feature to other product lines, and FRT is finding greater public acceptance across many consumer applications, including Facebook.
The U.S. Department of Homeland Security has started pilot projects that use FRT to track international air departures, an approach that shows real promise in addressing persistent immigration challenges. Yet government FRT applications like these have agitated the privacy community even though they use images provided with appropriate consent, e.g., passport and visa application photos. Data has shown that 98% of citizens willingly participate in these programs.
Unsurprisingly, law enforcement advocacy groups such as Stop Crime SF and industry groups such as the International Biometrics & Identity Association (IBIA) have responded with vehement opposition to San Francisco’s ban. “The section of the Ordinance banning the use of facial recognition is misguided and unfounded, because it failed to take into account all relevant factors in making its critical decision,” the IBIA stated in a press release. While both sides of the FRT debate have made valid points, the lack of substantive dialogue has obscured what many consider to be a fact: the responsible use of facial recognition technology can be a tremendous investigative tool and a national security safeguard.
Despite portrayals on television and in films, the vast majority of today’s FRT applications require trained human operators to make final identification determinations. Current FRT tools help expedite the review of video evidence, but they do not make final decisions regarding human identification; instead, systems generally provide a list of identification candidates to trained operators for further adjudication.
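As a minimal sketch of that candidate-list workflow — with hypothetical function names, toy embeddings, and an illustrative score threshold, not any vendor’s actual system — the software’s role ends at producing a ranked shortlist for a human examiner:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def candidate_list(probe, gallery, top_k=3, min_score=0.5):
    """Rank gallery entries by similarity to the probe embedding.

    Returns (subject_id, score) pairs for human adjudication only;
    the system never declares a match on its own.
    """
    scored = [(sid, cosine_similarity(probe, emb))
              for sid, emb in gallery.items()]
    # Drop weak candidates, then present the strongest first.
    scored = [(sid, s) for sid, s in scored if s >= min_score]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 2-D "embeddings" standing in for real face templates.
gallery = {"subject_A": [1.0, 0.0],
           "subject_B": [0.9, 0.1],
           "subject_C": [0.0, 1.0]}
shortlist = candidate_list([1.0, 0.05], gallery, top_k=2)
```

The trained examiner then reviews `shortlist` against the probe image; the threshold and list length are tunable policy choices, not identifications.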
For example, the New York Police Department has become increasingly vocal regarding the appropriate use of FRT. The NYPD’s Real Time Crime Center (RTCC) oversees the Face Identification Section where face recognition algorithms are applied to images and video obtained from crime scenes to help investigators identify potential suspects. Trained facial examiners review hundreds of candidate photos when attempting to identify an individual. Misidentification of individuals generally stems from poor image quality and untrained FRT examiners.
Those of us in the facial recognition industry are working hard to develop the most accurate and unbiased FRT tools possible, but there are challenges. The Privacy Act of 1974 prevents the federal government from sharing even anonymized extracts from operational systems. Without access to this data, FRT developers have limited data with which to train and test their algorithms. We hope that the U.S. government will eventually make anonymized data available to FRT manufacturers – of course, with appropriate data stewardship requirements. In addition, FRT training, qualification, and ongoing proficiency testing should be required for all FRT examiners working in law enforcement to ensure match results meet industry standards.
It would be irresponsible to ignore facial recognition’s critics and assert that the technology is flawless. It is not, and it should not be the sole means of identifying individuals without review by a properly trained and qualified FRT examiner. Ensuring national security and public safety is imperative, but not at the expense of civil liberties and basic human dignity. Now is the time for a deliberative discussion about how to responsibly deploy FRT in the United States without sacrificing life, liberty, and the pursuit of happiness.