International Biometrics and Identification Association Draft Privacy Best Practices for Commercial Biometrics

The following draft best practices for privacy in the commercial use of biometrics were released by the International Biometrics and Identification Association and posted to the National Telecommunications and Information Administration website on June 17, 2014.

IBIA Privacy Best Practice Recommendations for Commercial Biometric Use

This IBIA document has two parts:

IBIA best practice recommendations for commercial applications of biometric technology incorporate the following essential findings:

o Transparency and protection of data are IBIA’s fundamental privacy tenets.
o The biometric industry lacks any legal authority to impose rules of conduct on users of the technology; the industry can therefore only recommend best practices.
o Given the variety and number of existing and potential uses, it is not feasible or practical to develop specific, detailed practices.
o The general guidelines are intended to provide a roadmap that will enable users and customers to tailor appropriate privacy practices to their specific contexts.

IBIA's findings and perspective on privacy risks in the digital age form the basis for its recommended best practices. The key findings are:

o Privacy is vulnerable to abuse by many means and methods in our digital age.
o IBIA’s primary privacy policy is that all data should be protected.
o The level of protection should be consistent with the level of risk associated with the data's use and the consequences of abuse. It should also be tailored to the context of the specific biometric use.
o The easy availability online and offline of vast amounts of detailed personal information is the greatest privacy risk.
o The pervasive privacy risk in our society is the result of the advent of the digital age and big data; it is completely independent of biometric technology, let alone any single modality.
o As has been the case throughout human history, new methods of authenticating identity are necessary to augment existing conventions and meet current needs. Today, biometric technologies do this and, as a major privacy‐enhancing technology, preserve privacy at the same time.
o The commercial application of these best practices enhances and strengthens personal privacy protections.

These findings, coupled with the Fair Information Practice Principles of transparency and protection of data and the reality of numerous and diverse biometric applications, provide the framework for implementing biometric technologies in commercial applications. Specifically, the IBIA's Privacy Best Practice Recommendations for Commercial Biometric Use incorporate general guidelines for commercial use of biometrics, leaving it to implementers and operators to determine what is most appropriate given the application, the risk and consequence of abuse, the non-biometric data used, and the purpose of the undertaking.

IBIA Findings and Perspective on Privacy Risks in the Digital Age

Following is a list of the key privacy risks that should be considered:

o Privacy in our society is vulnerable to abuse by many means and methods.
o The primary privacy risk today is the ready availability online and offline of vast amounts of detailed personal information that needs to be protected. This is completely independent of biometrics (including facial recognition).
o The privacy of our personal data has been defined and limited by the rise of the digital age and big data, completely independent of biometrics.
o Anonymity and privacy are not synonymous terms. The former is forfeited if one chooses to live in society.
o Covert surveillance methods are already widely deployed, again independent of biometrics.
o Biometric identification is filling today's need for security and privacy in uses throughout the government and commercial/consumer sectors: in law enforcement and national security; in protecting health care and financial records; in preventing imposters in professional and competency testing; in computers, mobile devices, and home door locks and safes; in school lunch programs and in protecting child care facilities; and in making payments at retail establishments.
o In authentication applications such as physical access control and logical access control for computers and networks, biometrics are privacy-enhancing factors that provide both higher security and greater privacy.
o In both one-to-one verification and one-to-many identification applications, biometrics merely provide an identity result for the questions "are you who you claim to be?" or "who are you?" (see the sketch following this list). These results do not necessarily diminish privacy or profile a person. Instead, these applications can enhance system integrity through positive identification, provide a higher level of user convenience, and augment privacy.
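
To make the two questions concrete, the following minimal sketch (not any vendor's actual matcher) treats templates as fixed-length feature vectors, uses cosine similarity as the comparison score, and applies an arbitrary threshold; all of these choices are illustrative assumptions.

```python
import numpy as np

THRESHOLD = 0.85  # arbitrary similarity threshold, chosen only for illustration


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """One-to-one verification: 'are you who you claim to be?'"""
    return similarity(probe, enrolled) >= THRESHOLD


def identify(probe: np.ndarray, gallery: dict):
    """One-to-many identification: 'who are you?'
    Returns the identifier of the best match above the threshold, or None."""
    best_id, best_score = None, THRESHOLD
    for subject_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = subject_id, score
    return best_id
```

In either case the output is only a yes/no decision or an enrolled identifier; the template itself is just a vector of numbers and carries no name or other personal detail.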

Biometrics: A Privacy Enhancing Technology

One fact should not be lost in this discussion. As has always been the case, new methods of authenticating identity, like biometric identification, are necessary to augment existing conventions and meet current needs. Biometric technologies do this and, as a major privacy‐enhancing technology, preserve privacy at the same time.

The facial template itself, like other biometric templates, contains no personal information. Indeed, the protection of non-biometric personal information is enhanced through the use of biometric verification of identity, which limits data access to authorized persons only.

Biometrics can provide a unique tool to protect and enhance both identity security and privacy and to protect against fraud and identity theft, especially as a factor in identity verification. When your personal data are protected by access mechanisms that include one or more biometric factors, it becomes much more difficult for someone else to gain access to your personal data and applications because no one else has your unique biometric attributes. This enables legitimate access and reduces the risk that a person can steal your identity and, posing as you, collect benefits; board an airplane; get a job; gain access to your personal data, etc.
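
As a rough illustration of that point, the sketch below releases a personal record only when both a conventional credential and a biometric match succeed, so a stolen password alone is not enough. The record store, the PBKDF2 password check, and the cosine-similarity threshold are all assumptions made for the example, not a prescribed design.

```python
import hashlib
import numpy as np

THRESHOLD = 0.85  # arbitrary biometric similarity threshold, for illustration

# Hypothetical store of protected personal records, keyed by user ID.
RECORDS = {"alice": {"dob": "1980-01-01", "account": "XXXX-1234"}}


def password_match(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    """First factor: something you know (salted PBKDF2 comparison)."""
    return hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 100_000) == stored_hash


def biometric_match(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """Second factor: something you are (cosine similarity of templates)."""
    score = np.dot(probe, enrolled) / (np.linalg.norm(probe) * np.linalg.norm(enrolled))
    return float(score) >= THRESHOLD


def access_record(user_id: str, password: str, stored_hash: bytes, salt: bytes,
                  probe: np.ndarray, enrolled: np.ndarray):
    """Release a record only when both factors succeed; a stolen password
    alone cannot expose the data, and neither can the biometric alone."""
    if password_match(password, stored_hash, salt) and biometric_match(probe, enrolled):
        return RECORDS.get(user_id)
    return None
```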

Facial Recognition, Big Data, Anonymity, Surveillance, and Privacy

The perception that biometrics, such as facial recognition and the templates generated from facial images, must be regulated and strongly constrained because they will destroy anonymity and increase surveillance reflects risks that are more imagined than real, and that pale in comparison to the other electronic methods that can be exploited in the digital age in which we live.

1) There is no anonymity if we choose to live in society. Anonymity and privacy are not the same. Unless we disguise ourselves, our faces are public. In society, many services are dependent on user identity. Routinely, data are used to offer goods and services to us. Anonymity cannot be used as a means to avoid accountability. Those who choose to opt in to personal offers are simply acknowledging that they want the benefits they might gain by giving up anonymity. Privacy is a different matter and surrendering a right to anonymity is not tantamount to a surrender of privacy.

Contrary to public statements, simply having access to a facial image or its template does not destroy the anonymity of a person walking down the street. This does not directly reveal a name, Social Security number, or any other personal information.

It is true that tagged photos on a social media Web site could lead you to a name or address. However, that is only one of a hundred tools that can provide the very same data. With a name alone, one can find addresses and phone numbers in public phone directories and then undertake surveillance of a person seen on the street.

2) Surveillance is a product of the digital age, not biometrics like facial recognition. Surveillance is already a part of our daily life, thanks to the digital age and tremendous increases in computational power. Facial recognition does not increase its use.

There are two major classes of security surveillance technology in use today. One class is owned and operated by commercial businesses or individual organizations, and the other is owned or operated by local, state, or federal governments.

Commercial businesses and other non‐government organizations routinely have security cameras in and around their facilities for physical security and employee/visitor safety. Contrary to their portrayal on television programs like “24” and “Person of Interest,” among others, surveillance cameras owned by various businesses and organizations are NOT uniformly or even frequently interconnected and available to anyone with an Internet connection.

They occasionally (but rarely) run “video analytics” to automatically alert security personnel to inappropriate or unsafe activities, but almost never use automated facial recognition. If an event of interest (a crime) occurs, recordings of the event can be analyzed after the fact, and are sometimes made available to police as evidence, in accordance with the law. Images extracted from such surveillance recordings can contain faces, and these can sometimes be extracted (if the image has sufficient resolution) and converted into templates for comparison against a gallery of suspects using automated facial recognition. However, this latter process is also subject to legal rules and constraints.
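
A rough sketch of that post-event workflow follows. The template-extraction step is a placeholder for whatever face-recognition engine an operator actually uses, and the resolution cutoff and similarity threshold are assumed values; the points being illustrated are the resolution gate and the one-to-many comparison against a gallery of suspects.

```python
import numpy as np

MIN_FACE_PIXELS = 64    # assumed minimum face-crop size for a usable template
MATCH_THRESHOLD = 0.85  # arbitrary similarity threshold, for illustration


def extract_template(face_crop: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-recognition engine; a production system
    would run a trained model here. This stub just flattens and normalizes."""
    vec = face_crop.astype(float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


def search_gallery(face_crop: np.ndarray, gallery: dict):
    """Post-event, one-to-many comparison of a face extracted from recorded
    video against a gallery of suspect templates (assumed to be produced by
    the same extract_template step). Returns (suspect_id, score) for the best
    match above the threshold, or None if the crop is too small or nothing
    matches."""
    if min(face_crop.shape[:2]) < MIN_FACE_PIXELS:
        return None  # resolution too low to produce a reliable template
    probe = extract_template(face_crop)
    best_id, best_score = None, MATCH_THRESHOLD
    for suspect_id, template in gallery.items():
        score = float(np.dot(probe, template) /
                      (np.linalg.norm(probe) * np.linalg.norm(template)))
        if score >= best_score:
            best_id, best_score = suspect_id, score
    return (best_id, best_score) if best_id is not None else None
```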

There are some cities with a great number of centrally accessible surveillance cameras. These are of great utility in traffic management and in assessing emergent situations from a central operations center. However, the resolution of the video cameras is such that they cannot practically be used for continuous facial recognition. The possible application of facial recognition technology is therefore generally confined to post-event analysis, where resources can be focused only on captured video that is germane to the event under investigation, again in accordance with the law.

Under either class of common security surveillance video technology, a "face stalking" application that could be accessed and run across all the video cameras in a surveillance system is neither practical nor possible. Stalking, although thankfully infrequent, occurred before the advent of facial recognition technology and unfortunately will continue to occur, whether or not facial recognition becomes a factor. To this point, facial recognition technology has not been a factor in stalking, and it will likely continue to be a non-factor.
