The Modern Bertillon

Biometric factors are not suitable for authentication.

This is a bit of a radical claim - after all, biometrics are traditionally counted among the three factors of authentication: "a thing you know", "a thing you have", or "a thing you are" - but biometrics fail on several very important fronts when it comes to using them in actual practice.

What is the purpose of authentication?

This becomes a bit of a philosophical question - what, exactly, constitutes authentication, and what purpose does it have? And how does it interact with identification - the other typical goal in credentialing?

Most schemes are intended to ensure that access to a resource (whether it is an account, a system, a database, or even a physical location) is restricted to persons who are authorized to access it, and who specifically desire to take those actions or access that resource.

This introduces a third key term into the discussion - authorization. A user is authorized to access a resource by the owner of that resource; they must be identified as that user, and the request for access must be authenticated.

A user may also authorize certain actions by a resource when they have been authenticated as being able to perform those actions.

(Notably, the owner themselves must be identified in order to access resources of this type - the owner constitutes a kind of user, after all.)
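To make the three terms concrete, here is a minimal sketch of how the steps separate. The account names, data structures, and function names are hypothetical, and a real system would add salting, rate limiting, and auditing.

    # Minimal sketch separating identification, authentication, and
    # authorization.  All names and structures here are hypothetical.
    import hashlib
    import hmac

    USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
    GRANTS = {("alice", "payroll-db"): {"read"}}

    def identify(username):
        # Identification: which account does this request claim to be?
        return username if username in USERS else None

    def authenticate(username, password):
        # Authentication: does the requester prove they control that account?
        provided = hashlib.sha256(password.encode()).hexdigest()
        return hmac.compare_digest(USERS[username], provided)

    def authorize(username, resource, action):
        # Authorization: has the resource owner granted this account the action?
        return action in GRANTS.get((username, resource), set())

    account = identify("alice")
    if account and authenticate(account, "correct horse"):
        print(authorize(account, "payroll-db", "read"))  # True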

Proper authentication requires consent.

One key aspect of authentication is that it must stem from a purposeful action on the part of the user - simply existing is not enough to access a resource or activate a process. The user must make a conscious choice to access the resource, and the means by which they indicate that choice to the system governing access is by authenticating the request.

This is the critical difference between biometrics and other forms of authentication: as a biometric measurement is inherent to the user, there is no conscious choice required by the user to be identified to the system with that credential. Instead of consciously entering a password, or providing purposeful access to some form of token, the mere presence of the identified user is presumed by the system designer to indicate a specific desire for authentication to the system, and an authorization for any subsequent actions.

This inability to withhold consent when the user does not want a biometric factor to be used is a critical weakness in every such scheme. Biometrics are inherent to the user; anyone who is able to place the user under duress - whether a criminal or a police force - can then identify the user to any system the user is registered with, and can make use of that user's level of access to the system.

These issues have been thrown into the public eye by certain court cases where persons have been judicially compelled to provide the biometric factors that will authorize access to specific resources.

There must be a capability for change.

When a credential is compromised, the standard practice is to invalidate any credentials that were used or stored in the compromised system, and to generate new credentials to access the system.
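For a password or key, that rotation is a routine operation. A hypothetical sketch of the idea (a real system would also expire sessions and audit the compromise):

    # Sketch of the standard response to a compromised credential: revoke
    # the old value and issue a fresh one.  Purely illustrative names.
    import secrets

    credentials = {"alice": secrets.token_hex(32)}
    revoked = set()

    def rotate(account):
        # Invalidate the compromised credential and generate a replacement.
        revoked.add(credentials[account])
        credentials[account] = secrets.token_hex(32)
        return credentials[account]

    fresh = rotate("alice")  # the leaked value is now useless
    # There is no analogous rotate() for a fingerprint or an iris.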

Unfortunately, biometrics, as they are inherent to the user (and supposedly identify the user uniquely), fail badly in this respect. Short of destroying the fingers, for instance, there is no way to change fingerprints.

There must be a capability to keep the factor private.

Passwords/passphrases and keys of various kinds can be kept private - that's the whole point, after all; there's no sense in using publicly available information to authenticate a request, else everyone would have that level of access.

Biometrics drastically fail this test, as individual users' biometric markers of all sorts are very widely available - few people are going to go about entirely covered in a niqab, wearing gloves, and assiduously monitoring for lost skin cells and hairs, after all.

Various methods have been proposed to mitigate the very obvious replay attacks that biometric authentication sequences are vulnerable to - things like attempting to measure blood flow, or requiring a specific action by the user during authentication. Unfortunately, for every such measure proposed, methods to bypass it are developed quite quickly - the very presence of such a measure tells the attacker what to bypass, and cuts down the research space considerably.

Biometrics have little to no hardening against replay attacks.

This is related to the previous point - biometrics are, at their base, intended to be a static indicator of an individual's identity.

The other two traditional factors can be used to authenticate a specific transaction as being currently authorized - generally by signing a nonce generated immediately before the request, using the value provided by the passphrase or key to produce a derived value that the requesting system can compare against an expected value on their side.
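A minimal sketch of that kind of nonce-based challenge-response, assuming a shared symmetric key (real deployments more often use asymmetric signatures, expiring challenges, and channel binding):

    # Sketch of a nonce-based challenge-response over a shared key.
    # Names are illustrative; this is not a complete protocol.
    import hashlib
    import hmac
    import os

    SHARED_KEY = os.urandom(32)      # provisioned to both sides out of band

    def issue_challenge():
        # Verifier generates a fresh nonce immediately before the request.
        return os.urandom(16)

    def sign_challenge(key, nonce):
        # Prover derives a response from its secret and the fresh nonce.
        return hmac.new(key, nonce, hashlib.sha256).digest()

    def verify(key, nonce, response):
        # Verifier recomputes the expected value and compares.
        expected = hmac.new(key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    nonce = issue_challenge()
    response = sign_challenge(SHARED_KEY, nonce)
    assert verify(SHARED_KEY, nonce, response)
    # Replaying the same response against a later, different nonce fails:
    assert not verify(SHARED_KEY, issue_challenge(), response)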

Biometrics, however, do not have the capability to do this to any meaningful extent. You can't change your fingerprint according to a hashed value; any attempts at voiceprinting are going to be limited to a relatively small space of words, and there is plenty of prior art for ways to defeat these schemes.

Biometrics are a username, not a password replacement.

Ultimately, the only thing that a biometric indicator can prove is that someone can present a data point identifying them as a given user.

There is absolutely no indication that the user in question has intended, or consented, to perform whatever actions are being requested.
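To make the distinction concrete, here is a hypothetical sketch in which a matched biometric template only selects the account - exactly as a typed username would - and a separate, revocable secret is still required before the request counts as authenticated:

    # Hypothetical sketch: the biometric match identifies an account,
    # but a changeable secret is still what authenticates the request.
    import hashlib
    import hmac

    ENROLLED = {"template-1234": "alice"}   # biometric template -> account
    SECRETS = {"alice": hashlib.sha256(b"hunter2").hexdigest()}

    def identify_by_biometric(template):
        # Works like typing a username: it narrows the request to one account.
        return ENROLLED.get(template)

    def authenticate(account, password):
        # Proof of intent comes from a secret the user chooses to present.
        digest = hashlib.sha256(password.encode()).hexdigest()
        return hmac.compare_digest(SECRETS.get(account, ""), digest)

    account = identify_by_biometric("template-1234")   # -> "alice"
    print(authenticate(account, "hunter2"))            # True only with the secret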

It is, ultimately, this lack of consent that is most troubling about using biometrics as an authenticating factor, but the other issues presented here certainly contribute to the overall problem.

If an authentication measure cannot provide positive assurance to the system that the specific action is being performed with conscious consent, it cannot be suitable for authenticating that action.