You have ten thumbs? That's 8.00000001 more than the average person has — some high-level security right there :D But you're right, I don't understand why Face ID is supposed to be secure. Anyone can take and use a picture of you. Same with fingerprints: hackers from the CCC in Germany reconstructed then German defense minister Ursula von der Leyen's fingerprint from press photos a few years ago, just to show how easily it can be done. All these auth methods require an extra factor proving that it's you in front of the screen and not someone else, which ends up more tedious than entering a password.
Creating 3D data from a portrait photo is not that hard, and you don't even need to set up a camera anywhere to capture your face's 3D data. My point is, the "secrets" (your face, your thumb) are out there in the open, and there are a lot of creative ways to steal and store them, unbeknownst to the user. That's not good, if you ask me.
A big difference in the threat model is attack surface:
1. Someone can steal or phish a password remotely from anywhere in the world (see: haveibeenpwned for plenty of known examples)
2. That same someone can use a stolen password on most websites and web applications, remotely, from anywhere in the world
3. Someone else can attempt to crack a password through distributed brute force from anywhere in the world, as fast as a given web API will allow them
Versus:
1. Someone can capture your face data from a camera in your physical proximity
2. That someone can use your captured face data to gain access to specific devices only in physical proximity to those devices
3. There is no public API in iOS to try to brute force face data, much less remotely or in a distributed attack (and no currently known CVEs on Apple's Secure Enclave)
That's still an attack surface, certainly, but it's so much smaller than the attack surface of traditional passwords that it is a much better fit for the median and mode threat models of average users.
I can't tell you what your threat model is, of course, and would never suggest that there aren't good reasons to be paranoid about biometric unlocks. What I can encourage: know your threat models. Figure out exactly what it is you are scared of and learn how to defend against it. (I have my own paranoia, but I also learn my mitigations: I use face ID regularly, appreciate its convenience, and I also know that holding the top two buttons [Vol Up and Lock; the "Power" button combo] on an iPhone until the system vibrates is a fast and efficient way to temporarily disable all biometric locks until I next use a device PIN or cloud password.)
There's more than attack surface in the equation, though. I guess the basic calculation is something like: a security measure is worth it when
(attack risk × attack surface × attack damage) − (security costs) is greater than 0.
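To make that back-of-envelope concrete, here's a toy expected-loss comparison. Every number below is invented purely for illustration, not measured from anywhere:

```python
# Toy expected-loss model for the back-of-envelope calculation above.
# All numbers are made up for illustration only.

def expected_damage(risk: float, surface: float, damage: float) -> float:
    """risk: chance someone tries an attack; surface: fraction of
    attackers who can even reach you; damage: cost if one succeeds."""
    return risk * surface * damage

# Password: phishable/brute-forceable from anywhere -> huge surface.
password_loss = expected_damage(risk=0.30, surface=1.0, damage=1_000)

# Face unlock: attacker needs physical proximity -> tiny surface,
# but the "secret" is irreplaceable, so assume higher damage per breach.
biometric_loss = expected_damage(risk=0.30, surface=0.01, damage=5_000)

print(password_loss, biometric_loss)  # roughly 300 vs 15
```

Even with the damage term weighted 5× against biometrics, the surface term dominates — which is the parent comment's point, and the damage/immutability question below is the counterpoint.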
The damage you suffer when your security token is a piece of immutable data is relatively high. With a mutable password caught by phishing, you can recover your accounts, machines, and data, change the password, and hopefully just clean up. Breach closed. But with immutable 3D data of your face... what do you replace it with, once it's been stolen or reverse engineered?
I'd also like to point out that attack surface may be higher than you think for face recognition. The attack vector for any of your online accounts is ... your public profile picture.
I think your points are valid for the current situation. However, I think some practical, low-barrier automation around passwords and TOTP would be far more secure than biometrics.
Face data isn't that immutable: you can change your glasses, change your makeup, or wear a face mask, and your skin goes through its own cycles of acne and pore health.
Face data is enrolled per-device and subject to a lot of whims of various neural network recognizers and lighting conditions at the time of enrollment and so much more.
There's no "universal form" of face data and no "universal recognizer" that all vendors supporting face ID agree on. Even within a single vendor you have to enroll every device separately, and each will train a subtly different recognizer. (Every new iPhone requires a fresh face profile.)
So far, most attacks on face data succeed only against a single device enrollment, and simply re-enrolling the device in a different room with different lighting defeats them.
Tools like "Find My" can remotely wipe a stolen device's biometric recognizers before hostile parties even have a chance to try cracking them.
(Admittedly, there are much more universal data formats for fingerprints and a lot more standardization among fingerprint recognizers, and in my own paranoia I don't trust fingerprint readers, in my personal threat model, for devices that leave my house. But today's face ID tech has far fewer "universals" and involves a lot more machine-learning statistical-model casino, for good and bad, so I feel more comfortable with that attack surface today. I reserve the right to change my mind given changes in the state of the art, of course. And again, I don't think you are wrong if you don't trust it in your personal threat model; you may just have a different threat model than mine.)
> I'd also like to point out that attack surface may be higher than you think for face recognition. The attack vector for any of your online accounts is ... your public profile picture.
If that's a part of your threat model that you are concerned that your public profile pictures contain enough view of your face to crack a face recognizer, then why are you using pictures of your face as your profile images?
I've got plenty of other reasons to distrust/dislike photos of myself and generally as a rule don't use photos of my face as avatars and profile pictures on my online accounts.
That said, every study I've seen on face recognizer attacks needed far more detail than is visible in any normal public profile picture, even when the attack photos came from the same cameras used in the recognition process, and especially when they didn't. I know there's still plenty of reason to be paranoid about such attacks, but so far none of them have "scaled" to general use in a way that concerns me, yet. (Again, I'm keeping an eye on it, and I reserve the right to revise my threat models in the future. It's just not something that concerns my current threat models.)
> However, I think that some practical and low-entry barrier automation around passwords and TOTP would be far more secure than biometrics.
1) If they were practical and low-entry barrier automations, I'd expect them to already be in place everywhere.
2) Biometrics aren't the sole security in "passwordless" operations; they are just the (convenient) "front door". (And often not the only possible front door: on every platform I've seen, you can still use a device PIN instead of biometrics as a personal preference.) Biometrics unlock device-local (device-only) secure enclaves; the biometric data is designed to never leave the device at all (and is cleared from memory once the secure enclave is unlocked). Once the secure enclave is unlocked, most of these "passwordless" systems (including and especially the Passkeys standards) switch all communicated data over to asymmetric (public-key/private-key) operations, which are generally far more secure than passwords, with or without TOTP.
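The switch to asymmetric operations is the important part: no shared secret ever crosses the wire, so there's nothing for a phisher to capture. A rough sketch of the passkey-style challenge–response — real passkeys use WebAuthn with ECDSA/Ed25519 keys held in a secure enclave; the schoolbook RSA below uses deliberately tiny, insecure parameters just to show the shape of the protocol:

```python
import hashlib
import secrets

# --- Enrollment (once): the device generates a key pair. Only the
# public half (n, e) is registered with the server; d never leaves.
p, q = 1009, 1013                  # toy primes; real keys are 2048+ bits
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

# --- Login: the server sends a fresh random challenge, so a captured
# response can't be replayed later.
challenge = secrets.token_bytes(16)

# Device side: the biometric (or PIN) merely unlocks the enclave
# holding d, which then signs the challenge.
h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
signature = pow(h, d, n)

# Server side: verify using the public key alone.
assert pow(signature, e, n) == h
```

Note what the biometric is doing here: it gates local access to `d`, nothing more. Stealing the server's database gets an attacker only public keys.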
2) You force me to be more precise, and you are right: biometrics are not immutable. However, you as a user have only limited control over their mutability. You can't change the nose you have, can you? A password's mutability is far more under the user's control, I'd argue.
3) I'm not worried about my personal profile picture and personal data. I'm worried about society as a whole (including corporations etc.). Modern social media and corporate tools like Slack increasingly "pressure" / "massage" people into uploading a real photo of their face. If face ID is to become a standard, I guess that's something that should be taken into consideration, don't you think?
4) If face ID is the "front door", doesn't breaching that front door make the whole system vulnerable, since credentials for other, more "secure" forms of authentication are hidden behind it?
5) Is handing over the "intelligence" to detect a valid or invalid credential to AI such a good idea?
6) Food for thought: I had to install a Chinese app on my phone for a business trip. It required face recognition, and it required me to blink during identification, presumably to fend off the "hold a picture in front of the camera" attack vector. Assuming they don't implement security mechanisms for nothing (it costs money to do so), must I now conclude that standard face ID without blink detection is insecure?
I'm just asking questions to which I don't have all the answers here. I remain unconvinced, but I wouldn't be offended if someone proved me wrong.
There are even more creative and common ways to make someone reveal their password, unbeknownst to them: just good old phishing! That's demonstrably more probable than your face or thumbprint getting stolen.