It sounded to me like the parent poster wasn't saying not to use it, but simply that it cannot be relied upon. In other words, a deepfake could fail a 'turn sideways' test and that would be useful, but you shouldn't rely on a 'passing' test.



Another way to think of it might be that it can be relied on - until it can't. Be ready and wary of that happening, but until then you have what's probably a good mitigation of the problem.


I think the concern is complacency, and the inertia of existing security practices leading to security gaps in the future. "However, I don't know one organisation that doesn't have some outdated security guideline that they cling to, e.g. old school password rules and rotations."

Or put another way, humans can't be ready and wary, constantly and indefinitely. At some point, fatigue sets in. People move in and out of the organization. Periodic reviews of security practices don't always catch everything. The reasons something was implemented fade from institutional memory. And then there's the cost of retraining people.


The flip side of that is people feeling or assuming there's nothing they can really do with the resources they have, so they choose to do nothing.

Also, those actively using mitigations that will be outdated at some point are probably far more likely to be aware of how close they are to being outdated, since they encounter more ambiguous cases and see the state of the art progress right in front of them.

As for people sticking to outdated security practices? That's a problem of whether people and organizations are introspective and examine themselves, and it is not linked to any one thing. We all have that problem to a greater or lesser degree in all aspects of what we do, so either you have systems in place to mitigate it or you don't.


Therefore, developing and customizing a proper framework for security and privacy starts with accurately assessing statutory, regulatory, and contractual obligations, and the organization's appetite for risk in balance with its mission and vision, before developing the policies and specific practices that organizational members should follow.

To use a Go (the game, not the language) metaphor, skilled players always assess the whole board rather than automatically making a local move in response to a local threat. What's right for one organization is not going to be right for another. Asking the caller to turn sideways to protect against deepfakes should be considered within the organization's own framework, along with the various risks involved with deepfakes, and many other risks besides deepfake video calls.


Asking the caller to turn sideways is also a cheap countermeasure without serious side-effects. So there's low risk to adopting it.


If that is the conclusion reached within the organization's custom security and privacy framework, sure.

If there is no such framework, this is no different from yoloing lines of code into a production app by a team that doesn't have at least some grasp of the architectural principles and constraints at play. Or worse, not understanding the "job to be done" and building the wrong product, solving for the wrong problem.


How do you find out that it doesn't work?


Exactly. Even the article gave a couple cases of convincing profile deepfakes. Admittedly they’re exceptional cases, but in general progress tends to be made.



