
The arguments around this-- and the people who make them-- have always disappointed me. The expectation that we've drilled into users' heads is that when they see the lock icon their traffic is private. That is not the case, and we could warn them about it, and we don't. In my eyes that's a failure.

The arguments around warning fatigue are specious. The exact same mechanism that currently sets to zero the number of warnings you get for a pinning failure stemming from a user-added certificate could just as easily set it to one, or tie it to a "don't show me these warnings again" checkbox. Experimentation and data could determine whether, and to what degree, this was effective, as is routinely done for related warnings changes with far less potential upside. But when you bring up the possibility of settling the question with real data, the argument morphs into pure philosophy.
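To make concrete how small the change is, here's a toy sketch. Every name in it is hypothetical-- this is not any browser's actual code-- but it shows that the warning count is just a number and the opt-out is just a boolean:

    // Hypothetical sketch: none of these names exist in any real browser.
    data class WarningPolicy(val maxWarnings: Int, val suppressed: Boolean)

    class InterceptionWarningGate {
        private var shownCount = 0

        // Returns true if the UI should surface a warning for this pin failure.
        fun shouldWarn(policy: WarningPolicy): Boolean {
            if (policy.suppressed || shownCount >= policy.maxWarnings) return false
            shownCount++
            return true
        }
    }

    fun main() {
        // Today's effective policy: zero warnings, the user never finds out.
        println(InterceptionWarningGate().shouldWarn(WarningPolicy(0, false))) // false

        // The proposed experiment: exactly one warning, then silence.
        val gate = InterceptionWarningGate()
        val once = WarningPolicy(maxWarnings = 1, suppressed = false)
        println(gate.shouldWarn(once)) // true
        println(gate.shouldWarn(once)) // false
    }

Everything interesting-- whether maxWarnings should be 0 or 1, whether users tick the suppression box-- is an empirical question you can A/B test, not a philosophical one.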

The philosophical points are twofold. First, a claim already raised here: fighting local admins is pointless because they'll always win, and you don't want to get into an arms race. I attribute this view to the fact that browsers grew up on poorly sandboxed desktop platforms where admins are de facto root and no meaningful statement can be made about the limits of their behavior. On those platforms this isn't a crazy approach (although its shoulder-shrugging fatalism is distasteful to me even there).

Fortunately, those aren't the only platforms we have today: on systems like Android, the expectation is that corporate admins act through narrow, carefully controlled channels and have no powers beyond those. There, the platform wins arguments with admins pretty much all the time; the arms race was over before it began. Without the risk of escalation from admins, the only question is whether the user is properly aware of the consequences of having an extra CA added to their trust store, and again I refer to the point I made above: this can be settled with data. Rather than bend over backwards to give admins the benefit of the doubt, let's gather actual data on the degree to which users are comfortable with this behavior. And if they aren't, well, then the admin is an adversary and we have a duty to protect the user.
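Android already exposes the distinction such a warning would need. A rough sketch-- my recollection is that the "AndroidCAStore" KeyStore prefixes user-installed anchors with "user:" and preloaded ones with "system:", but treat that detail as an assumption I haven't re-verified, and note this only runs on Android:

    import java.security.KeyStore
    import java.security.cert.X509Certificate
    import java.util.Collections

    // Sketch, Android-only: enumerate the combined trust store and pick out
    // anchors that were installed by a user or admin rather than shipped with
    // the platform. The "user:" alias prefix is an assumption, not gospel.
    fun userAddedAnchors(): List<X509Certificate> {
        val store = KeyStore.getInstance("AndroidCAStore").apply { load(null, null) }
        return Collections.list(store.aliases())
            .filter { it.startsWith("user:") }
            .mapNotNull { store.getCertificate(it) as? X509Certificate }
    }

If that list is non-empty, the platform knows interception is possible; telling the user once is purely a policy choice, not a technical arms race.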

When you make this argument, however, the discussion becomes /really/ philosophical: people will start saying that limiting admin powers is anti-user-freedom, despite the fact that the user of the device clearly has a greater ability to make decisions about their own security than in the free-for-all common to platforms of yore. Why that matters in this discussion is beyond me: even if you subscribe to this belief, the horse is out of the barn, and no amount of smugly screwing users will fix that. And some will assert that admins are users too, and that we need to serve those markets well. But the fact that people will give you money does not mean you should take it: if the data gathered above indicates that users do not want their traffic intercepted, then that, in my mind, should be final-- if the amount of money on the other side convinces members of the security community to hurt users, then in my view we should just give up the pretense that we're the good guys.




> The expectation that we've drilled into users' heads is that when they see the lock icon their traffic is private.

Except it isn't. Even simple things like Cloudflare's SSL termination allow traffic to travel unencrypted over the internet and be intercepted by third parties.

http://www.theregister.co.uk/2016/07/14/cloudflare_investiga...



