
Users/user agents need to know whether to expect a connection to be secure. Unfortunately, you can't necessarily trust any random link you follow to reliably tell you. If I can get you to use HTTP when you should've used HTTPS, I might be able to sniff your traffic. If I can get you to use HTTPS when you should've used HTTP, it might be a DoS.

Incidentally, this is the same problem as public key distribution. You need a trusted channel to receive public keys, and a trusted channel to know whether to use a public key. Why can't these be the same channel? Right now we have HSTS preloading[1] for the latter, but in that case why not preload certificates (or hashes thereof) too?
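
As a rough sketch of what preloading both the "use HTTPS" bit and a certificate hash could look like on the client side (in Python, with a made-up PRELOADED table; real browsers ship a compiled-in list, nothing this simple):

    import base64, hashlib

    # Hypothetical preload table: the HSTS flag plus a pinned SHA-256 hash of
    # the site's public key (SPKI), both shipped with the browser itself.
    PRELOADED = {
        "example.com": {
            "force_https": True,
            "spki_sha256": "base64-spki-hash-goes-here",
        },
    }

    def scheme_for(host):
        entry = PRELOADED.get(host)
        return "https" if entry and entry["force_https"] else "http"

    def pin_matches(host, spki_der):
        entry = PRELOADED.get(host)
        if not entry:
            return True  # nothing preloaded for this host, nothing to check
        digest = base64.b64encode(hashlib.sha256(spki_der).digest()).decode()
        return digest == entry["spki_sha256"]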

Then we can finally cut out the middle-men and realize the truth: that the browser is the ultimate certificate authority.

[1] https://hstspreload.appspot.com/




> If I can get you to use HTTP when you should've used HTTPS, I might be able to sniff your traffic

That is not the worst-case scenario. If someone can force HTTP, they can also inject malicious code into the stream and do anything from initiating bank transfers to building botnets. With the worst case of always-HTTPS being a DoS, and the worst case of allowing HTTP being code injection, I would prefer deprecating HTTP in favor of HTTPS.


There are a few use-cases for standalone unencrypted HTTP. The two big ones:

• HTTP is redundant and costly when you're already in some other tunnel: a pre-negotiated IPSec tunnel for port 80 traffic to a given peer (e.g. a load balancer to its backend); talking directly to an HTTP proxy sitting on the jump box you're VPNed or SSH tunnelled into; etc.

• HTTP is actually a great wire protocol for non-networked RPC, such as between Nginx and your application server, running on the same box, over a Unix socket. FCGI, WSGI, etc. are just half-assed implementations of HTTP; you may as well just use HTTP. (Though the freedom from head-of-line blocking in HTTP/2 RPC would be even better here, for green-threaded runtimes that can C10K.)
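
To illustrate the second point, a minimal sketch of HTTP-as-local-RPC over a Unix socket with Python's stdlib (the /tmp/app.sock path and the /status route are made up; it assumes some app server is listening there):

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP, but over a local Unix domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/tmp/app.sock")
    conn.request("GET", "/status")
    print(conn.getresponse().status)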

I do agree, though, that unencrypted HTTP can likely be deprecated for web browsers. The browser-addressable web is really a pretty strictly-bounded subset of the web as a whole, and we should strive to make it safe to browse.

That being said, such statements put me in mind of a future where your browser literally is not allowed to talk to all those old servers from 1997 that are still hosting whatever they were hosting back then. Instead, all requests for those "legacy" domains that nobody's updating any more would have to go to some trusted mirroring site served over HTTPS, like the Internet Archive. (The spidering logic for such "legacy mirroring" would also have to be slightly different from today's "latest mirroring" logic: if the IA's spider got MITMed to see something else, it should "doubt" the new version based on how long the previous site endured without change, and if its confidence is low enough, just continue showing people the old version.)

Is that a future we want? I'm honestly not sure.


Incorrect on public keys. You do not need a trusted channel to receive a key. You could receive one via smoke signal, carrier pigeon, or billboard. Existing key distribution systems may or may not be encrypted, but the reason for encrypting the channel is far more to protect the interests of the requestor than the integrity of the key itself. That last is independent of the key distribution channel.

What matters is that the web of trust associated with that key is sound (that is, you have assurance that the key belongs to whom you think it does), and that the integrity of the private key has been maintained.

The first of those problems is difficult, but not intractable. The second problem is rather difficult, especially in the case of persistent data, though the core requirement is that the key was valid when a message was generated, if you're looking at the sender of information. For your own information, you are relying on the recipient to maintain integrity over their private (decryption) key going forward, such that the data you'd transmitted remains encrypted against all others.

The first problem you point out, that any encrypted channel is not necessarily a secure channel, is valid, though given your misunderstanding on subsequent points I'm not sure how well that applies to this discussion (I still need to RTFA).


> Existing key distribution systems may or may not be encrypted, but the reason for encrypting the channel is far more to protect the interests of the requestor than the integrity of the key itself.

btrask wasn't saying that encryption is necessary for key distribution; he/she was saying that HTTPS guarantees identity and integrity, both of which are necessary to trust a key.

> What matters is that the web of trust associated with that key is sound (that is, you have assurance that the key belongs to whom you think it does), and that the integrity of the private key has been maintained.

That's a possible alternative to btrask's proposal, though you're equating "assurance that the key belongs to whom you think it does" with "web of trust". btrask's proposal is a special case of that, in which the web of trust is simply the sender.

> The first problem you point out, that any encrypted channel is not necessarily a secure channel

Correct, but not what btrask said. The first problem he pointed out was the fact that clients need to know whether a host expects secure communication before ever connecting to it.

> though given your misunderstanding on subsequent points I'm not sure how well that applies to this discussion

That's not very nice.


Clarifying my own post: I'm insisting that neither a trusted nor an encrypted channel is necessary.

I said that an encrypted channel could be used, but need not be; if used, encryption would largely serve as a protection to the requestor, who might otherwise be subject to traffic and/or interest analysis based on the specific keys they requested (which could be presumed to be of interest), or the signing keys (I'm thinking of the PGP protocol here) of keys of interest. Either piece of information would reduce the search space for an Eve.

I'm not equating trust of keys to web of trust; I'm stating that in existing (PKI/PGP) protocols, that is the assurance mechanism. And it is independent of either trust OR encryption of the key delivery channel itself.

There seems to be a rather profound difficulty in distinguishing what I've said from what I said btrask said. I'm not sure how I could be clearer, but I'm open to pointers.


You're both right. I only replied because you responded to points that btrask hadn't made, then claimed he/she misunderstood the topic.


I said trusted, not encrypted. I wasn't talking about private keys at all. I think I understand the issues involved. Thanks though.


And I still disagree on that point.

Maybe I misunderstand (though I also think I understand the issues involved pretty well), or maybe one or the other or both of us are communicating poorly.

How would you distinguish trusted, encrypted, and untrusted channels, say?


In the context of sharing public keys, I'd say you merely need authentication. Web of Trust being one possible mechanism. This isn't a particularly advanced topic.

Relevant to my original post, information about whether the connection should be encrypted also merely needs to be authenticated, not encrypted itself. Of course, the HSTS preloading site uses HTTPS (with encryption) because it's easy and why not.


Thanks. So re keysharing, authentication is a form of secure channel.

I'm reading the auth and channel as independent. Auth is something of a metachannel, perhaps.


Fair enough. :)


What if someone intercepts the carrier pigeon and swaps in a different public key of their own?


Then the signatures don't match, or the fingerprint is wrong. If you're relying on long-term data access, messages encrypted against or signed by the true key don't match. This is an area in which PGP and SSH differ markedly: PGP is used to encrypt and authenticate data which tends to persist, SSH for data used only in session. While both can use long-lived keypairs, it's the PGP keys you're more likely to notice changing (though SSH clients tend to report this happening).

Yes, that means verifying your keys, and probably through an out-of-band method.
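
For example, out-of-band verification can be as simple as comparing fingerprints (a sketch; it assumes expected_fp was obtained over some other channel, say read over the phone or printed on a card):

    import hashlib

    def fingerprint(public_key_bytes):
        # PGP and SSH each have their own fingerprint formats; a bare SHA-256
        # of the key material is enough to show the idea.
        return hashlib.sha256(public_key_bytes).hexdigest()

    def key_is_expected(public_key_bytes, expected_fp):
        return fingerprint(public_key_bytes) == expected_fp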


Chrome does preload public key pins for large sites, not that it's the ultimate solution to what you describe.


Firefox also does this.


> If I can get you to use HTTPS when you should've used HTTP, it might be a DoS.

Can you explain what you mean by this? Genuinely curious to know how it can lead to DoS?


One way would be to re-direct cacheable assets to HTTPS, thus foiling edge caching and increasing load on the origin server.

In general, caching is a big problem with the naive approach to "HTTPS everywhere." A mechanism to deliver signed cacheable payloads would be great, so that static assets etc. can continue to be edge-cached.
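
As a sketch of what "signed cacheable payloads" could mean (not an existing standard; it assumes the publisher's raw Ed25519 public key was obtained over HTTPS or preloaded, and uses the third-party cryptography library):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def asset_is_authentic(asset_bytes, signature, publisher_key_bytes):
        # The asset and signature can come from any cache or mirror over plain
        # HTTP; only the 32-byte public key needs an authenticated channel.
        pub = Ed25519PublicKey.from_public_bytes(publisher_key_bytes)
        try:
            pub.verify(signature, asset_bytes)
            return True
        except InvalidSignature:
            return False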


It would still need to be encrypted, or I could tell a lot about what you're doing on the site by looking at what cacheable resources you fetch.


Not everything is about privacy, and arguably privacy advocates have done a lot to harm our ability to have a trusted internet by conflating verification, encryption and anonymity.

The most annoying thing about HTTPS everywhere is that it ruins cacheability. This is a problem the distros solved ages ago by signing their content while acknowledging that hiding it in transit is mostly pointless.

But it's absurd that in HTTP/2 we have out-of-the-box encryption, but we don't have a mechanism for doing authenticated caching.


> arguably privacy advocates have done a lot to harm our ability to have a trusted internet

We don't have a trusted internet. Not when a country on the other side of the world can mis-configure BGP and re-route all traffic through them. Not when our ISPs intercept and modify our traffic. Not when there are nearly 10x as many "trusted" root certificates as there are nations in our world.

The internet is the wild west, and we need to protect our computers from it. Currently, encryption is our best bet for doing that. If edge caching is a casualty, then so be it.

If someone can come up with a method for protecting content end-to-end while keeping it secure against tampering and eavesdropping (because this too matters, both to us in the first world and to the majority of others who are not), then let's start getting it put in place.


Agreed that unencrypted signed static assets provide a vehicle for activity monitoring.

Your statement can be misinterpreted to imply that merely by encrypting all traffic, such analysis can be prevented. There's plenty of metadata in a typical encrypted page load that can be used to do so.

For example, the view-discussion page might download three static assets, a js file, and two CSS files, one small and one large, whereas the post-comment page might load zero assets and js files, and one small CSS file.

Point being, making a privacy-protecting website takes careful planning even when fully encrypted. As such, it'd be great to have tools (such as signed content) available for performance optimization. Sure, naive usage might lead to attack vectors, but naive usage of HTTPS already leads to many such attack vectors anyway.


That seems like a good idea: a simple scheme where the browser validates every HTTP response from a particular domain against a key specified in that domain's SSL cert (if the appropriate field is present) seems like it would work well.



The only way I can think of is if the site doesn't support HTTPS.


> Users/user agents need to know whether to expect a connection to be secure.

Why not expect it to be secure? Connect to https before http.
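
A naive version of that is easy to sketch (illustrative only; as the replies point out, an attacker who can block the HTTPS attempt gets the plaintext fallback for free):

    import urllib.request

    def fetch(host, path="/"):
        # Try HTTPS first, silently fall back to HTTP on any failure.
        try:
            return urllib.request.urlopen(f"https://{host}{path}", timeout=5)
        except OSError:
            return urllib.request.urlopen(f"http://{host}{path}", timeout=5)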


Behavior like that needs to come with a huge warning label.

It would be trivial for any man-in-the-middle to block HTTPS and serve HTTP.


This is exactly why browsers warn about such redirects. That said, this reminds me of a similar discussion on mail servers. There, STARTTLS sees much more use.

The main problem is preventing downgrade attacks. With mail it is easy to just remember the setting for every server. Not so with websites.
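
"Remember the setting" is essentially trust-on-first-use; a sketch of that policy (hypothetical in-memory set, which a real client would persist to disk):

    seen_tls = set()  # hosts that have offered TLS at least once

    def may_use_plaintext(host, tls_offered_now):
        if tls_offered_now:
            seen_tls.add(host)
            return False  # TLS is on offer, use it
        # Only fall back to plaintext for hosts that have never offered TLS;
        # a sudden "no TLS" from a known-good host looks like a downgrade.
        return host not in seen_tls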


I've seen quite a bit of criticism of it for mail servers [1] because an attacker can simply block the 'STARTTLS' message and (many) clients will silently accept that.

[1] https://www.agwa.name/blog/post/starttls_considered_harmful


They could display the same "this page is not secure" page that they display for broken certificates.


I'm not sure you can assume that the same URL with https will be the same content as at http. It could be an entirely different site that you may not have wanted.


That isn't really viable yet. Browser vendors could decide that they will introduce this functionality in a few years though.

IMHO, the feature would need to be implemented as some have suggested: any website could still serve content securely or insecurely, but the web browser would request a secure TLS connection first (trying both HTTPS and HTTP to reduce incompatibilities), and if a website appeared to have issues, fall back to an insecure connection. If an insecure page were served, the browser should indicate this with a broken padlock.

Furthermore, I believe that browsers should warn whenever any data is input, e.g. when items that trigger JS calls are clicked or text is typed; this strict implementation is important. Single-page JS applications have made it possible to send any input data via JSON, so we cannot warn the user only on a form submission, since it would be very possible to capture details via AJAX. E.g. if I were impersonating an e-commerce site, I could hope the user would not notice the missing padlock and use AJAX to send the data, preventing any form-submission warnings. This would be annoying for users of such websites, but it would be a good thing, pressuring websites that handle user input to act responsibly and use encryption via TLS.


What if you block the HTTPS request in some way? You can now force an insecure connection.


A good solution for HTTP sites is to load the HTTPS version after first loading over HTTP. If they have similar content, show a bar at the top of the browser with a message along the lines of "A secure connection is possible, click here to use the secure version of this page".

Then it would be good to remember this setting and always pull the HTTPS version.
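
The check itself is easy to sketch (fetch is a hypothetical helper, and the hard part of deciding what counts as "similar content" is waved away here as plain equality):

    def offer_upgrade(fetch, host, path="/"):
        insecure = fetch(f"http://{host}{path}")
        try:
            secure = fetch(f"https://{host}{path}")
        except OSError:
            return None  # no usable HTTPS version, stay on HTTP
        if secure == insecure:
            return f"https://{host}{path}"  # show the "use the secure version" bar
        return None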


Chicken and egg: where do you get the secure browser from?


It's bundled with your OS. If you don't like the bundled one you can use it to download a different one using HTTPS.





