Why HTTPS for Everything? (cio.gov)
224 points by maxt on Jan 2, 2017 | 223 comments



Here's a Mashable article about adopting HTTPS served via plain old HTTP:

http://mashable.com/2011/05/31/https-web-security/

It worries me that major websites like this still have not made the switch to HTTPS/TLS. Quite irksome are the reasons (actually, excuses) site owners sometimes give, like overhead, claiming that switching over to HTTPS/TLS will be costly and annoying, or, even worse, that their threat model doesn't require HTTPS and the burden is on the visitor to encrypt their connection to the site. The onus is on both parties to encrypt, instead of shunting the encryption onto the visitor. As for threat models, the news can be a sensitive topic for some, and HTTPS can be of great service to visitors who value their privacy.

I enjoy initiatives like Secure The News[1], a small public awareness campaign urging news outlets to adopt HTTPS/TLS. Initiatives like Google's HTTPS Transparency Report[2] are great too and give us real insight into the adoption rate of HTTPS/TLS:

[1] https://securethe.news/

[2] https://www.google.com/transparencyreport/https/grid/


I'm a lead dev for a large publisher. When we switched over to HTTPS we faced the following non-trivial issues:

1. A lot of third-party advertisers/ad servers still run on HTTP, and these ads usually need to be embedded via DFP, which you can understand does not work out well. We made the switch months ago, and to this day I am still making advertisers switch to HTTPS.

2. Google says they give better ranking to HTTPS sites; that's simply not true as far as I have seen. In fact, in the short run your site takes a hit. Not only that, in Google Webmaster Console and Google News you can't shift from HTTP to HTTPS; you have to make new accounts for your HTTPS sites. To this day I do not know which ones Google is crawling. For Google News, my new HTTPS account has yet to be approved after months; it looks like Google just magically shifts to HTTPS in Google News, but it feels icky and hacky: explicit is always better than implicit.

3. Microservices. Remember those microservices that were all the rage? Well, it's a bunch of different servers and subdomains, and you have to shift them all to HTTPS when you shift the mothership to HTTPS.

While the above points are valid, I still pushed my org to shift to HTTPS. We now use shiny stuff like HTTP/2 and web push, which is awesome, and I'd recommend all publishers do the same. But it's understandable that management finds all this scary, especially because it sounds like a major overhaul of your web assets (which is everything when you're a web publisher), even though it isn't really an overhaul at all.


> 1. A lot of third-party advertisers/ad servers still run on HTTP, and these ads usually need to be embedded via DFP, which you can understand does not work out well. We made the switch months ago, and to this day I am still making advertisers switch to HTTPS.

So not only do the ads collect personal data, slow down the web experience, and make every non-adblocked site a pain to look at... you go the extra step of not even encrypting the transfer because of them?


Adblocking in 2016: using an HTTP firewall, blocking all HTTP requests by default.


I wonder how much you could learn about a person based on the ads they get served?


I don't get point 3. Microservices are for the backend usually, right? You don't want multiple TCP/HTTP[S] connections from the client to all your services - pointless overhead. Worst case scenario, if you need direct client-microservice connectivity, then throw all the services behind nginx and terminate SSL there.


I'm talking about when microservices expose APIs consumed via AJAX. Then HTTPS-to-HTTP connections don't work (the browser blocks the mixed content).


As the parent suggested, I would terminate the HTTPS connection in an nginx instance in front of all your microservices; then no microservice needs to handle HTTPS.
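
As a rough illustration (hostnames, ports and paths are made up, not a drop-in config), an nginx server block like this terminates TLS and proxies plain HTTP to the service behind it:

    server {
        listen 443 ssl;
        server_name api.example.com;

        ssl_certificate     /etc/ssl/example.crt;
        ssl_certificate_key /etc/ssl/example.key;

        location / {
            # the microservice itself only ever sees plain HTTP
            proxy_pass http://127.0.0.1:8080;
        }
    }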


I thought it was recommended to use HTTPS between microservices for all communication? Otherwise the user might think their data is encrypted even though it travels in plain text through networks after the first server, as not all services will run in a separate network.


That's entirely up to the app. With Cloudflare it's not even a given that HTTPS means your data reached the origin server encrypted.

And anyways you can solve it in the same way: nginx LBs to terminate SSL internally.


Maybe it's such a large publisher that they need separate Nginx instances in front of each of the micro-services.


That goes against the basics of load balancing... And it shouldn't even be an issue to have multiple Nginx instances with similar HTTPS configs.


Honestly, #3 can be solved completely by putting your own HAproxy or Nginx reverse proxy in front of the microservice or API. You shouldn't be directly exposing your containers to clients anyway - you want a reverse proxy or load balancer to be able to throttle the traffic and provide basic security.

Also, if you have an ad network that can't serve https, just stick another reverse proxy like HAproxy in place and convert their http content into https for your clients. It works very easily for that type of thing as well - in fact, you can even do path-based routing like https://www.mysite.com/ad/* goes to http://ad-network.com (while https://www.mysite.com goes to your regular back-end), and HAproxy will do all the rewriting for you.
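
A rough sketch of that kind of path-based split in HAProxy (backend names and hosts here are illustrative, and stripping the /ad/ prefix or rewriting paths inside the creatives would need extra rules):

    frontend www
        bind :443 ssl crt /etc/haproxy/mysite.pem
        # hypothetical routing: /ad/* goes to the ad network, everything else to our app
        acl is_ad path_beg /ad/
        use_backend adnetwork if is_ad
        default_backend app

    backend adnetwork
        # talks plain HTTP to the ad network on the back side
        http-request set-header Host ad-network.com
        server ads ad-network.com:80

    backend app
        server app1 10.0.0.10:8080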


Ad networks generally won't let you do that. To them it looks like click fraud.

Also, most ads are now HTML5 with many resources and scripts etc. for interactivity. You can't just host it on another domain - there will likely be lots of internal absolute paths and paths assembled with JavaScript that one can't practically rewrite.

Remember most sites use an ad-exchange, so we aren't talking fixing up one or two ads here - there are probably over 1 billion unique creatives to fix up.


This is really sad and disturbing to read. Especially the ad networks side of things. But I really hope that people like you pressure these advertisers into providing HTTPS. It's not this "hip new web tech".

It's an established standard and now completely free to set up (thanks to Let's Encrypt ;)). I don't see how these two (yes, frankly, two reasons. let's really ignore the third one) can keep anyone from providing HTTPS connections to their dear readers and users.


> now completely free to set up (thanks to Let's Encrypt)

To be fair, I would expect any reasonably large company to obtain an EV (Extended Validation) certificate, which is very much not free. Let's Encrypt only does DV (Domain Validation). EV includes verification that you're actually the incorporated company that you're telling everyone you are.


Are you using proper redirects from your old HTTP URLs to your new HTTPS URLs? Wired also took an SEO hit, and that was probably their problem.

If you use permanent forwarding (HTTP 301) then you should be fine.
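
In nginx that can be as simple as (a sketch; the domain is illustrative):

    server {
        listen 80;
        server_name example.com;
        # permanent redirect of all plain-HTTP requests to HTTPS
        return 301 https://example.com$request_uri;
    }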


Yes, we have had permanent 301 redirects set up since day 0, but the hit still happened in the short run.


> Google says they give better ranking to HTTPS sites; that's simply not true as far as I have seen

I believe Google says they use HTTPS as yet another signal to rank your site, not that they will significantly push up HTTPS sites over other signals, like 'quality of content'.


Point 3 makes no sense...



What are the performance characteristics of HTTPS/TLS as compared to plain HTTP these days? (Serious question, I don't know.) You gloss over the overhead, but that may not be insignificant.

Every millisecond of latency counts for user engagement and bounce rate. Every round trip in the handshake hurts measurably. Particularly on slow cellular network connections, of course. It's quite possible that web sites exist where the performance penalty for TLS is worse for the business than simply running unsecured.

And anything with an ad network (or even other resources like web fonts) runs into the mixed-content problem. If the page is served over HTTPS, then so must every other asset be, or else the browser throws up warnings. And then you're dependent on every partner asset on the page to keep its own TLS certificates valid, or else you get browser warnings again and lose some amount of user engagement.

Security always has costs and tradeoffs.


"In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that."

https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


This is only server-side CPU. What about client-side latency? TLS involves extra hops to establish a connection.


In TLS 1.3, when the client sends its list of supported cipher suites, it guesses which one the server will pick and includes keying material for it in the initial ClientHello. The server does the same in its ServerHello. That reduces a full handshake to 1 round-trip when the guesses are right (vs. 2 RTTs in TLS 1.2).

TLS 1.3 also supports 0-RTT connections for servers the client has connected to before using a static "pre-shared key". There is some additional vulnerability to replay attacks (newly introduced with 0-RTT) and loss of forward secrecy (the same as 1-RTT session resumption in TLS 1.2) of the application data sent in the initial ClientHello. The rest of the data sent in the connection has the full set of TLS security guarantees. The pre-shared key approach has less overhead for servers and a simpler synchronization problem for load-balancers than the session IDs and session cookies of TLS 1.2.

See, e.g., https://blog.cloudflare.com/tls-1-3-overview-and-q-and-a/ for more.


1 extra round trip is a lot when your users are on shitty connections in India. 0 extra round trips for servers the client has connected to before doesn't matter when most of your users are first-time visitors.

Not to mention that TLS 1.3 isn't supported on IE8, a complete deal-breaker for things I've worked on.


You're thinking of 1.2. 1.3 isn't finalized yet.


Huge JS ad injectors, JS template renderers and other dynamic stuff add about 100x more latency on most web pages than anything TLS-related.


Not for first-byte latency.


If you care about client-side latency you should be using HTTP/2. And HTTP/2 is HTTPS-only.


HTTP/2 is not HTTPS-only; the browser implementations may be.

https://http2.github.io/faq/#does-http2-require-encryption


it's HTTPS-only in all browsers that support it. http://caniuse.com/#feat=http2


Not supported on IE8. Total deal breaker.


IE8 is not supported, along with Windows XP.


so?


The overhead can be minimized on subsequent connections if you pin a client to one server handling TLS termination for the session (you can either pin to one application server, or a specific load balancer) and your servers have enough memory to keep an adequate number of sessions in memory. You probably take up roughly 10KB of memory per TLS session that's active, so 100 active sessions per 1MB or so which isn't too bad.
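
In nginx terms that's roughly the shared session cache (the sizes here are just examples); nginx's docs put it at around 4000 sessions per megabyte of shared cache:

    # shared across workers so any of them can resume a session
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;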


> subsequent connections

What if most of your users are first-time visitors coming from google results?


You take an extra hit the first time a TLS session key is established, after that all subsequent HTTP requests can do a fast resume. This has nothing to do with users over a period of time.


The overhead is so small that it's basically irrelevant if you are not youtube or netflix. If you have a performance problem with https then either you have a flaw in your software or some serious configuration mistake.

In practice in many situations you can even get better performance with HTTPS, because many modern features (for performance notably HTTP/2 and Brotli) aren't available over plain HTTP.


Yep, Netflix even switched to 100% HTTPS for video streams (and the sky didn't fall) [0].

[0]: http://techblog.netflix.com/2016/08/protecting-netflix-viewi...


Netflix has performance needs that are not representative of every site in the world. For example, first-byte latency probably doesn't matter much for them.


Youtube uses QUIC for most of its comms (to Chrome and to the Android YT player; possibly to the iOS one too, I don't remember), which does implement TLS with 0-1 RTTs for connection setup (practically speaking, no additional delay).

And IIRC, encryption overhead is pretty insignificant if you have a CPU with AES instructions, which is to say, pretty much anything from the past 5+ years.


> pretty much anything from the past 5+ years

With a caveat: Intel's low-cost CPUs didn't include AES-NI until pretty recently. Which is why, in 2014, I went for a cheap Athlon instead of a Celeron for my custom-built home server with full-disk encryption.


I worked for junglee.com (a relatively unknown site owned by Amazon that used very little CPU) and extra latency on HTTPS was a massive, serious issue.

What flaw in our software or serious configuration mistake do you think we had?


Based on the current configuration: No OCSP stapling and no elliptic curve key exchange.

Also your setup suffers from TLS version intolerance, which by itself isn't a performance issue, but it is a hint that you're using a badly written TLS stack.

https://www.ssllabs.com/ssltest/analyze.html?d=junglee.com&s...


And how much do those increase first byte latency? (which is the real problem)


Tough to say without knowing more about the stack. But in terms of first byte latency, some things you can look at:

* TLS False Start

* HTTP/2 + ALPN

* Optimizing TLS record size

Some 2013-era advice for nginx here: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-...

And some other advice and links here: https://istlsfastyet.com/
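
A rough nginx sketch of those knobs (values are examples, not tuned numbers):

    server {
        # HTTP/2 is negotiated via ALPN during the TLS handshake
        listen 443 ssl http2;

        # forward-secret ECDHE suites are a prerequisite for TLS False Start
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers on;

        # OCSP stapling saves the client a separate revocation lookup
        ssl_stapling on;
        ssl_stapling_verify on;

        # smaller TLS records let the browser start parsing sooner
        ssl_buffer_size 4k;
    }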


Pretty much all of them matter for first byte latency. OCSP & the key exchange both happen before any data can be exchanged.


HTTP/2 significantly reduces the issue of repeated handshakes. See: https://www.httpvshttps.com/

Obviously, that site is picking a specific case to make a point, but the point is significant.


The existence of the request itself (for the handshake) is most of the cost. If a client is on the other side of the world, hundreds of ms. Otherwise if you have servers everywhere, still probably 50-70ms for the handshake.

Edit: in my experience.


    "TLS has exactly one performance problem: it is not used widely enough"
https://istlsfastyet.com/


Downvoted.

The information in that site doesn't actually back up the headline. It handwaves away the extra-round-trip issue with "well, there are experimental ways in the works to eliminate one of those extra round trips" which is worlds away from saying there are "no performance problems" with TLS in practice.


We switched back to HTTP from HTTPS due to ad revenue difference. A lot of advertisers / ad networks still do not support https.


Sites understandably don't want to mess with what works. We won't see a switchover unless and until not being HTTPS becomes something that hurts their bottom line.


I feel for later generations - learning tech gets harder and harder. Of course as the sum of knowledge increases this must be true, but as we go HTTPS-only the ability to type commands to a web server over telnet as a learning experience will be a loss.


> the ability to type commands to a web server over telnet as a learning experience will be a loss.

You can do this with openssl.

    openssl s_client -connect news.ycombinator.com:443
    GET / HTTP/1.1
    Host: news.ycombinator.com
Press enter twice and you'll get HTML.


There's also "socat", which is netcat with ssl support and it carries over its friendly command line.


That's how progress works.

50 years ago you could work on a brand new car with just the tools in your garage; now you'd need specialized knowledge and equipment, to the point that it's not really possible to be a home mechanic in many areas.

This isn't new for new's sake, these improvements bring real benefits. And as things get more complex, people will specialize more and more. In the future, your average "dev" might not know the ins and outs of how the transport layer works, but that's okay because there is someone else who mainly only does that.


> This isn't new for new's sake, these improvements bring real benefits.

Half of the time. In the case of HTTPS, for sure. But it is becoming increasingly hard to tell real improvement apart from change for change's sake. This is especially true in the consumer electronics space, but also in the cloud industry.

I would argue that a major selling point for a lot of things today IS the fact that they are "new" not that they are "better". Just think of all the "smart" devices that use a server in the cloud to connect to your phone 1m away...


Not a problem we would have if we had adopted IPv6 years ago just because it was new. Connecting devices together when you don't own the routers in between is hard. It's harder when everything uses NAT and stateful firewalls.

A product that relies on the user being able to change settings in their router is not a mass market product, so you don't see a lot of people build things like that. That's what I would want, that's what you might want, but most people want 'end to end' connectivity without having to mess with port forwarding, static addressing/DDNS updates, or firewalling. Had those interfaces/problems been solved/simplified 10 years ago, we wouldn't be having these conversations. But no one did it, and here we are.


We assume that there will always be someone who knows how it works.

With more systems working fully automated or administrated remotely, I think it's fully possible that for some niches there won't be anyone anymore with the exact technical details - or the only people who possess them will be sitting an ocean apart behind the walls of $megacorp and are not allowed to share the knowledge.

In fact, the current crisis in information security shows that even today, a lot of people have serious misconceptions how the systems they are using daily work - with practical consequences.


I don't think you have to assume that.

First, the knowledge is there, the tools are there; it just takes time to learn. Almost all of this stuff is in RFCs and built in the public eye in the standards bodies and OSS. You can read the code, you can read the standards, you can get a book about it. It's just not intuitive, because most complicated things grow past that.

Second, there will always be experts at various parts of the stack. That's how 'capitalism' works, right? We all specialize until we only do one thing really well. I don't think we could have scaled the internet to the size it is now if everyone who worked in IT was still keeping the mail server off spam blacklists, and keeping the company web-server humming in the closet.

We get to a point as a group where we have bigger problems to solve, and so someone learns to solve hundreds of people's spam problems, someone learns to solve hundreds of people's route buffering problems, and someone learns to solve end-to-end encryption for web traffic.

Everyone can't be a functional expert in everything, that's how we moved past the middle ages. And that's how we are going to move forward as well.


you can build a brand new 1967 car with just the tools in your garage? Wow


I mean... yeah, actually, you can -- if you had every (new) part from, say, a 1967 Ford F-series truck, anyone could absolutely, with just a Chiltons and a good set of tools, put together every nut, bolt, and screw from a giant crate of parts to build it.


If you're willing to gloss over the expertise needed to build a vehicle (experiences and know-how) then you might as well gloss over the same on modern platforms.

My current day vehicle requires no additional tools or software other than python and pyserial + a USB/OBD2 dongle that can be had for under 10 bucks and my garage filled with the same wrenches that I could have used to work on many previous generations of car offered in the past few decades. With these tools I can touch every subsystem that a factory service mechanic can.

One could say "You'd be insane to try to bang out every bit of a communications protocol and interface with the car that way", but I could say "You'd be insane to rely on a novice with zero experience to build a car i'd want to try and drive."

I have no doubt that the novice would get something rolling, but suspension / engine / AFR tuning is a skill set that requires experience and education; the novice's product would be inferior to the craftsman's.

tl;dr: with sufficient gumption there isn't really that much standing in the way of being a backyard mechanic -- sure, the companies try to dissuade you from doing so, but they have been doing that since 1950s Jaguars and $1000 service manuals.

There is just a wider variety of required reading. And to those interested : using your CS skills to 'perform the impossible' on a car is a hoot -- As ECMs and control software become more powerful and more in control of the car, so does the technical wizard!


indeed. in some of today's cars it's really astounding how much more power you can make quite easily, if you're into that sort of thing.

hit a few keys and turn a few knobs and all of a sudden you're making 50 horsepower more, and living life in the no warranty danger zone. exciting!


Hondata is a great example of using a laptop to tune a car. For those interested in buying parts and assembling a big Lego set, http://www.factoryfive.com/ is a good place to check out.

Not thinking too much about it, I used Let's Encrypt on my business site as soon as I read that Google would be looking at TLS for SEO purposes. Sadly, SEO seemed more important than speed when weighing whether to use TLS. Then again, if speed is the only issue, what does it matter how fast the site is when no one can find it?


That's interesting. I wish I could buy these "kits" today, put one together myself and drive a cheap cool car. Was that an option then?


Kit cars are still a thing: https://en.wikipedia.org/wiki/Kit_car . That said, they are almost all replicas of old cars, and in some countries getting them licensed for use on the road is hard.


From the Wikipedia article: "A kit car should not be confused with a 'hand built' car or 'special' car, which is typically built from scratch by an individual."


The parent, a few up, was talking about getting every part: to me that implies you're using pre-made body-panels, etc., rather than making them from scratch.


It wouldn't be cheap. The safest, cheapest and most efficient "packing crate" for the parts for a single car is the assembled vehicle. (It's somewhat different when you can stack 50 left front fenders.) A kit car (most use fibreglass bodies) would be a much better option. Expect to spend about 1000 hours (much of that building jigs). The second one you build goes much smoother and faster than the first, even if it's a different kit based on a completely different pair of vehicles (the "source", from which most of the drive train parts will be drawn, and "target").


Telnet/netcat seems simple only because all the layers below are hidden inside the operating system. You don't type in all the MAC/LLC/TCP headers, and you can avoid typing in TLS headers just by hiding them inside the library (LibreSSL, GnuTLS etc.). With libtls API you can think about cryptography in terms of "padlock icon" until you want to learn more.


No, but you can build your own stack from scratch fairly easily, and it's easier to debug when you can see the plaintext through the data structures.


And it's easier to debug a video codec if it's ASCII art, but there's huge benefits for using TLS everywhere instead of plaintext, and they more than outweigh "it's harder to debug."


True, though reimplementing those is child's play compared to TLS.


Having done semi-serious implementations of both, I disagree. Implementing IP and TCP and getting it hooked up to your OS so that your packets get spat out the network card and your responses get relayed back to your code is much harder than implementing a minimal TLS.


True. Minimal TCP with 1 MSS window may be easy, but proper congestion control with fast recovery, F-RTO, tail loss probe, SACK etc. is much harder. Miss one of these aspects and you get a TCP that takes minutes to recover from a lost packet in some obscure case. It took years to debug Linux TCP stack. Even BSD stack is already way behind.


Given that the context is "ability to type commands to a web server over telnet as a learning experience", I was of course talking about minimal implementations. TCP with congestion control is hard, just as TLS with proper compatibility is hard.


Yes: I was not including the difficulty of getting efficient congestion control implemented, but rather just the ability to deliver and receive TCP streams at all.


...as is the ability to inspect the traffic in your network to and from the devices you own. I think that is an even scarier situation, considering what others have discovered about "smart" devices precisely because their traffic was not encrypted. E.g. https://news.ycombinator.com/item?id=6759426

Personally, I'm for HTTPS connections to things like government websites (which is what this article seems to be mostly about), but against "HTTPS everything" in the way it's going to be implemented.


If you want to avoid telemetry, you need to use open source devices. Prohibiting HTTPS for everything non-government is not a way to go.


> but against "HTTPS everything"

Being against HTTPS everything is the same as being in support of MITM attacks somewhere. I am curious: when is that an allowable case?


Forcing the use of encryption puts a constraint on the minimum level of processor power and memory needed for HTTPS capable IoT devices. Doing a TLS handshake with 2048-bit keys can be problematic with current low end processors. Simple devices with no security concerns can be given a web interface with cheaper hardware so long as browsers still work with plain HTTP.


Even the ESP8266, which you can buy for a couple of bucks in bulk, can do TLS. Is there really a need to support lower-end chips?


I own my computing devices and I should be able to control the traffic they create. Encryption must not prevent me from doing that.


Encryption does not prevent you from doing that, just everyone else.

Of course, with non-free software and walled gardens, that might involve some amount of reverse engineering, injecting a CA certificate in a trust store so you can run a MitM proxy, or do something to bypass key pins, but that's never really stopped anyone from finding out what an application is sending on the wire.

You acknowledge that there is a certain amount of traffic that ought to be encrypted, so you really need a solution for all applications either way.


Effectively, I feel like it does prevent you from doing that due to the reverse engineering necessity. The time multiplier between engineering vs reverse engineering is too large.

Who's going to spend the time hacking through {random Chinese smart lightswitch clone #8392727} that's sold in small volume?

There's going to need to be a legal "right to decrypt traffic" on black boxes, if we're serious about this.


And that's where we run into problems. How do we make it so that You can decrypt the traffic from your devices, but random hackers, your ISP, the NSA, etc can't? It's the same arguments against special decryption keys for the Government - a backdoor for one entity can be exploited by other entities.


How do we make it so that You can decrypt the traffic from your devices, but random hackers, your ISP, the NSA, etc can't?

The suggestion made at https://news.ycombinator.com/item?id=13303650 of terminating TLS at the border addresses this --- traffic on the public Internet is encrypted, but is decrypted in the private local network. In some ways it is similar to a VPN. I run a filtering/adblocking proxy that works in the same way.
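
One way to sketch that with nginx (hostnames and addresses are hypothetical): LAN devices talk plain HTTP to the border box, and it re-encrypts toward the outside world.

    server {
        # plain HTTP on the trusted LAN side
        listen 192.168.1.1:8080;

        location / {
            # encrypted on the public Internet side
            proxy_pass https://api.example-vendor.com;
            proxy_ssl_server_name on;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
        }
    }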


Any pointers on what the encapsulation for that would look like? It seems like one good option, but I'd say it's only feasible if it doesn't require work on the part of the manufacturer.

My other thought was just mandating a method of loading CA certs onto all IoT devices using an open standard connector. If the owner so chooses.


In fact, injecting a CA into an embedded light switch is borderline impossible. At least it is much harder than installing a user CA on your phone.


> I own my computing devices and I should be able to control the traffic they create.

And then you are advocating for MITMing them, instead of plainly controlling what traffic they create.

If you really want to control them, you should be advocating for open source and the end of DRM.


I do advocate for open-source, but that is often not a practical solution. MITM is more powerful.


Just install your own root certificate on the devices you own and do MITM analysis with it.


And you can still MITM the traffic; you just have to install your MITM proxy's cert on the device you want to MITM.


This only kind of works. Apps can embed their whole certificate chain and ignore the system one. I don't disagree we should have HTTPS everywhere, but for reverse engineering it does make things harder.


Since you own the computing device and the connection, it's theoretically possible to read the session encryption keys from its memory.

This may be really hard in practice though.


You can pretty easily. Just use a proxy that does TLS interception. Not a big deal these days.
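
For example with mitmproxy (a sketch; exact flags depend on the version):

    # run an intercepting proxy on port 8080
    mitmproxy -p 8080
    # then point the device at it and install mitmproxy's CA certificate
    # (generated under ~/.mitmproxy/) into the device's trust store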


If your device connects to a third party, and you can't change this behavior, do you really own that device?


Idea: HTTPS-terminating home routers with an open standard would be a solution for IoT devices, maybe something like UPnP?


This objection has always seemed pretty weird to me.

Of course you can type HTTP commands character-by-character into a terminal. You just have to use a TLS-aware tool to do it.

Meanwhile, you can't really just type HTTP commands to a server without tooling, because a whole bunch of TCP is happening behind the scenes. Why is "telnet" OK, but OpenSSL "s_client" isn't?


You can't type any lines beginning with Q or R with s_client.


You can use the -quiet or -ign_eof options to disable this.
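
For example, the same request as earlier without the R/Q line handling (-quiet also suppresses the certificate dump):

    openssl s_client -quiet -connect news.ycombinator.com:443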


You can also use socat (the multipurpose relay) to provide a bare TCP socket to which you can connect, even using telnet.


Then it should also be noted that you really shouldn't use telnet to interact with HTTP or SMTP either. Telnet has issues if you send control characters, which wouldn't be used for either protocol, but might exist in embedded data you might want to send or receive. Just use netcat.


I guess, but it's hard to think of an HTTP request you can't reasonably make with telnet, while s_client apparently won't let you use the Referer header.


I got you bro!

    $ openssl s_client -connect localhost:443
    CONNECTED(00000003)
    ...snip: lots of cert info...
    GET / HTTP/1.0


As others have pointed out, there are tools for creating a TLS socket, not much harder than nc or telnet. However, HTTP/2 will be a totally different game.


Even with HTTP you usually don't use netcat beyond playing with simple GET requests. Most web app developers and hackers use developer tools available in their browser, libraries for their favourite language (urllib for Python, LWP for Perl) and specialized command-line tools (curl).


Fully agree, but I think the point was about the educational value of literally being able to handcraft HTTP requests. I learned a ton back in the '90s doing exactly that for various plaintext protocols (FTP, SMTP, POP3, IMAP, HTTP).

Yes, there are tools that let you inspect them, but there's something about being able to walk around right at the protocol level to understand its nuances (e.g.: CR/LF issues with HTTP).

However, all of that being said, I'm sure the real old-school hackers think all this PHP/Python/Perl mumbo jumbo obscures the real C/C++ code which their interpreters actually drive. And those old-old-school hackers think the C/C++ guys are obscuring their assembly code... okay, I kid, but you get the point. We all deal with abstractions at some point. Perhaps in time, HTTP/2 tooling will improve, and my concerns will vanish as well.


FWIW, OpenBSD's netcat supports creating a TLS socket. I'm not sure it will work on all Linux distributions since it depends on libtls/LibreSSL and most ship with OpenSSL.
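
On OpenBSD that looks something like this (assuming the -c flag, which enables TLS in OpenBSD's nc; other netcat variants differ):

    nc -c news.ycombinator.com 443
    GET / HTTP/1.1
    Host: news.ycombinator.com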


Learning a field should get easier the more we collectively know about it - the information should be structured better. This is particularly true of software, where we decide how to structure it. If tech is getting harder, this is our failing, not an inevitability.


Other way round: the more structure we build, the more there is to learn. The blank-slate nature of software is particularly problematic here; people end up proliferating solutions faster than they can be learned.


You could type them over openssl s_client instead.


Indeed, the openssl tool is a veritable swiss army knife for PKI crypto.

It's a netcat replacement:

    openssl s_client -connect www.google.com:443
while also providing information on the TLS handshake that's useful for debugging (like the server's certificate chain or its list of trusted CAs for client certificates).

There are dozens of other subcommands to do useful things like decode certificates (x509), generate keys (genrsa/gendsa) and create certificate signing requests (req), just to name a few.
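
A few illustrative invocations (file names are placeholders):

    # decode and pretty-print a certificate
    openssl x509 -in cert.pem -noout -text

    # generate a 2048-bit RSA key, then a certificate signing request for it
    openssl genrsa -out key.pem 2048
    openssl req -new -key key.pem -out request.csr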


It's not a great netcat replacement, for one it's not binary transparent (eg. a SMTP "RCPT TO" causes a rekey due to the pattern "\nR.*\n"). The command line usage is atrocious too.

That said there are many good alternatives (ncat, telnet-ssl, etc), and eventually one will gain the popularity and ubiquity that nc, curl, and similar tools did before them.


openssl s_client is really valuable if you regularly work with minimal container images that don't have curl, wget or even ncat. openssl(1) is almost always there.


The bigger issue is hardware / software that manufacturers won't update. HTTPS best practices often require servers to drop support for old protocols.

So if you pull out a 10 year old palm pilot and try to go to HN it won't work due to SSL.


Indeed, computer history museums are going to only be able to show older working exhibits - cloud backed gadgets will just be objects to look at and imagine what they used to be.


You might have to do some custom work to make some of the "history" accessible.

For example, Stanford's CS144 class uses a patched version of the Linux kernel to enable people to create their own TCP/IP clients[1]. I'm sure that if stuff like this becomes really inaccessible for newbies, similar modifications will be done for other applications to allow simpler concepts to be taught and explored.

[1] http://web.stanford.edu/class/cs144/assignments/ctcp/assignm...


I think that's more a tooling problem. Making tech accessible to someone new is really important.


Job security.


Related: last week I deployed my web app for the first time, on AWS. In two clicks and for free I was able to create and configure two certificates, one for the React app on CloudFront and one for the API on Beanstalk. Really surprised and amazed at how simple and smooth the whole process was. Thanks Amazon for this.


> The IETF has said that pervasive monitoring is an attack, and the Internet Architecture Board (the IETF’s parent organization) recommends that new protocols use encryption by default.

While HTTPS does prevent just anyone from monitoring, I've long been under the impression that the government, and possibly influential corporations, probably have access to any certificates issued by the large CAs.

Is this a tinfoil hat theory? Is anything legally or technically preventing this from happening, and/or are there ways for me to know when my own browsing is truly private between myself and only the party at the other end, and not other curious or intrusive uninvited third parties?


> I've long been under the impression that the government, and possibly influential corporations, probably have access to any certificates issued by the large CAs.

Everyone has that access. Click on the padlock icon on any HTTPS-using website, and after a few more clicks you can export a copy of the site's certificate (at least on Firefox).

But that gains you nothing without the corresponding private key. The private key is generated on the website's server, and is never sent to the certificate authority (what is sent is a "certificate request", which has basically the same information found on the certificate).

> are there ways for me to know when my own browsing is truly private between myself and only the party at the other end, and not other curious or intrusive uninvited third parties?

Now that's a different question. While having access to the certificates is no problem at all, being able to create a new certificate for an arbitrary website allows one to pretend to be that website. The only defense against it is that, if a CA is caught issuing these certificates, it risks being removed from the browser's trust lists, which is a death penalty for a CA's business. Also, there is a new initiative (Certificate Transparency) to make it easier for these certificates to be caught.


> Now that's a different question. While having access to the certificates is no problem at all, being able to create a new certificate for an arbitrary website allows one to pretend to be that website. The only defense against it is that, if a CA is caught issuing these certificates, it risks being removed from the browser's trust lists, which is a death penalty for a CA's business. Also, there is a new initiative (Certificate Transparency) to make it easier for these certificates to be caught.

There is a defense against rogue CAs: HTTP Public Key Pinning (HPKP) [0]. Chrome, Firefox et al use an HPKP preload list, but unlike with Strict Transport Security (HSTS) there currently appears to be no way to submit one's own site for inclusion in the preload lists. See e.g. Mozilla's policy [1].

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key...

[1] https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinn...
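
For sites not on a preload list, the pins are delivered as a response header; a sketch with placeholder hashes (RFC 7469 requires at least one backup pin):

    Public-Key-Pins: pin-sha256="<base64-of-primary-key-hash>"; pin-sha256="<base64-of-backup-key-hash>"; max-age=5184000; includeSubDomains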


As well as what rhblake said, it's also always worth remembering a core aspect of all this: a lot of general authentication discussions are concerned with the extremely important issues of public scalability and ease-of-use, which is where things like the standard CA system come into play, but those don't necessarily need to apply for someone's own specific instance. So in this case:

>are there ways for me to know when my own browsing is truly private between myself and only the party at the other end, and not other curious or intrusive uninvited third parties?

Yes, you can use a side-channel. At that point it's just a tradeoff between how much effort you want to/can expend vs what value is at risk. At the simplest and easiest, this could just plain mean giving them a call (or message or even snail mail) and spending a few seconds having them verify what the certificate signature should be. Or for that matter talk to a few other people at geographically distinct locations and ask what they see. Combined with pinning (or manual trust/locking or whatever depending on your tool), merely knowing you've got the right cert and will be alerted if it changes may be all that's required, and it immensely raises the time/resource/expertise cost for any potential attacker. For more effort and a higher level of security, people can flat out physically exchange certs/keys (or even pure entropy) and bypass any 3rd party authentication involvement entirely. It's easy to make one's own entire CA architecture, in fact; all the tools are freely available. It's the general sharing and trusting that is the sticky wicket, but within the bounds of an existing trust relationship it's much more straightforward.

Obviously this is all more work than just going to a URL and seeing if it has a green lock, or CT or anything else, and because it's work each time it's unrealistic to expect the general population to do so in general. But if someone specifically needs communications to be "truly private between myself and only the party at the other end" then it's achievable by trading in more upfront setup work, and always has been. Much of the ongoing development and research and marketing efforts are about trying to raise the universal floor level, but everyone is free to exceed that if necessary.


You can also use the Perspectives plugin to have other servers check the certificate.

https://perspectives-project.org/


Correct me if I'm wrong, but isn't the whole point of the CA system that you only need to send the public key to the CAs?

So, unless the NSA possesses some P=NP level crypto breaking technology, it should be technically impossible for CAs to fake a particular public key, no matter how corrupt/undermined they are.

Of course there a few other more practical things an evil CA can do:

- generate another keypair and create a certificate that claims the key represents <organization of your choice>. Certificate Transparency is supposed to prevent that. Also if the organization gets to know about your fake certificate, prepare for their wrath.

- Offer a service in which you "helpfully" also generate the keypairs for your customers before creating the cert. Now you can simply save a copy of the private key and keep control. That will only work if your customers fall for it though and don't insist on bringing their own keys.


So, what's an embedded system supposed to do when it can't reach its NTP server and needs to validate a cert?


As usual: it depends.

Why does it need to validate a cert? How acceptable is it if you get it wrong? Depending on the answer, perhaps "have a more reliable clock" is the right answer (plenty of embedded devices have a decent idea of what time it is, and anything already big enough to validate TLS can probably manage a clock). It seems reasonably probable that the NTP server would be unreachable for quite a while before you lose track of the time entirely and can no longer validate certificates; so depending on the device, telemetry might be a good idea too.

It doesn't sound like a reason to give up, though :)


What I meant is:

- Device is rebooted

- Can't reach NTP, no RTC or dead RTC battery, happy that time is January 1st, 1970

- HTTPS breaks


You can reach a web host but not NTP? That seems like an edge case.


Presume the cert is wrong.

If your system can't handle that, a few options:

- Put a clock in your embedded element

- Pin certs

- Use a frontend: the embedded device only connects to the authenticated frontend system (say, with SSH port forwarding), and the frontend does the connection correctly.


Ask the user for the time.


  > What time is it?
  Jan 2nd, 2017
  > Sorry, your certificate is expired. Access Denied.
  Wait, I wrote the wrong date, it's Jan 2nd, 2016
  > Ok. Authorized


I mean, it kinda sounds and looks silly, but if you want to bootstrap security, it should do.

At least I can't think of a better way to get time from a server with a cert.


The main part is "when properly configured". I recently tried out the Brave browser on mobile, which supports HTTPS Everywhere. It slows down the mobile web experience so much that it is not funny. I really like having HTTPS everywhere, but unfortunately not everybody does a good job with it (handshaking, SPDY, HTTP/2, etc.).


Interesting, do you get that when browsing HTTPS sites with mobile Chrome/Safari?


Funny that this comes from *.gov. It's in the government's interest to keep traffic unencrypted so that they can intercept and store everything (makes life easier for the CIA/FBI/NSA). Why bother enforcing HTTPS? /me confused

The people at NSA are probably like "Oh god why? No, stahp it".


It's almost as if the government is made up of many departments and people who have different goals and values...


And this is a White House policy, with their official blog post and rationale here: https://www.whitehouse.gov/blog/2015/06/08/https-everywhere-...

"It is critical that federal websites maintain the highest privacy standards for the users of its online services. With this new action, we are driving faster internet-wide adoption of HTTPS and promoting better privacy standards for the entire browsing public."

(Disclaimer: I work on https://https.cio.gov, at GSA.)


The government also has an interest in ensuring that the nation's businesses don't fuck up due to easily preventable infosec failures. Infosec failures mean lost consumer confidence, lost economic development, lost taxes and much else besides. Governments have wider interests beyond simply sniffing all your traffic.

This is why the UK's National Cyber Security Centre - which is the friendly business-and-civilian-facing side of GCHQ - publish lots of guidance telling people to use TLS.

https://www.google.co.uk/search?num=20&q=allintext%3Ahttps+s...


Oh, I get it. The government's illegal hacking and capturing of data, despite increased infosec, doesn't equal lost consumer confidence, lost taxes and all that (or only to a lesser extent). Priorities, eh..


It's only in the NSA's interest so long as having open access to criminal communications is more valuable than allowing all of their flock to walk around unprotected.

Eventually the balance will flip, and our protectors will come to the rescue! Just in time, I'm sure.


The thing that convinced my employer this was actually important was the announcement that Chrome 56 would mark non-SSL login pages as unsafe. Cheers to Google!


One thing I haven't necessarily seen raised is that including subdomains on HSTS headers would also affect internal websites on the same domain.

I'd argue that it's a good reason/way to get all internal sites upgraded to HTTPS but it might not be feasible for a large organization. So something to consider.
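
For reference, the header in question (the max-age value is just an example):

    Strict-Transport-Security: max-age=31536000; includeSubDomains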


Isn't the CIO a presidential appointment? I wonder who the next one will be. Or, will there be one at all?


Yes, the federal CIO is a presidential appointment.


All web traffic that is not encrypted is vulnerable to having its contents altered enroute.

This is a type of man in the middle vulnerability that allows for javascript, posts, etc. to be changed into something malicious.


A+ on Qualys yeaaaa

I see people mention Let's Encrypt, maybe I'm a sucker paying the $9.00 for a year's worth versus free but every 90 days.


Let's Encrypt is set up to strongly encourage a completely automated set up, including renewal. From my experience "but every 90 days" is effectively for as long as desired.
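
A minimal sketch with certbot (paths and the domain are placeholders); the 90-day lifetime is a non-issue once renewal runs from cron or a systemd timer:

    # obtain a certificate once
    certbot certonly --webroot -w /var/www/example -d example.com
    # renew anything close to expiry; safe to run daily
    certbot renew --quiet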


Yeah maybe I'm just using it as an excuse for not learning to do Let's Encrypt. If it's the same as a standard $9.00 domain validation SSL then I could be saving that $9.00 by not being an idiot.


Is there any work on a solution for IoT or other end user programs/devices that would benefit from being able to serve HTTPS instead of HTTP?


https.cio.gov SSL Labs test result: https://www.ssllabs.com/ssltest/analyze.html?d=https.cio.gov

Of note is that they're using a Let's Encrypt cert and running in AWS.


"Today, there is no such thing as non-sensitive web traffic..."

I just don't agree. Is it sensitive for me to google a man page or look up some algorithm or another? Because that's mainly what I use the Web for. The Web - not intranet (which is usually quite sensitive).

Blog reading is sensitive? Hacker News? No, no they are not.


> Is it sensitive for me to google a man page or look up some algorithm or another?

It doesn't make sense to cut out such specific uses. The real question should be "are my Google searches sensitive?" and the answer to that, for most people, is "yes".

> Blog reading is sensitive? Hacker News? No, no they are not.

They're not particularly sensitive, but they show anyone watching your internet connection that you're interested in these subjects. Why let that happen when you don't have to?


The claim was that ALL Internet traffic is sensitive. I provided counterexamples. Position refuted.

I don't care who reads my Google searches. Hope they have plenty of coffee, because it's pretty boring stuff.

I still get on Usenet. So I know at a very deep level that it's all public. All to the better.


I use neverssl.com everyday


Interesting. I registered unencryptedwebsite.com for that purpose the other day (have not set it up yet, though).


Why?


Coffee shop WiFi captive portals will not come up unless you go to an HTTP-only site.


A better question is 'why HTTP for anything'?


Having a "https." subdomain feels so weird.


Yeah, should be https.www.cio.gov .

It's too bad the www.www.extra-www.org people are gone; they could have teamed up for a new proposal.


It's worth pointing out that this is an effort to secure government sites: "A Policy to Require Secure Connections across Federal Websites and Web Services"

Which makes sense to a degree and on the surface.

But it doesn't actually make sense. From an individual standpoint of "my personal government held information should be secure" it certainly does. From the standpoint that the government should provide easy and open access to data it does not. And let's face it - browser warnings about bad certificates are generally ignored, and the fact that they haven't presented one provides no real confidence that a connection is in fact secure.

Should I really have to pay someone to find out if a car has been used as a rental car or in an accident? Is there a real benefit to that data being transmitted via an https session vs. a http session?

If I'm looking at a congressman's voting history, is there a real benefit to that data being transmitted via an https session vs. http?

What's the drawback?

There's a layer of complexity in the way that isn't necessary or helpful. Anybody who has done web based automation has run into plenty of issues with expired / incorrect / insecure / misconfigured ssl certificates. And we've all seen trusted certificate authorities compromised. Heck, some of us compromise secure http ourselves pretty regularly as part of the development and testing cycle. What happens when we're accessing data from a custom device with minimal resources? Text processing and network connectivity is one thing. Adding in encryption support and maintaining SSL is a whole different problem to deal with.

And what happens when a guy working in a low-level office wants to expose a web service for something like viewing an event calendar? There's additional work to be done to obtain certificates and configure his web server to use them. So now maybe he doesn't do it because the bureaucracy in place is too painful.

---

I'm not a fan of making blanket decisions based on flimsy logic. I find it ironic that their first reason for doing this points to a TAG finding that not only points out several drawbacks, but lays out that one of the primary reasons for preferring secure communication would be to minimize pervasive government monitoring.


> If I'm looking at a congressman's voting history, is there a real benefit to that data being transmitted via an https session vs. http?

Yes.

Say you're a fan of Party A, and you'd like to find out whether your congressman voted the way that you like on some issue. Wouldn't it be a shame if your ISP was a fan of Party B, and tweaked the page in-flight to show you the voting results that you were hoping for, instead of what really happened? Or if some other organization found a way to inject some JS into the page to do that? For that matter, maybe Party A did it because they wanted hide a vote he had to make that they know might anger his supporters, but they'd rather have another Party A guy in Congress, even if he isn't perfect, than risk losing the seat to Party B.

Think all of that is paranoid nonsense? Well, pretty equivalent accusations have been thrown around in this election cycle by both sides, many by very mainstream organizations. I don't know that anything like this has happened, but it doesn't seem quite so crazy these days to say that it very well could.


Like many, you are forgetting the other half of what TLS provides. It's not only confidentiality, it's also authenticity. It ensures not only that nobody can eavesdrop on the connection between the browser and the server, but also that nobody can modify it, for instance to inject a piece of JavaScript with a browser exploit.


I thought I addressed that in a couple of different ways, but no I didn't forget.

I'm not saying that TLS doesn't have benefits. I'm just saying that a blanket policy is a bad idea for the US Government. If the policy for secure connectivity was "support-but-don't require" or "make your best effort to support" it would make more sense to me.


It's just basic connection integrity -- without HTTPS, there's no guarantee the user is actually getting the content that the site meant to send, and vice versa. Those kind of attacks really do happen, and at scale (e.g. Verizon XUID, or China's Great Cannon).

If it's at all interesting, I laid out in detail, with examples, why I've been comfortable taking a very hard line on deprecating HTTP:

https://konklone.com/post/were-deprecating-http-and-its-goin...


He's not necessarily "forgetting" it and I wish HTTPS-everywhere advocates would stop ascribing forgetfulness or bad motives. Some of us have a different opinion based on our own genuinely-held beliefs.

Requiring HTTPS everywhere is weighing certain factors above others. For many people, encryption and consequent lack of snooping/MITMing is an absolute which outweighs all other possible factors. That's fine. You're entitled to hold that belief. But please don't accuse others, who don't hold that as a trumps-everything absolute, of "forgetting" or (like the guy upthread who said "You being against HTTPS everything, is the same as being in support of MITM attacks somewhere") arguing in bad faith.


But he did forget the integrity benefits in his post. I understand that some people have different views, but if someone overlooks a significant point, it is ridiculous to get offended when someone else points it out.


>Some of us have a different opinion based on our own genuinely-held beliefs

Such as?

Browsing a website with an adblocker in a public place may literally give me a virus.

That's what HTTPS everywhere protects against.


Browsing any infected website, HTTPS or otherwise, may give you a virus.


Right, but if it's HTTPS, only the website can give you a virus. If it's HTTP, the website can give you a virus, plus the owner of any network device your requests traveled through on the way to and from the website. That's often a lot of owners.


> And let's face it - browser warnings about bad certificates are generally ignored

That's getting harder and harder to do. Doesn't Chrome require you to find three hidden buttons and enter a password to ignore the warning?

> Should I really have to pay someone to find out if a car has been used as a rental car or in an accident?

I don't follow... You don't need a certificate and the site can get a free one from LetsEncrypt

> If I'm looking at a congressman's voting history, is there a real benefit to that data being transmitted via an https session vs. http?

... not yet :)

But yes, it may allow to make certain conclusions regarding your interests, maybe your place of residence etc. But the larger idea here is to get users and the infrastructure accustomed to encryption. Chrome and Safari, for example, are moving towards treating any non-encrypted site as a security risk. That only works if the overwhelming majority of sites actually are encrypted, or it will just annoy people.

> There's a layer of complexity in the way that isn't necessary or helpful

which is motivation to improve the tooling. With LetsEncrypt the whole process has already become much easier. It's only a matter of time until SSL just works, the same way lots of other protocol negotiations invisibly happen today.

> What happens when we're accessing data from a custom device with minimal resources

They'll have slightly less-minimal resources. SSL really doesn't require a Cray anymore. IoT devices without encryption are a privacy nightmare anyway. Some of them actually have access to extremely sensitive data or, even worse, can control critical hardware – cf. pacemakers.

> And what happens when a guy working a low level office wants to expose a web service for something like viewing an event calendar?

He'll use google calendar, same as now. People who can't configure SSL have no business exposing services to the internet.


Use case that's pretty clear:

Many government websites provide a way to contact the government, either through address, email, or phone. An attacker can MITM the HTTP connection and replace that contact info with whatever they choose.

People place an incredibly high amount of trust in the government and what the government asks for, so every incoming call would have a >50% chance of resulting in complete and total identity theft.


If I can MITM your http connection, I can MITM your https connection.

But again, I'm not saying that there are no benefits to TLS, just that a blanket policy requiring TLS for _every_ government web service is a bad idea. One that prefers TLS or a best-effort-to-support TLS policy would make more sense.

Look at the history a bit. InCommon's use of insecure hashes required that all clients re-issue their certificates in order to avoid browser warnings. At the University level that was impractical to do quickly. At a federal government level? That's a whole different beast.

And look at Heartbleed. How long did it take the federal government to make all resources secure again? Is it actually a completed project?

There's operational overhead to consider. There's the fact that MITM for http is minimally different than https. There's the plain and simple fact that a lot of data simply does not need to be secure. And there's the fact that there are already existing problems with making data available to the public at large.

If you want to provide confidence in government data, you could say that all certificates used by government resources are secured by X,Y, and Z certificate authorities and that would be a legitimate benefit.


>If I can MITM your http connection, I can MITM your https connection.

No you can't, that's the whole point of public key encryption. Someone could give you every single byte, in order, from the entire two-way negotiation process, and you would still not be able to MITM your way into the datastream unless you reverse engineered both parties' secret keys, the ones used to perform the {sender locks lock, sends locked data, receiver counter-locks, sends doubly locked data back, sender unlocks, sends counter-locked-only data, receiver counter-unlocks, receiver now has the original data} step. Unless you know BOTH keys that were used in the purely local locks, you can do exactly nothing at all despite seeing every single naked byte involved.

Can you MITM it if you already know the keys? Of course. But that is not an argument against HTTPS, of course. Can you guess or compute the keys if you don't have them? Given enough time, again: of course. Can known shortcomings in a known cypher make the job of computing keys easier? Third time: of course. And that's why HTTPS lets you specify which "secure, today" cyphers to use, and which "no longer, or never secure" cyphers you will absolutely not accept. HTTPS with SHA-1 was very secure years ago, for instance. Today it's a stupid choice if you want to remain secure, because it's too easy to break.

HTTPS makes MITM "the least possible" by using asymmetric (public key) encryption for the two parties to agree on a symmetric cipher and session key. And it lets parties agree on stronger and stronger cyphers as time goes on, so that the integrity of connections stays a moving target. MITM is a theoretical possibility, but unless you literally own a half-a-billion dollar supercomputer capable of factoring the products of large primes in a reasonable amount of time, can you, personally? Plain and simple no.
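To make that concrete, here's a minimal sketch of the client side of this (Python standard library; the hostname is just a placeholder, not a real endpoint). With verification on, an on-path attacker has to present a certificate for that exact name chaining to a trusted CA, which it can't do without the site's private key or a compromised CA:

  import socket, ssl

  HOST = "example.gov"   # placeholder hostname, purely illustrative
  PORT = 443

  # create_default_context() turns on certificate verification and
  # hostname checking against the system trust store by default.
  ctx = ssl.create_default_context()

  with socket.create_connection((HOST, PORT)) as raw:
      with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
          # If an active MITM had intercepted the connection, the
          # handshake above would have failed with a certificate
          # verification error instead of reaching this point.
          print(tls.version())                 # negotiated protocol, e.g. TLSv1.2
          print(tls.getpeercert()["subject"])  # the verified server identity

A passive observer sees every handshake byte and still only learns which cipher was negotiated, not the session key.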


> No you can't, that's the whole point of public key encryption.

.

> but unless you literally own a half-a-billion dollar supercomputer capable of factoring large primes in a reasonable amount of time, can you, personally? Plain and simple no.

I'm not sure if you're being purposefully dense, if you really don't know how easy it is, or if you place too much trust in the chain of resources required to make an SSL connection.


1. Key pinning.

2. Certificate transparency

3. Can't do it "accidentally". That's why a lot of people have 2-foot-high fences: not that you can't jump over them, but to create the atmosphere that this is private, and if you get caught there you can't say "oops".

4. A non-government attacker (e.g. a malicious router) can't MITM.


1. Key pinning wasn't part of this policy, and regardless, implementations are few and doing it correctly is problematic at best.

2. Certificate transparency is not implemented in all clients (and won't be).

3. I do understand the 2 foot high fence, and I've re-iterated repeatedly that I don't believe that TLS is a bad idea or that it provides no benefits. My original comment was meant to point out that a blanket "https everywhere" policy for the federal government is a bad idea.

4. Malicious or friendly routers can MITM. Would you go to DEF CON, attach to an unknown wifi source, and pass your banking credentials?


"There's the fact that MITM for http is minimally different than https." Unless I'm missing something important, that can be prevented with key pinning.


Key pinning could do a little towards preventing MITM, but implementing it is dangerous - in fact the whole standard is dangerous. Smashing Magazine went live with it, then had a cert expire before their max-age header did - rendering their site unavailable to repeat visitors. Then there are attacks like pkpransom to deal with, or the fact that Chrome doesn't implement HPKP properly.

Only ~400 sites actually implement pinning. It's going to be more of a security problem than a security solution in the next couple of years.
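To be clear, computing the pins isn't the hard part; the operational commitment is. For reference, a rough sketch of how a pin-sha256 value is derived (assuming the pyca/cryptography package and a PEM certificate on disk; cert.pem is a placeholder path):

  import base64, hashlib
  from cryptography import x509
  from cryptography.hazmat.primitives import serialization

  # The pin is the base64 of the SHA-256 of the certificate's
  # SubjectPublicKeyInfo (DER), not of the whole certificate.
  with open("cert.pem", "rb") as f:
      cert = x509.load_pem_x509_certificate(f.read())

  spki = cert.public_key().public_bytes(
      serialization.Encoding.DER,
      serialization.PublicFormat.SubjectPublicKeyInfo,
  )
  print(base64.b64encode(hashlib.sha256(spki).digest()).decode())

  # Served as something like:
  #   Public-Key-Pins: pin-sha256="<primary>"; pin-sha256="<backup>"; max-age=5184000

Lose the key, or deploy a cert whose key isn't in the pin set while old pins are still cached, and you've locked returning visitors out of your own site, which is exactly the failure mode Smashing Magazine hit.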


Key pinning is not even needed. The entire point of SSL/TLS is to ensure end-to-end authenticity and confidentiality.

I believe the above poster does not fully understand SSL/TLS at all.


> Entire point of SSL/TLS is to ensure end to end authenticity and confidentiality.

The point is that country A can strongarm a certificate authority under their domain to sign any certificate they want. So if A wants to MITM google or github they can, and there's no way for you to know which certificate is the real one and which is the fake.


Nazi Enigma encryption was cracked because they encrypted everything, even repetitive information like weather reports. HTTPS for everything is promoted by the NSA.


Modern ciphers are designed to resist known-plaintext and even chosen-plaintext attacks.


You might need to read up on modern ciphers. They are protected against this kind of attack.
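The Enigma break leaned on predictable plaintext ("cribs" like the daily weather report) producing recognizable structure in the ciphertext. Modern authenticated ciphers randomize every encryption with a fresh nonce, so even attacker-chosen plaintext reveals nothing about the key. A toy illustration, a sketch assuming the pyca/cryptography package is installed:

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  key = AESGCM.generate_key(bit_length=256)
  aead = AESGCM(key)
  msg = b"WEATHER REPORT: wind northeast, force 3"   # the classic crib

  # Same key, same plaintext, fresh 96-bit nonces: unrelated ciphertexts.
  c1 = aead.encrypt(os.urandom(12), msg, None)
  c2 = aead.encrypt(os.urandom(12), msg, None)
  print(c1 != c2)   # True; knowing the plaintext doesn't help recover the key

So "they encrypted everything" stopped being a useful attack surface a long time ago.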


It's great to see a positive attitude toward security also gaining support in Government.

I just wish all software projects felt the need to be more secure [0]:

> Redis is designed to be accessed by trusted clients inside trusted environments.

> While Redis does not try to implement Access Control, it provides a tiny layer of authentication that is optionally turned on editing the redis.conf file.

> Redis does not support encryption.

This is just broken-by-design software development. [1] It's irresponsible in 2016/2017 to assume you have impenetrable perimeter security.

Redis' excuse is just bullshit. PostgreSQL supports authentication, SSL, and even client cert pinning.

[0] https://redis.io/topics/security

[1] http://antirez.com/news/96

Edit: Loving the downvotes by butthurt developers who have never had a security audit...


Different projects - different requirements.

Perhaps you should assume people who are using it like this carefully weighed security against their requirements and came to the conclusion that it's a legit tradeoff.

> This is just broken by design software development.

This is just your "I know it all" mindset.


> Perhaps you should assume people who are using it like this carefully weighted security against their requirements and came to the conclusion that it's a legit tradeoff.

Perhaps. Or perhaps "well, I'll just use Redis because everyone else uses Redis." This is, in my experience, common. Most people choosing a stack aren't qualified to evaluate the security approach of their stack.

I don't really have a problem with Redis when somebody's going to actually either do their homework on securing it or are knowingly taking on additional risk. But you have to know it to do it. And vanishingly few people do. (And they don't ask, because they don't know to ask. The five-minute-demo culture has seen to that.)


It's basically impossible to use Redis accidentally in the way you're suggesting - to have absolutely no idea what you're getting into and fail to understand that it can run out of memory and doesn't provide security. Sure maybe before you've ever tried it or read anything at all, but you can't get very far without finding out.

You want Redis to support https so that people who haven't tried it and haven't read anything about it won't accidentally shoot themselves in the foot?

Even if you're right, your argument convinces me it's smarter for Redis to not implement security and say so than to add some in an effort to hide things for people who don't know what they're doing.

Do you have any evidence that the problem you're talking about actually exists wrt Redis? Do you have any evidence that vanishingly few people understand & do homework to offset the body of evidence of the growing number of people that can deal responsibly in security and that things are getting better over time?


> It's basically impossible to use Redis accidentally in the way you're suggesting - to have absolutely no idea what you're getting into and fail to understand that it can run out of memory and doesn't provide security.

It emphatically is not "basically impossible". I've worked with numerous teams where--well, we needed a key/value store, and Redis is cool, so let's use Redis. Hell, we don't even have to read the manual in the hallowed Age of Docker, just pull a container and there we go! Now we have Redis. Did we think about how it worked? Did we think about the security implications? No, because we needed a key/value store and now we have one and we can stop thinking about it now.

(This is why people pay me the medium bucks: to, eventually and usually after getting screwed and potentially harming their users who have no say in the matter, fix their mistakes.)

We have created a culture where ignorance of everything that is not your stack of choice is OK--and your dependencies are somehow not part of that stack, just your programming language and maybe its runtime. This is a bad culture, but it's what we've got and what the Thought Leaders seem to want. It falls to people who do give a shit to build firebreaks to control the damage that can be wrought by the practitioners of the culture of ignorance.

If nothing else, we owe it to the suckers who trust these people to do right by them.


Sometimes it can be safer to assume that something is completely insecure than to assume (incorrectly) that it is secure. Not implementing any security on a product guarantees the former - outside of complete user incompetence. I would also say that guaranteeing security is beyond the scope of most projects, especially those with limited development resources (Redis, for example).


For info: downvoted you not because of butthurt, but because I believe "just bullshit", "it's irresponsible", and "butthurt" are unworthy of a civilised debate.


> but because I believe "just bullshit", "it's irresponsible", and "butthurt" are unworthy of a civilised debate.

The main beef I have is that on an article about expanding the use of transport layer security, people are arguing against implementing transport layer security. To me this is an incredibly hypocritical stance for people to take.

You can downvote me because you disagree with the way I'm communicating my message, but I hope you don't disagree with the sentiment that we should be working to implement transport layer security everywhere and not just for protocols like HTTP.

Also saying my points are unworthy of civilised debate because of the language I use strikes me as incredibly naive. Donald Trump says nasty things, but people can and do debate him because the issues at hand are important to them and to the public at large. Refusing to engage someone because you disagree with the language they're using, which in my case is not hate speech or discriminating against a specific group of people, is dangerous to civil society.


Redis and Postgres are meant to play different roles.

If you don't like what Redis does, you're welcome not to use it.


> If you don't like what Redis does, you're welcome not to use it.

I like what Redis does, and it's also heavily used in industry. What I am saying, not incorrectly, is that their approach to security is harmful to their users. Most of whom won't know, care, or implement additional security which they should.

It's the same argument with HTTPS. Of course HTTPS is optional in a web server, but all major web servers support HTTPS, and there has been a concentrated push to have more people using SSL.

e.g. LetsEncrypt

We should be making it easier for people to deploy secure services. Redis' approach to security makes it extremely difficult for developers to deploy secure services.

I'll say it again: It's irresponsible in 2016/2017 to assume you have impenetrable perimeter security.


I don't get why it would be that way? It's built to be deployed in a DMZ on a private network and that's how 99% of users (typically professionals) use it.


The idea of a hardened perimeter around a soft squishy interior has been proven repeatedly not to work.


> I like what Redis does, and it's also heavily used in industry. What I am saying, not incorrectly, is that their approach to security is harmful to their users. Most of whom won't know, care, or implement additional security which they should.

Then a different security story within redis wouldn't save them anyway. Security is not magic fairy dust that you sprinkle on something and then hope for the best.


This isn't really true. Envision a system where all components are secured by default: TLS-encrypted connections in and out (which doesn't currently really exist unless you put in the work), a VPN for entry, individualized SSH keys (via SSSD or something) for machine access controlled via group policies and whatever directory services are up for grabs.

These are pretty commoditized parts that should be easily deployed and should be easily bolted together. And in such a universe, you do get a pretty strong security story without having to break your back.

My main beef with stuff like Redis is that in almost every case I can think of it's selected by developers. Security isn't on their radar. The systemic component parts we currently have don't value security and offload it to the literally (not pejoratively) incompetent.


> My main beef with stuff like Redis is that in almost every case I can think of it's selected by developers. Security isn't on their radar. The systemic component parts we currently have don't value security and offload it to the literally (not pejoratively) incompetent.

Yes, this is exactly the point I was trying to make.


> Redis' approach to security makes it extremely difficult for developers to deploy secure services.

A TLS proxy is not extremely difficult to deploy if you really need to talk to Redis that way. But you shouldn't be using Redis if you do.
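For what it's worth, "TLS proxy" usually means stunnel or HAProxy sitting in front of the Redis port, but the idea is small enough to sketch. A rough, assumption-laden example (Python standard library only; cert/key paths and ports are placeholders, and this is not production-grade):

  import socket, ssl, threading

  LISTEN = ("0.0.0.0", 6380)     # TLS-facing port for remote clients
  REDIS  = ("127.0.0.1", 6379)   # plain Redis, bound to localhost only

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain("proxy.crt", "proxy.key")   # placeholder paths

  def pump(src, dst):
      # Copy bytes one way until either side closes.
      try:
          while True:
              data = src.recv(4096)
              if not data:
                  break
              dst.sendall(data)
      finally:
          for s in (src, dst):
              try:
                  s.shutdown(socket.SHUT_RDWR)
              except OSError:
                  pass

  def handle(client):
      backend = socket.create_connection(REDIS)
      threading.Thread(target=pump, args=(client, backend), daemon=True).start()
      pump(backend, client)

  with socket.socket() as srv:
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(LISTEN)
      srv.listen()
      with ctx.wrap_socket(srv, server_side=True) as tls_srv:
          while True:
              conn, _ = tls_srv.accept()
              threading.Thread(target=handle, args=(conn,), daemon=True).start()

In practice you'd reach for stunnel, HAProxy, or an SSH tunnel rather than hand-rolling this, and the Redis instance itself stays bound to 127.0.0.1 either way.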


Authentication and encryption take a lot of CPU, but turning security OFF for performance reasons should always be behind a configuration flag.

It's illegal to break into something, but if you leave the door wide open the insurance company might not be so willing to pay out, and in some jurisdictions it's actually legal to walk into open rooms.


I took those statements to be suggesting that you should be aware and implement your own security. What do you need the https access for, and why is it irresponsible to separate the database concerns from the security concerns?

I don't know, but it might be extra responsible of the Redis devs to suggest they shouldn't implement your security for you, and they should stick strictly to databases.

I'm not currently using Redis, but I would like to, and I've looked at it several times recently. Initially, I had nearly the same reaction as you, but now I think I interpret it differently.

I'd assume that you have to wrap any scalable or public Redis access in an API, you will never expose the DB directly, so https would only be used for dev work and for transport between the DB and the API layer. Both of those I would normally implement security around myself, and for API transport it is unlikely to matter when I have the API host and the DB host inside the same network.

I would think that removing Redis from the security equation probably makes a security audit easier and not harder, since you would configure dev access over ssh & share existing keys, reducing the number of entry/failure points. Same for the API transport, if necessary.


> I don't know, but it might be extra responsible of the Redis devs to suggest they shouldn't implement your security for you, and they should stick strictly to databases.

Do you still agree if I change your statement to:

I don't know, but it might be extra responsible of the nginx devs to suggest they shouldn't implement your security for you, and they should stick strictly to HTTP.

I would imagine you would not agree, because it's ridiculous to have a web server only supporting HTTP in 2017. This is exactly my argument about Redis.

I'm not saying Redis should implement their own crypto. I'm saying it's user hostile to say "We don't support security because perimeter security is infallible and plus no one really cares about security anyway" and you should go use another utility to wrap their socket in SSL.

Sure, it's not their job to be security experts, but neither are the:

  - PostgreSQL developers

  - openLDAP developers

  - rsync developers

And yet somehow they manage to support SSL within their application.
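To make that concrete, here's roughly what "bundling TLS support" looks like from the PostgreSQL user's side: a sketch assuming psycopg2 and a server with ssl = on, where the hostname, password, and CA path are placeholders. The driver does the encryption and the certificate check; no socket-wrapping sidecar required.

  import psycopg2

  conn = psycopg2.connect(
      host="db.internal.example",      # placeholder
      dbname="app",
      user="app",
      password="...",                  # placeholder
      sslmode="verify-full",           # require TLS and verify the server identity
      sslrootcert="/etc/ssl/certs/internal-ca.pem",
  )
  with conn, conn.cursor() as cur:
      cur.execute("SELECT version()")
      print(cur.fetchone()[0])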

If you read reports about data breaches lately, it's usually the following pattern:

  - Send a phishing email to someone in the org with malware

  - Infect their computer

  - Pivot inside the network and exfiltrate the goodies

In this scenario, perimeter security is utterly worthless because the attacker has gotten themselves right into your network.

If you have encrypted data between services (e.g. application servers to database, application to redis) then they also have to pop the actual servers to get the data, instead of just sitting on the network and watching traffic go back and forth.

People say "oh it's not a big deal, redis is just storing sessions" until someone grabs a session from the app server to the redis server, and then pops your site as an admin.


> I'm saying it's user hostile to say "We don't support security ... "

Redis doesn't say that, and you might not want to put quotes around something they didn't say.

Here is what they do say:

"Redis is designed to be accessed by trusted clients inside trusted environments. This means that usually it is not a good idea to expose the Redis instance directly to the internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket."

"in general, untrusted access to Redis should always be mediated by a layer implementing ACLs, validating user input, and deciding what operations to perform against the Redis instance. In general, Redis is not optimized for maximum security but for maximum performance and simplicity."

"Access to the Redis port should be denied to everybody but trusted clients in the network"

https://redis.io/topics/security

That's basically what I assumed, and this is a perfectly valid security model. Just because some other software with different design requirements supports certain connection types doesn't mean Redis needs to; you might want to re-examine your own assumptions.

I don't buy the analogy between Redis and nginx because nginx is a web server and Redis is an in-memory database.

A better analogy than any you have provided is between Redis and memcached. "The closest analog is probably to think of Redis as Memcached" http://stackoverflow.com/a/7897243/1424242

Memcached also doesn't support encryption, and only very recently basic authentication. Their wiki has language very similar to Redis:

"How do I authenticate? You don't! Well, you used to not be able to. Now you can. If your client supports it, you may use SASL authentication to connect to memcached."

"Keep in mind that you should do this only if you really need to. On a closed internal network this ends up just being added latency for new connections (if minor)."

https://github.com/memcached/memcached/wiki/ProgrammingFAQ#h...

"This [SASL authentication] does not provide encryption, but can provide authentication. Do not run this over the internet large, do use this to protect from neighbors and accidents from within large mostly trusted networks."

https://github.com/memcached/memcached/wiki/SASLHowto


Relying on firewalls and port-access is a sucker's game. There are too many ways to defeat it that TLS is not susceptible to.

This is a fault of Redis. It's good software, on some axes. It's bad software on this one and they really, really should be better citizens about it. Security is a collaborative effort and punting it onto largely-ignorant users does not wash your hands of it.


Why? What's your use case?

Nobody is saying that your Redis host must be open to unencrypted traffic, not the Redis folks and not me. You do have plenty of options for putting TLS in front of any connections to your Redis install.

Redis is simply not a drop-in replacement for a SQL server environment, and they're trying to make that clear. Expecting it to be one is more your assumption than a fault of Redis.

SQLite doesn't implement any security for you, and it's probably the single most used database. Is that bad software, are they bad citizens too? Should memcached implement https? How about ImageMagick or OpenCV?

I'm a bit confused why you're arguing that Redis should provide automatic security for "largely ignorant users". Redis cannot be safely exposed to public access or an open network under any circumstances, not because of security, but because of memory management. It must be wrapped by a API server. If someone setting up a Redis install doesn't know that or doesn't know about security or doesn't know how to provide a proper wall around internal programs, what business do they have setting up servers at all? How will locking down Redis help someone who can't configure SSH between their API & DB, which you're going to have to do no matter what?


I think you are misreading me. I can, and do, secure Redis instances when I use them. But I am paid to make systems secure and scalable. That's my business, and I have invested extensive time and effort in being able to do so. I don't have a problem when I'm using it. But you accidentally stumble on it here:

> what business do they have setting up servers at all?

That's a very good question, isn't it? I mean, I'd say they don't. But, despite my generally full dance card, most developers who don't have any business doing that at all...do.

Security is ultimately a group effort, that means it falls to those of us who do give a shit to provide what measures we can for those who don't. We have decided that we want developers who are incompetent to pretend to competence. There are pragmatic, practical firebreaks that we must implement to prevent them from hurting other people. You know, from hurting me and you.


Adding https support to Redis would only do what you suggest -- add pretend competence, and hide real problems. It wouldn't add real security. It's better for the sake of ignorant users for Redis to advertise that it's insecure and they're on their own than it is to hide the ignorance.

You didn't answer any of my questions. What is your use case? In what situations do you need https support? Why should Redis be thought of differently than memcached or SQLite or ImageMagick or any other program intended for strictly internal use?


> It's better for the sake of ignorant users for Redis to advertise that it's insecure

It doesn't advertise any such thing. The manual people don't fucking read isn't advertisement because this is the future, a five-minute-demo is all you need, just grab it and start coding away.

In a future populated by competence, I would fully and completely agree with you that Redis does enough. We live in a bad future. Sucks, but--the priesthood of competence has responsibilities.

> In what situations do you need https support?

"I am talking over a network." It's not a complicated use case. TLS end-to-end, full-stop, is the only tenable future. It's still imperfect, but it's leagues better than unencrypted-over-the-wire, even inside of a theoretically secure ('cause it probably isn't) network perimeter.


> It doesn't advertise any such thing.

That statement is false. I realize you feel strongly about this. I realize people sometimes don't read the manual. That doesn't mean the documentation doesn't exist. That doesn't mean Redis didn't try.

I'm sorry your view of other developers is so pessimistic, but just because some people do things without enough preparation doesn't mean the entire world needs to cater to their ignorance.

You still haven't answered my question: Why should Redis be thought of differently than memcached or SQLite or ImageMagick or any other program intended for strictly internal use?

There's a lot of software that uses HTTP connectivity for internal network features. Just because it exists and could be dangerous doesn't mean that HTTPS is the right answer.

Redis is designed for performance and not necessarily intended for a network at all. You are the one misunderstanding what Redis is even for, and you are the one failing to RTFM. You are complaining that people are ignorant of how to use Redis, but you are demonstrating your own ignorance here. Redis can't do what it's designed for if it switches to HTTPS-only connectivity.


[flagged]


I've upvoted you again only because you replied and engaged with me and finally attempted to answer my question, but you are being pretty uncivilized at this point, your language is out of hand and your attacks are personal. You're giving me reasons to stop listening to you. Goodbye and good luck.


> Nobody is saying that your Redis host must be open to unencrypted traffic, not the Redis folks and not me.

Sorry. I disagree. See: Redis does not support encryption. [0]

> Redis is simply not a drop-in replacement for a SQL server environment, and they're trying to make that clear. Expecting it to be one is more your assumption than a fault of Redis.

I never meant to imply that Redis was a replacement for SQL. My analogy to PostgreSQL was misinterpreted by everyone to mean me saying "Redis should be more like PostgreSQL" when really what I meant was "PostgreSQL isn't focused on security, and yet they still manage to bundle TLS support without making users install an additional application to wrap a socket"

> SQLite doesn't implement any security for you

SQLite is a file based DBMS. My point is about Redis lacking built-in transport layer security.

> Should memcached implement https?

memcached should implement transport layer security.

> How about ImageMagick or OpenCV?

To my knowledge these are mostly local services, although granted my experience is limited. My point to Redis was about transport layer security. If the services don't listen on any network interface, then I don't see the need to implement transport layer security.

You may still want to implement filesystem encryption to protect data at rest.

[0] https://redis.io/topics/security


Redis doesn't support encryption, that's true. But there is literally nothing stopping you from wrapping its traffic in an encrypted connection and protecting your host. Nobody is saying your host must be open to unencrypted traffic.

> To my knowledge these are mostly local services

That's a good term for it. Redis is a "mostly local service", that's exactly what their documentation is saying. Just because it listens on a network interface doesn't mean it's intended to be publicly exposed to the internet. That's why they state explicitly that Redis is not intended to be publicly exposed to the internet.

There is a lot of other software running on your computer that makes the same assumption and uses the network stack to communicate with other processes but is not intended for exposure to the internet. Redis isn't doing anything different. Go ahead and run netstat and take note of how many running services have a foreign address listed, and how many do not, of how many are using https and how many aren't.

The funny thing here is I completely agree with the premise of the TLA - I agree that we should be using HTTPS everywhere. But "everywhere" means communication over the internet, and host to host communication inside of complex networks. "Everywhere" in this context does not mean interprocess communication on my computer. Redis is designed to allow interprocess usage, and restricting its interface to HTTPS would actively harm Redis' design goals and prevent some people from using it the way they want. PostgreSQL is not designed for that kind of usage, it is solving different problems than Redis is.


Even were I to agree with your idea that Redis is designed to be a local service, despite just about everyone using it beyond toy-problem status sticking it on a dedicated caching machine: listen unencrypted on a socket, listen encrypted on a port. Done and done.
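And the first half of that ("listen unencrypted on a socket") is already just a couple of redis.conf lines plus a one-line client change. A sketch assuming redis-py, with the socket path as a placeholder:

  # Illustrative redis.conf settings:
  #   port 0                                  # stop listening on TCP entirely
  #   unixsocket /var/run/redis/redis.sock
  #   unixsocketperm 700

  import redis

  r = redis.Redis(unix_socket_path="/var/run/redis/redis.sock")
  r.set("session:abc123", "some-opaque-token")
  print(r.get("session:abc123"))   # b'some-opaque-token'

That covers the same-host case; anything crossing a machine boundary is back to the TLS/stunnel/SSH story above.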


I didn't intend for your other comment to get flagged, but since it happened -- I hear you, this is a decent point. Look, I'm not defending what Redis or anyone else is doing; I'm simply saying I understand why. I don't think it's really a super crazy choice, but I understand and agree that it is more dangerous for people who don't read the manual. There are more things they could do to help first-timers be aware of their security. I would imagine that replacing port connections with socket connections would be a pretty big step backwards in terms of ease of use, so I can imagine it'd be hard to convince them to go that way.


fork -> add feature -> pull request


There is no reason to use Redis over a non-local network, as it loses its performance appeal completely. Even talking to a Redis server on a different rack doesn't make a lot of sense, unless you have total control over the network there. So encryption by default goes against its value proposition.



