Security guy here. I'd argue that SSL and ACL are always good things to have, especially for systems that store data.

Modern security practices typically dictate a defense-in-depth approach. The idea is that you will be compromised at some point (no security is perfect), and as such you should make any compromise that does happen as minimal as possible--you want to prevent attackers who get a foot in the door from rummaging around your network.

Key pieces of any defense-in-depth strategy are things like encryption and authentication/authorization. If you're using redis to store any kind of sensitive material, you want to make sure that only people on your network with the appropriate auth credentials can access it. This is one of the easiest ways to prevent drive-by data theft.
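For concreteness, here is a rough sketch (not a recipe--the host name, user name, key pattern, and passwords are all made up) of what locking redis down with Redis 6+ ACLs can look like from redis-py:

    import redis  # redis-py client

    # Connect as the admin/default user to provision a restricted account.
    admin = redis.Redis(host="redis.internal", port=6379, password="admin-secret")

    # Create a user that can only touch keys under "app:*" and can only run
    # read/write commands--no CONFIG, DEBUG, FLUSHALL, etc.
    admin.execute_command(
        "ACL", "SETUSER", "app_user",
        "on", ">app-user-secret",
        "~app:*", "+@read", "+@write",
    )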

From here, SSL is a logical step. You need to ensure bad actors can't sniff network traffic and steal credentials.
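On the client side that's just a matter of trusting the right CA and presenting the ACL credentials. A hedged redis-py sketch, with hypothetical paths and the made-up user from above:

    import redis

    r = redis.Redis(
        host="redis.internal",
        port=6379,
        username="app_user",                        # ACL user
        password="app-user-secret",
        ssl=True,                                   # encrypt the connection
        ssl_ca_certs="/etc/ssl/internal-ca.pem",    # trust only our internal CA
        ssl_certfile="/etc/ssl/client.crt",         # optional mutual-TLS client cert
        ssl_keyfile="/etc/ssl/client.key",
    )
    r.ping()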

I can't speak to streams or the other features you feel complicate Redis, but I think SSL+ACL are very important tools for increasing the cost to attackers that target redis instances leveraging those features.




Many systems don’t do TLS in process. TLS proxying is probably more common for systems deployed in the cloud (e.g. running nginx on the same node, or using a cloud load balancer).
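To make the sidecar variant concrete: it's just a small TLS-terminating forwarder in front of the plaintext port. A toy asyncio sketch of roughly what stunnel or an nginx stream block does for redis (ports and cert paths are placeholders, and this is nowhere near production quality):

    import asyncio, ssl

    async def pipe(reader, writer):
        # Copy bytes in one direction until EOF, then close the write side.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(client_reader, client_writer):
        # For each TLS client, open a plaintext connection to the local redis
        # and shuttle bytes both ways.
        redis_reader, redis_writer = await asyncio.open_connection("127.0.0.1", 6379)
        await asyncio.gather(
            pipe(client_reader, redis_writer),
            pipe(redis_reader, client_writer),
        )

    async def main():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("/etc/ssl/redis-proxy.crt", "/etc/ssl/redis-proxy.key")
        server = await asyncio.start_server(handle, "0.0.0.0", 6380, ssl=ctx)
        async with server:
            await server.serve_forever()

    asyncio.run(main())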

AWS and GCP don’t even give you a way to install a cert yourself--you MUST use an ELB or bring your own certificate.


This is highly dependent on your environment. I work in finance and there is legislation saying we must encrypt all traffic on the wire.

Legislation aside, this also goes back to a defense-in-depth strategy; TLS proxying only works if the network behind the proxy will always be secure. You might be able to get away with running TLS on the same host as redis, but in all other cases I can think of you're going back to the 90's-era security policy of having a hard shell and a soft underbelly--anything that gets into the network behind your TLS proxy can sniff whatever traffic it wants.

EDIT: It occurs to me that you seem to be hinting at running redis as a public service. In that scenario it makes perfect sense to use a TLS proxy for versions of redis without SSL. That said, it's still important to encrypt things on your private network to ensure you aren't one breach away from having your whole stack blown open.


Regulated industry SRE here. I've run Redis at scale through stunnel, terminated through a proxy, and once Redis supported it, in-process.

In-process won by a mile, despite my feelings about redis from an operational perspective (read: not good). The added choreography, monitoring, and overall number of moving parts weighed heavily against external proxying.


Not sure what the argument is here. Many systems _do_ have TLS in process. Also, there are plenty of regulations/certifications that require encryption in transit. Terminating at a load balancer means you have an unencrypted hop.

> AWS and GCP don’t even give you a way to install a cert yourself.

If you mean as a service they provide, well, that’s what ACM[1] is for, no? I assume Google Cloud has something similar.

[1] https://aws.amazon.com/certificate-manager/


ACM doesn’t talk to EC2. AWS Enterprise Support will tell you with a straight face to let them handle TLS termination on the ELB/ALB, and keep things unencrypted in the VPC. Their claim is that the VPC has much more sophisticated security than TLS.


This is probably true. You can't eavesdrop on network traffic in a VPC because you never touch the actual layer 2 network; it's a virtualized network tunneled through the actual physical network, so you will never even see packets that aren't directed to you. I don't think there is a really strong security rationale for requiring SSL between an ELB and one of its target groups, but from a regulatory standpoint it's probably easier to say "encrypt everything in transit." This is also why ELBs don't do certificate validation on the backend. It's unnecessary and extremely cumbersome to implement well, so if you need to have SSL between the ELB and a host, you can just toss a self-signed cert on the host and be done with it.
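For anyone who wants the "toss a self-signed cert on the host" step spelled out: an openssl req -x509 one-liner does the job, or, as a hedged sketch with Python's cryptography package (names and validity period are arbitrary):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal-backend")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)        # self-signed: subject == issuer
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    with open("host.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("host.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))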


Can you see traffic between hosts in the same VPC, even if they wouldn't otherwise have access via security groups?

The scenario I'm imagining is that an attacker manages to gain access to one box in the VPC, and from there is able to snoop on (plaintext) traffic between an ELB that does TLS termination and some other box in the VPC that receives the (unencrypted) traffic.

If you encrypt all inter-box traffic, then this attacker still doesn't get to see anything. If not, then the attacker gets to snoop on that traffic.

I'm not sympathetic to lazy arguments like, "if an attacker has compromised one host in your VPC, it's game over". No, it's not. It's really really bad, but you can limit the amount of damage an attacker can do (and the amount of data they can exfiltrate) via defense-in-depth strategies like encrypting traffic between hosts.


You can't snoop on traffic between hosts in the same VPC. Here is a good video explaining why: https://www.youtube.com/watch?v=3qln2u1Vr2E&t=1592s. The tl;dr is that your guest OS (the EC2 instance) is not connected to the physical layer 2 network; the host's hypervisor is, and when it receives a packet from the physical NIC it only passes it on to the guest OS if the packet is actually addressed to it. So the NIC on the guest OS (your EC2 instance) will never even see packets that are not intended for it. Of course this gets slightly more complicated because AWS added some tools for traffic mirroring, so theoretically someone with the right access could set up a mirror to a host they control in the VPC and sniff the traffic that way. But if someone were able to pull that off then you're likely f'ed either way.


You can enable backend authentication on the ELB if you want that (you need to provide the certificate)


Right, they let you do it now, but iirc that is a relatively recent feature that was resisted for a long time. And for good reason, I think. The purpose of certificate validation is to verify that the remote machine is who it says it is. But those guarantees are already provided by the VPC's network virtualization. In order to impersonate a target instance you would need to MITM the traffic, which isn't possible in a VPC.


FWIW, you can re-encrypt from ALB/ELB to your instances.


This doesn't scale when you're using multiple replicating Redises, because every Redis needs to communicate with every other Redis. With TLS in-process, you can just sign keys, distribute them to the hosts, and you're done. With a tunnel like ghostunnel[1] (which we at Square built precisely for this type of purpose), you end up having to set up and configure n*(n-1) tunnels (which requires twice that number of processes) so that every host has a tunnel to every other host; rough numbers are sketched below.

[1]: https://github.com/square/ghostunnel
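To put rough numbers on that quadratic blow-up, here's a back-of-the-envelope sketch (the six-node topology is just an example):

    from itertools import permutations

    hosts = [f"redis-{i}" for i in range(1, 7)]   # say, 6 replicating nodes

    # Each ordered pair needs its own client->server tunnel so that every
    # node can reach every other node.
    tunnels = list(permutations(hosts, 2))
    print(len(tunnels))       # n*(n-1) = 6*5 = 30 tunnel configs to manage
    print(2 * len(tunnels))   # ~2 processes per tunnel = 60 extra daemons

    # With in-process TLS it's one cert/key pair per host:
    print(len(hosts))         # 6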


Latency alone is a sufficient reason to do TLS in process.



