One of my biggest gripes with infosec is how un-empirical the field is. Best practices and advice are based on what security people think is the best strategy for preventing breaches, but rarely do I see that backed by actual data about how real world breaches actually happen.
This leads to an outsized focus on the latest sexy vulnerabilities (e.g. CPU speculative execution vulnerabilities) and fetishes for things like firewalls. Meanwhile people type 'npm install kitchen-sink' with no worries.
In my own anecdotal but real world experience most breaches result from phishing, downloaded malware, phishing to get the user to download malware, and malware-assisted phishing, in no particular order. Firewalls do nothing for that.
I'll also add that once I'm on a traditional network, ripping through active directory is generally not difficult - my first ever live pentest went from privileged non-AD asset to domain admin within about 4 hours. My average time to compromise has come down significantly since then, too.
There's a lot of bad/scam security focused on logging and monitoring, weird antivirus products, and securing the wrong things. The last network I compromised had dropped an obscene amount of money on a SIEM product that couldn't detect nmap or pass-the-hash (PtH) attacks. I achieved complete compromise with the same attack chain as my first ever pentest, because nobody had looked at the fundamentals of implementation/configuration security.
If I could list things that would actually secure traditional networks:
- Active Directory Hardening (See: ADSecurity, Microsoft AD Hardening Guidelines, ACSC Windows 10 Hardening Guidelines)
- Regular Patching and reliance on Microsoft Products (they're actually pretty good!)
Dunno if you'd consider these 'zero trust', but unless you've covered the fundamentals nobody is going to waste time figuring out how to abuse your network with speculative execution or drop a huge amount of budget to develop a perimeter breaching RCE 0day. Especially when in most cases sending shitware.docx.exe to a sales staff member (who is almost always going to run whatever you send them if there's a bonus incentive) will suffice.
"Application Whitelisting" Does that actually work anywhere except in an boring office? Everyone that has a functioning devshop always has too many holes in their whitelists to effectively protect them. Client machines WITH credentials has to be made untrusted.
There's no reason it can't work in, say, development environments - the simplest approach is to configure whitelisted directories which permit execution, which most solutions support. It's not perfect, but it helps prevent execution of questionable downloads / attachments.
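To make the directory-whitelisting idea concrete, here's a minimal Python sketch of the check such a solution performs before allowing execution. The directory paths are hypothetical, and real products enforce this in the OS kernel rather than in a script - this just illustrates the decision logic:

```python
from pathlib import Path

# Hypothetical whitelist of directories from which execution is permitted.
ALLOWED_DIRS = [Path("/usr/local/devtools"), Path("/opt/build")]

def is_execution_allowed(exe_path: str) -> bool:
    """Allow execution only from whitelisted directories.

    resolve() normalizes '..' components and symlinks, so a path can't
    escape an allowed directory by indirection.
    """
    resolved = Path(exe_path).resolve()
    return any(resolved.is_relative_to(d) for d in ALLOWED_DIRS)
```

A download landing in a user-writable location like `~/Downloads` fails the check even if the user double-clicks it, which is exactly the "questionable attachment" case above.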
> Client machines WITH credentials have to be made untrusted.
Network segregation is better if you truly care about security and costs in productivity. I think you'll find higher end environments have a 'trusted box' and a 'development box' which physically sit side-by-side on different networks.
We might be discussing this from different views. Network segregation is important even if you build everything as zero trust; firewalling something off is one of the easiest ways to not trust it.
I think the difference in perspective is that I only really need to work through a very narrow and well-defined interface to the systems that can be hacked, so my view of the attack surface is affected by that. As long as revealing my password to my client machine is acceptable, I'm fine - the client can be full of malware without severely affecting security.
Clean rooms ("leave everything connected on the outside", and variants of that) are REALLY ineffective. I've measured the waste from that approach - back then we spent a lot of time classifying which software had to be developed in such an environment. You do not want to work that way.
I don't disagree that segregation is still important, but it really depends on specific environment technical details and threat models.
Firewalling AD networks, for instance, really won't help if the administrative security model is flawed (network admins using privileged accounts to maintain endpoints, privileged local administrative/maintenance credentials being reused on critical infrastructure, etc.). The communication protocols for administration and general use, iirc, pretty much require bidirectional traffic to work.
If you don't trust the host you develop on then everything produced on that host must be audited by a trusted host. Maybe that works in environments where cost is not an issue, but I would be somewhat skeptical of any environment which attempts that without the appropriate resources. It also doesn't help in situations where source code disclosure is an issue (eg a dev posting too much to pastebin/stackoverflow/inadvertently searching google for paste buffer full of data etc).
The 'BeyondCorp' approach is the opposite to a firewall, and is designed to limit the damage of successful phishing attacks.
For example, if you have a 'trusted' network, a successful phishing attack gets you access to that network. If you have a 'zero trust' network, that attack only gets you access to the compromised user.
Zero Trust heavily leverages attack surface limitation to prevent lateral movement. This means even if an attacker has stolen passwords, cookies, or whatever single-factor token is needed to move laterally, they can't - because there's no connectivity.
> This leads to an outsized focus on the latest sexy vulnerabilities (e.g. CPU speculative execution vulnerabilities) and fetishes for things like firewalls.
sounds like a fantastic plot for an anime. i can even see firewall san in my head :)
seriously though, maybe phishing and malware are common because firewalls are working?
Really? In most deployments the firewall is only outward facing. Local isolation is possible but it breaks a ton of stuff and basically renders the LAN useless.
My own opinion is: Secure. The. Endpoint.
If your devices, OSes, etc. are not secure then your systems are not secure. A firewall will not save an insecure system, and firewalls and netsec in general gets far too much attention. That attention should be focused on OS-level and application level security.
Exactly - endpoints should not be listening on the network, for instance (it's not just about outbound connectivity).
Company laptops often have RDP or SSH open - and newly added software might expose a remote endpoint in future (or a 0 day, like EternalBlue).
And here it comes: then an employee works from home or a coffeeshop and anyone there can attack and try to login! Locking down these things is critical to securing the endpoint.
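As a rough illustration of what "listening on the network" means here, you can probe a machine for common remote-access listeners. A minimal Python sketch - the host and port list are assumptions, and a real audit would use something like nmap or `ss`/`netstat`:

```python
import socket

# Standard port numbers for remote-access services an employee laptop
# usually shouldn't be exposing; which ones matter is environment-specific.
RISKY_PORTS = {22: "SSH", 445: "SMB", 3389: "RDP"}

def exposed_services(host="127.0.0.1", ports=RISKY_PORTS, timeout=0.5):
    """Return names of services whose TCP ports accept a connection on host."""
    found = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(name)
    return found
```

On an untrusted network (the coffeeshop case above), anything this returns is a service strangers can start attacking.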
"Endpoint security" in practice translates to full disk encryption (good) but seemingly also corporate-mandated spyware that logs and reports process and network
metadata, even traffic and keystrokes (bad).
Security isn't the only thing in the optimization equation; endpoint security is only useful — and humane — to a point.