This is pretty unhelpful; the case I would make is that it's providing security largely by defining the problem away. For instance: it's usually unrealistic to require that all administration happen through clean-room systems that don't ever browse the web.
The real-world practice of security is in large part the deployment of risky systems with mitigations in place for the likely attacks that will target them. So, for instance, getting everyone to talk to the admin console on a purpose-built Chromebook with no Internet access is probably not a realistic option, but getting every system with admin console access MDM'd and requiring access to admin consoles to route through an IdP like Okta to enforce 2FA is much more realistic, and thus likely to happen.
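To make the IdP point concrete, here's a minimal sketch (my own illustration, not anything from the document) of the kind of policy check an IdP-fronted gateway enforces before letting a request reach an admin console: verify the IdP-issued token and require that its `amr` (authentication methods reference) claim asserts MFA. Stdlib-only HS256 verification; claim names follow common OIDC conventions but the surrounding setup is assumed.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def admin_access_allowed(jwt_token: str, secret: bytes) -> bool:
    """Allow admin-console access only if the IdP-issued token is
    authentic (HS256) and its claims assert multi-factor auth."""
    try:
        header_b64, payload_b64, sig_b64 = jwt_token.split(".")
    except ValueError:
        return False  # not even JWT-shaped
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return False  # forged or tampered token
    claims = json.loads(b64url_decode(payload_b64))
    # "amr" lists how the user authenticated; require an MFA factor
    return "mfa" in claims.get("amr", [])
```

The point being: this check lives at the gateway, so it applies regardless of how locked-down (or not) the admin's endpoint is.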
The patterns in here that aren't unrealistic are pretty banal. I don't doubt that UK NCSC sees systems designed to be unpatchable, but modern engineering norms (Docker, blue/green, staging/cert environments) --- norms that have really nothing to do with security and are common to pretty much every serious engineering shop --- address that directly anyways.
Other patterns don't really make sense; for instance: you should design to make your systems patchable (sure, that's again a basic engineering requirement anyways), but also make sure your dev and staging environments aren't continuously available. Why? Those are conflicting requirements.
I respectfully disagree. I have seen many of these antipatterns in production in many medium & large size orgs, and I think the six scenarios presented in this doc are more common than you think.
The "browse-up" scenario is extremely common because engineers/administrators usually prefer to remote directly onto the systems their working on from their main machine rather than endure the inconvenience of needing to securely connect to another host first. Many of these admins/engineers would think it's inconceivable for their machines to be vulnerable but have no issues downloading dev tools, libraries and dependencies onto their machines from third party & untrusted sources (e.g. Github, NPM, etc).
"Docker, blue/green, staging/cert environments" --- believe it or not, these are seen as emerging trends in many orgs rather than the norm you suggest they are.
And regarding designing systems to be patchable, you say: "sure, that's again a basic engineering requirement anyways", but again I'd counter that I've come across many systems that haven't been patched in months or years because it's deemed too hard. A similar issue I've come across is where an org's DR processes have never been properly tested because it's too hard to fail over without causing significant disruption. Both can easily be designed for early on, but for legacy systems implemented without that foresight it remains an issue.
The way that I'm reading the "browse-up" scenario, however, isn't how you're describing it. Admins wouldn't "securely connect to another host" -- they'd have to use a trusted and known-clean device to perform all their administrative activities. Connecting to that device from another host (i.e. using it as a "jump box") seems to be specifically disclaimed as an "anti-pattern".
That's not how I read it. This part in particular:
> There are many ways in which you can build a browse-down approach. You could use a virtual machine on the administrative device to perform any activities on less trusted systems.
The point is to tailor your risk to the systems you're accessing. You should interact with less trusted content in more secure ways if you're also interacting with high-security systems.
So if you're using firejail/bubblewrap to consume less trusted content (web, email, videos, etc.) along with SELinux/AppArmor, I think your system would match their description of browse-down for most low- to mid-security systems. For high security, maybe Qubes/VMs. At the highest security you start thinking about multiple machines with KVM switches.
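The tiering above could be sketched as a small launcher that prefixes an untrusted program's argv with the appropriate sandbox wrapper. The firejail/bwrap flags are real, but the tier-to-wrapper mapping is my own illustration, not anything from the NCSC doc:

```python
SANDBOX_TIERS = {
    # lightweight app sandbox: throwaway home dir, no network
    "low": ["firejail", "--private", "--net=none"],
    # namespace jail: fresh namespaces, read-only view of the root fs
    "mid": ["bwrap", "--unshare-all", "--ro-bind", "/", "/"],
    # "high" would be a dedicated VM (e.g. Qubes), and "highest"
    # separate hardware behind a KVM switch -- no wrapper applies.
}

def sandboxed_argv(cmd, tier="low"):
    """Prefix an untrusted program's argv with the tier's sandbox wrapper."""
    return SANDBOX_TIERS[tier] + list(cmd)
```

E.g. `sandboxed_argv(["firefox"])` yields a firejail invocation with no network and a disposable home, which is roughly the browse-down posture for casual content consumption.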
Correct - the guidance serious shops give is to create privileged access workstations (PAWs) for critically sensitive work (think AD domain admin work or network engineers, etc). You wouldn't expect most devs to be down in the weeds, but who knows.
Another approach to browse up would be to not grant god access to a single administrator. Require all changes to go through a pull request that requires another admin’s thumbs up, etc.
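That "another admin's thumbs up" rule is the four-eyes principle, and the gate is trivial to state in code. A minimal sketch (names are illustrative; in practice this is what branch-protection rules enforce for you):

```python
def change_approved(author: str, approvals: set[str], admins: set[str]) -> bool:
    """Four-eyes check: a change ships only with approval from at
    least one admin who is not its author (self-approval is ignored)."""
    return bool((set(approvals) - {author}) & set(admins))
```

No single person, however privileged, can push a change alone: the author's own approval is subtracted out before the admin check.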
If we are talking engineering, there are OT systems that are not patchable. You cannot blue/green Docker-deploy a machine that is running an industrial system. It is all nice and easy if you run a web farm where you can just balance traffic over to another server.
For the first one, I would say you could make admins use "clean" Chromebooks, but probably no one is going to pay for that.
For the other, banal ones, I would say it is good to remind people that "management bypasses" are not a good idea.
I have a lot of trouble understanding what is meant by the "browse-up" scenario. If it's "don't use devices that can download stuff from the internet", I would deem this extremely impractical advice.