Defenders think in lists, attackers think in graphs (2015) (github.com/johnlatwc)
405 points by akyuu 3 months ago | 163 comments



"Attackers" usually have a single mission in mind (exfiltrate juicy data, destabilize the target, hold juicy data for ransom, etc.) and have the privilege of exploring as deeply as needed until the mission is accomplished.

"Defenders" (like the SOC) have to think in lists because they're tracking many signals and threat vectors at a time and need to prioritize which ones warrant their attention or require action because a regulator told them so (think high-scoring CVEs against code that was deprecated long ago).

Without having "defenders" posted at random places along the graph looking for interesting activity, I don't know how they'll be able to "think in graphs". To wit, the suggestions the author made would be, you guessed it, signals in a list that a "defender" would check against!


I disagree with this. In my experience we saw the red team use graph techniques to plot paths to high value assets over and over, where defenders were not even thinking about this approach. As soon as they did, they identified potential attacks before the red team launched them.

Smart teams will immediately adopt the red team technology that, for example, crawls AWS as a graph looking for paths from low- to high-value accounts.
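That crawl is, at heart, just a path search over a directed graph of "who can pivot to what". A minimal sketch in Python (the account names and edge data are invented for illustration; real tools such as PMapper or Cartography build the graph from cloud APIs):

```python
from collections import deque

# Hypothetical "who can reach what" edges, e.g. from assumable roles,
# shared credentials, or instance profiles. All names are made up.
edges = {
    "intern":      ["ci-runner"],
    "ci-runner":   ["deploy-role"],
    "deploy-role": ["prod-db-admin"],
    "auditor":     [],
}

def attack_path(graph, start, target):
    """Breadth-first search: shortest chain of pivots from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: from this start, the graph view says the target is unreachable

print(attack_path(edges, "intern", "prod-db-admin"))
# -> ['intern', 'ci-runner', 'deploy-role', 'prod-db-admin']
```

The defender's list view sees four accounts and no critical CVEs; the graph view sees a three-hop path from an intern to the production database.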

It's not a zero-sum or sea-change thing, but defenders absolutely can think more like attackers and leverage attacker tools more often in the act of defense.


This is why I think having separate red/blue teams or roles in a cybersec outfit is best: it allows each individual or unit to fully utilize each technique and to synthesize the results.


How do you define and track high-value assets? Perhaps in a list?


High value can exist along different dimensions.

The CEO is worth more in some ways than the HR lady.

The HR lady is worth much, much more than the CEO in other ways.

This added dimension turns the 1D list into a 2D map, or a graph.


Let's take an example: protecting a car. There are countless possible attacks on the software that runs automobiles, but the most common attacker goal is to steal the car. That means you want to protect the paths to that goal, regardless of what they look like or whether they are even specific to software.

Sure, attackers can also exploit CVEs to DoS the entertainment system, but who really cares if that happens?


> "Attackers" usually have a single mission in mind ...

No. This is a very common misconception, but in reality the vast majority of attacks are carried out as "let's look around, see what we can find, and figure out how we'll use it". The same actually applies to a lot of other attacking activities – intelligence, influence operations, propaganda, etc.


No.


Defenders use lists because they have to manage hundreds and thousands of assets at the same time. What do you do when you have to manage a ton of things? You make a list. You go through that list. You apply a checklist.

Now should defenders also make dependency graphs too? Sure, but they should be making lists first before dependency graphs and making sure things are up to date, that they assume limited trust, and that resources are isolated. Then they should make dependency graphs.

“Defenders have to think in lists and graphs and manage a billion things. Attackers just have to look at a few things.”


Put another way: creating a graph of your assets should be a part of your checklist.


In this scenario you seem to be arguing lists as the basis of graphs - this is still the author's point, and a subtle but critical difference in how the problem is approached by defenders.

You have to have the insight to pivot your list into a graph, otherwise you just have a list of Crown Jewels and play whack-a-mole on the 10,000s of ways they can be reached that you didn't consider.
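Concretely, that pivot can start small: take the flat asset inventory and compute, for each crown jewel, everything that can transitively reach it. A toy sketch (the asset names and edges are invented for illustration):

```python
# Hypothetical flat inventory: asset -> assets it can connect to.
inventory = {
    "laptop-42":  ["jump-host"],
    "jump-host":  ["app-server"],
    "app-server": ["customer-db"],
    "printer":    [],
}

def can_reach(graph, jewel):
    """Everything that can transitively reach `jewel` - the real attack
    surface for that asset, not just its direct neighbors."""
    # Invert the edges, then walk backwards from the jewel.
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    result, stack = set(), [jewel]
    while stack:
        node = stack.pop()
        for prev in reverse.get(node, []):
            if prev not in result:
                result.add(prev)
                stack.append(prev)
    return result

print(sorted(can_reach(inventory, "customer-db")))
# -> ['app-server', 'jump-host', 'laptop-42']
```

The inventory is still a list; the insight is running reachability queries over it, so the laptop three hops away shows up as part of the database's attack surface.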


It's just breadth vs depth.


I feel like this goes too deep. Or maybe it hits on the right reason, but the wrong cause. The defender’s job isn’t defense. Cyber security isn’t a sportsball game where there are clear even goals and objectives with alternating positions. It’s a side show, and a distraction from the main business of whatever else the defenders are trying to do. By contrast, an attacker’s entire job is to attack the system. There is no other purpose they are serving, no secondary masters or considerations that need to be used to weaken their attacks.

Attackers win for the same reason that Microsoft is better at publishing operating systems than Cisco: Cisco's operating systems are a means to an end. Microsoft's are the end.


Cybersecurity being a sideshow to the main event is a brilliant way to frame the problem.

It also explains why companies rarely get punished by the markets for data breaches.


Thing is, it's not a problem. A problem would be if cybersecurity was playing first fiddle.

(When concerns of security become the main worry in an organization, the term we historically use to refer to it is "police state".)


>police state

Isn't a police state where a government is concerned with security above all else? To my mind, a place where private organizations are above all concerned with security is the exact opposite, anarchy, since there's no collective security framework in place to take the security burden off private organizations.


Police state on the inside, anarchy on the outside. This makes it even more similar to governments - sovereign nations are the highest organizational level; beyond them, there's no one to defer to. International affairs is anarchy - everyone's pogo dancing (to the tune set by nuclear powers).


It’s firmly in the “cost” center.


That’s a worse framing than above. It doesn’t matter if it’s a cost or a profit center. It’s part of a trade off.

You could achieve a perfectly secure system, if and only if you make that system do exactly nothing. If you want to achieve any other outcome you will have to trade some measure of security for the ability to do anything. Or as Matt Levine so aptly put it: the optimal amount of fraud is non-zero


Indeed. But I’m pointing out if it’s not the goal, then it’s a cost to minimize to achieve the goal.


I think the main problem is that most organizations put security at the end of the budget list.


Even if they put it near the top, it's still going to be reduced in effectiveness by the actual needs and goals of the organization. Any company that has a VPN and remote employees is objectively and inherently weaker from a cyber security standpoint than an otherwise equally equipped company with no external access to the network. But they do that because remote employee access means they can do their actual business better. Any company that uses networked computers is objectively and inherently weaker from a cyber security standpoint than one which requires physically moving data from one machine to another by way of personal handoffs between employees at the same physical location, but they do that because it means they can do their actual business better.

It does not matter if cyber security is at the top or the bottom of your budget list, if the choice is ever "better cyber security" or "do more business", cyber security is always going to lose that battle. You will never convince a company to use E2E encrypted email for all communications with all customers and vendors, no matter how high on the budget list cyber security is, because doing so would actively hinder the day to day operations of the business.


I don't think this is relevant. Even on-prem "air gapped" networks get breached. I would say it happens on as frequent a basis as any other network tbh. Microsoft hacks get headlines because Microsoft is a public company; there are lots of undisclosed breaches happening out there.

Security vulnerabilities come from the same place they always have. Where IO happens, where transactions happen, and where an operating system does a lot of work. How attackers get to these points, what happens when they do, and then how the system reacts when a malicious event occurs are the factors that matter.

In today's world of complex technologies, I have yet to meet a single organization that is invulnerable to these threats. I've seen a lot of organizations limit damage, patch vulnerabilities, and generally manage their risk profile effectively - but losses are a part of the business.

IMO, the only thing that will really make a difference is when we have technologies sufficient to make the user more resilient. Only then can we have a truly safer web.


Citation extremely needed.

I have worked at 20+ companies, and the ones that had little to no security got ransomwared at LEAST yearly (with $50M+ in revenues), while the ones that had basic, standard security practices got zero network-wide intrusions (at least below, say, the nation-state level).

Now, COULD they have been exploited with a 0day? Sure. In theory both kinds of network could be exploited with the same technology, or by a dedicated actor, likely without an issue - they're internet-connected corporate networks, mostly with probably out-of-date tech. In practice, though, most attacks corporations need to mitigate are the drive-by trash that consumers also face.


> I would say it happens on as frequent a basis as any other network tbh.

...really?

I find this extremely hard to believe on its face. Sure an attacker can infect a system via a USB drive, but they need to get physically close to the victim (at least at one point in time). That both dramatically decreases the number of possible attackers and increases their personal risk.

It also becomes far more difficult for an attacker to exfiltrate any data.


Exfil may be tricky if the system is actually airgapped - I take GP's use of scare quotes to mean that most systems are "airgapped" by means of software-enforced security policies, which should correctly be referred to as "not airgapped".

As for the attack method, there's always the good ol' "flash drive found on a parking lot" vector.


> As for the attack method, there's always the good ol' "flash drive found on a parking lot" vector.

Right, which requires the attacker to be physically near the parking lot at some point! That decreases the number of possible attackers by several orders of magnitude at least.

> Exfil may be tricky if the system is actually airgapped - I take GP's use of scare quotes to mean that most systems are "airgapped" by means of software-enforced security policies, which should correctly be referred to as "not airgapped".

Ah, that makes more sense! I do think tpmoney was quite clearly talking about truly airgapped systems, however.


> Ah, that makes more sense! I do think tpmoney was quite clearly talking about truly airgapped systems, however.

Very much so. My point being that a truly air-gapped system is objectively more secure than one that is networked, and yet a bank or social network company that only operates with truly air-gapped systems will be strictly worse off than its competitors in its actual business of banking or social networking. And since their actual job is not objectively better cyber security, but banking or social networking, they are inherently at a disadvantage compared to Attackers whose business IS attacking (or, at one step removed, selling the resources obtained from attacking). In the name of making their business better, Defenders will choose weaker security, and attackers will choose stronger attacks.


Yeah, GP is sort of saying that seat belts are pointless since traffic fatalities can happen anyway


My point is that the vulnerable points, regardless of where they come from, are ultimately there because the purpose of the Defender is not to have perfect cyber security, but to use computers and technology to enable business. Or as you said, "losses are a part of the business"; and that's so because "the business" isn't cyber security.


If this is true, big if, it’s because air gapping doesn’t happen without a specific threat model in mind. Think of the airplane with red dots.


I’m sorry, but I really really really want some citations here - that a network with VPNs and LANs at multiple locations is as vulnerable as a single location that uses air-gapped computers, passing around, say, USB sticks to share git repos.

I am not sure I would enjoy working at the second place but I would really hope we weren’t an easy target


Viruses that infect USB devices can compromise systems based on air gaps.

Cf. eg., https://www.schneier.com/blog/archives/2013/10/air_gaps.html and https://www.schneier.com/blog/archives/2020/05/ramsey_malwar...


It's been shown many times that people will pick up random USB devices from anywhere and plug them into any computer without thinking. Airgapping just stops the automated scans and stuff that was already being stopped. Defence is reactive, so the momentum and advantage is always on the attacker side, and stopping the lazy ones doesn't do anything to stop the real threats.


People die in car crashes even though they have seat belts - it's been shown many times - so seat belts don't do anything to stop the real danger.


The costs of seatbelts are already built in to the car. The cost of airgapping is not. The sheer inconvenience and limiting of the potential employee pool would put it far out of budget for anyone but governments or very large corporations doing very sensitive work, and even in those cases it would be on a site-by-site basis, not org-wide.


> The cost of ... far out of budget ...

Yes maybe, but now you changed the topic and started talking about money and how expensive things are. Have a nice day anyway


The parent wrote that "most organizations put security at the end of the budget list", but he did not write that it should be put at the top of the budget list. Your criticism would only be valid if he had written the latter.


The parent wrote that the "main problem" was that they put the security at the end of the budget list. My argument is it doesn't matter where it is on the budget list, it will always be subservient to the actual business of the Defender. That is, my argument is the "main problem" is that perfect cyber security Defense isn't anyone's actual business.


the safest computer is turned off, in a Faraday cage, in a vault under a mountain.


Or thrown out of the solar system, antenna facing another galaxy.


For good reason - if you prioritize security above everything else, you end up building a rock, as the only system that's perfectly secure is the one that's completely useless and unusable for anything. Adding any feature, any utilitarian aspect to it, means compromising security right there and then. Security is never the goal, it's always an unpleasant cost to pay on the way to the goal.


Security management services balance confidentiality, integrity and availability. Spending more on security means you can have great availability despite the measures on integrity and confidentiality.

Look at any cloud provider. They get it right because they employ the best security management systems.



> if you prioritize security above everything else, you end up building a rock

Well, I'd prefer incapable people to build secure rocks over them building insecure non-rocks.


Where would you put it? Before feature development, headcount, infrastructure, marketing, sales, management, HR, facilities?


Compartmentalizing it is an issue in itself.

Replace "security" with "safety" for, IDK, space engineering or nuclear power. Does it still make sense?

Safety and security need to be integral parts of processes. It is not something you can acquire from a vendor or split out as the responsibility of separate team(s) who have to internally battle for resources and interface with the core development through escalation requests...


> Replace "security" with "safety" for, IDK, space engineering or nuclear power. Does it still make sense?

Here's a question: Why do most safety regulations require the force of law to get companies to enact, but no laws were necessary to get companies to adopt the internet? "Safety" is in a similar boat to security, with the benefit that you usually don't have people actively trying to harm your employees. But Safety often gets tossed out the window when it gets in the way of accomplishing the real goals of the organization. Why is the US military exempt from a number of safety regulations that private companies are beholden to? Because the military believes those regulations will hamper their real mission, which is not keeping individual soldiers safe.


Indeed. Also, it's even tougher to argue to stop increasing safety at some point, because it usually means arguing for accepting a certain number of lives or limbs lost. And yet we do, and we have to, because like with security, a perfectly safe system is one that does nothing at all.


But safety isn't security, and your random SaaS pets.com clone isn't a space shuttle. Safety is more about reliability. Security in space systems is very much bolt-on and split out: the concern of everyone working on a rocket is that it flies and lands safely. Who is or isn't authorized to launch it is a worry of another department.


Safety is not really about reliability. Maybe as a means to an end. It's about not killing people and maybe not destroying too much of your facility in case things go south.

Safety systems take over if your chemical reaction overheats the reactor; they prevent your logistics team from moving a train while it's being loaded with dangerous chemicals (a real example: the safety system was disabled by the logistics people - ironically, the company had moved those with poor safety records from production to logistics, because there they could do no harm).

You're mixing up safety systems with the property of "doing X is safe by construction" - but most plants inherently are not: e.g. in a small, manually fed reactor, your employees can just input the wrong recipe by accident; the safety systems should then take care of the mess. Or your junior chemist (who needs an expensive senior chemist for an established process?) can mess up an improvement to the recipe, resulting in the rapid unscheduled disassembly of your poorly maintained reactor, including the building and one of its operators (sadly a real example).


Security is PvP, safety is PvE. Different game, different tactics.


See my reply to sibling comment: if you're thinking safety is purely PvE, you're in for a bad time.

Humans are reckless idiots. If circumventing safety systems means they can go home 15 minutes earlier (or if it's the only/simplest way to reach some unrealistically high daily goal dictated from higher up), some of us will happily risk their own health, and that of a whole city, to do so.


Ok, not 100%. Human Factors can be critical in a lot of cases. Similar to security, designers try to eliminate the system's reliance on human actions wherever possible.

Still though, security is fighting against an opponent who searches for weak points and exploits them to the max. "Safety" protects against random natural accidental events (either internal or external in origin). If someone is just trying to get home, they want to get the job done quickly sure, but their intent is not to cause damage, their actions aren't targeted. It's a different risk profile.


Thanks for agreeing :) safety also protects against preventable human errors.

Maybe my examples are bordering on sabotage, e.g. "sabotage by accident".


It's insurance. Nobody likes paying insurance bills. Being on the defense side of cybersecurity means you are permanently on the back foot - not a pleasant position either. And the cherry on the cake: you spend most of your time cleaning up others' mess (CVEs, users being users...).


That doesn’t seem like the main problem to me. The original comment seemed more on point; it is for most companies a sideshow that makes no money and only works to enable their main business. That it gets a bad priority in the budget is a result of that underlying problem.


This is well put.

I think one way to drive home the point is defenders are cost centers and attackers are profit centers.


> By contrast, an attacker’s entire job is to attack the system. There is no other purpose they are serving, no secondary masters or considerations that need to be used to weaken their attacks.

This seems like a serious misconception. Cyber attacks absolutely have purpose, whether that’s to steal data, disrupt services, whatever. Your viewpoint might apply to unsophisticated actors who just want to break things and cause chaos, but it’s completely ignorant when considering nation-state actors and financially motivated criminals.


Yes, but that purpose is accomplished by way of attacking the system. By comparison, most cyber security defense gets in the way of the purpose of the defenders. We know this, and even joke about it openly. The ultimate secure system is a computer unplugged, sealed in a lead vault, encased in concrete and buried a mile beneath the surface. It might not be useful, but it is completely secure.

Even less flippantly though, we inherently know this. How many things do we do every day that could be "more secure", but "more secure" gets in the way? Do you memorize 256-character unique passwords for every site and system and refuse to record them even in a password manager? Do you use GPG-encrypted emails and only E2E-encrypted messaging services? Are all your home network devices independently firewalled, with strict inbound AND outbound rules ensuring they can only talk to the specific devices they should be able to talk to, and only on specific well-defined ports? Have you hardened your home network against data exfiltration via DNS queries? Is all network traffic fully encrypted with mutual client and server cert validation? If you've answered no to any of these questions, you have chosen to prioritize something else over better cyber security defense. And it's probably a good bet that at least some of that is because doing these things would actively get in the way of doing what you actually want to do with your electronic devices. You've knowingly chosen a weaker defensive stance to do something else instead.

Attackers on the other hand have no need to choose weaker attacks on your defenses in order to do something else instead. The attack is the point of their usage of their devices (and yours).

You might argue that the attacker might choose a lesser profile in order to remain hidden and beneath detection, but I would argue this still isn't the same choice. Given the option, no company would spend any time or money on resources for cyber defense. They would rather spend all that time and money on their actual business. But Attackers would spend time and money and resources on their attacks because those attacks directly serve their goals.


> If you've answered no to any of these questions, you have chosen to prioritize something else over better cyber security defense.

To add to this: I get irrationally irritated when some hack occurs and someone makes the comment: "Their databases weren't even encrypted! Amateurs!"

Okay mister wise-guy, let us see you "encrypt" the database at an organisation where that database produces a billion dollars of revenue annually.

Are you sure you aren't going to lose the encryption keys? Many billions of dollars sure?

Okay, you've made sure that the keys are safely backed up! Good job! Now rotate them. On a schedule. That's a process you will be required to hand over to a secops team to avoid you being a "bus factor of one". Good luck with writing out that process so nobody ever screws up.

Now provide access to the encrypted data to... everything and everyone. Because that's the point of business data. It's supposed to be consumed, reported on, updated, saved, exported, imported, and synchronized. Not just to systems you control either! To the CFO's tablet, to the third-party suppliers' ERP, and to every desktop in the place. There's a hundred thousand of them, across every continent bar Antarctica.

It's surely because they're amateurs that they haven't figured this all out already: cheaply, robustly, and securely!
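For what it's worth, the standard answer to the rotation part of this is envelope encryption: each record is encrypted under its own data key (DEK), and only the DEKs are wrapped under the master key (KEK), so rotating the KEK never touches the bulk data. A toy sketch - the stream cipher here is deliberately simplistic and unauthenticated, illustration only, not something to ship - and note it does nothing for the "provide access to everyone" problem:

```python
import hashlib
import secrets

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode stream cipher. NOT for real use:
    no authentication, no nonce handling. Symmetric: applying it
    twice with the same key returns the original data."""
    out = bytearray()
    for off in range(0, len(data), 32):
        keystream = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[off:off + 32], keystream))
    return bytes(out)

# Envelope encryption: the record is encrypted under its own DEK,
# and only the DEK is encrypted ("wrapped") under the master KEK.
kek_v1 = secrets.token_bytes(32)
dek = secrets.token_bytes(32)

record = stream_xor(dek, b"one billion dollars of revenue")
wrapped_dek = stream_xor(kek_v1, dek)

# Rotating the master key re-wraps only the tiny DEK,
# not the terabytes of encrypted records:
kek_v2 = secrets.token_bytes(32)
rewrapped = stream_xor(kek_v2, stream_xor(kek_v1, wrapped_dek))

assert stream_xor(stream_xor(kek_v2, rewrapped), record) == b"one billion dollars of revenue"
```

Which is exactly the point: the crypto mechanics are the easy, solved part; the hard part is everything around it - key custody, the handover process, and all the consumers the parent comment lists.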


It's never the encrypted database that gets hit, it's always the other database that was made because our first database was encrypted....


And the reason it was made is because the encrypted database may as well be a shrine to a dead god; it makes you feel awe, but it's otherwise completely useless.


I mostly just see "not even encrypted" in the context of password stores and PII, though — stuff no employee or CFO is ever going to need to visually examine — and there we have a lot of best practices to fall back on for keeping it encrypted all the time.


Attacker purpose is perfectly aligned, but defense may not align with tons of things, like user convenience or ease of development.


This is exactly right. There's also the underlying asymmetry; defenders need to get everything right, attackers only need one vulnerability. (Which is usually human behavior based.)


> By contrast, an attacker’s entire job is to attack the system.

If cyber defence is a sideshow to the actual businesses objectives, then surely the same holds of cyber attack and whatever the actual criminal/strategic goal?


Not at all. The attack is directly inline with the goal of getting to whatever is at the other end of the attack. By contrast, nothing about SSL certificates, forward secrecy, encryption at rest or authentication and access controls is inline with the goals of say Facebook to deliver as many cat pictures and political memes to you as possible, and as many eyeballs to advertisers as possible.

Or put another way, in a world with 0 cyber security attackers, there would be no money or time spent on cyber security defense. But in a world with 0 cyber security defenders, there would still be people attacking resources and taking things the owners would rather they not have.


Cyber attacks are in the same position as sales in most orgs: they can focus on getting other people's money and some failure rate is acceptable as they don't get paid much if they don't succeed.

The org can invest as much as they bring in, they're a profit center, where defense is always a cost center that will be reduced to the minimum acceptable.


> There is no other purpose they are serving, no secondary masters or considerations that need to be used to weaken their attacks.

Nowadays their purpose has some sort of monetization component, so there is consideration as to which attack vectors seem most likely to lead to the kind of monetization scheme they are targeting. For example, does a group of attackers ransoming companies prefer the same attacks as one phishing individuals? Are these even the same companies/groups? (I prefer "companies" at this point: they are organized crime, they are a company, and they have the same problems any small company has in deciding where to put its resources - we don't have a phishing division here, we ransom data; we don't do denial of service, nobody is paying us for that; we ransom data and that's it!)


Sure, the final business goal is selling access to the resources gained by the attack, but ultimately the attacking IS aligned with/directly enables that goal. Defense in cyber security is almost always at odds with the goals of defender's business. Or put another way, if there were no Attackers, no one would spend any money on cyber security defense. But if there were no Defenders, someone would still be paying for cyber security attacks.


> It’s a side show, and a distraction from the main business

I'm pissed this is accepted as normal for the IT sector, while in railroad engineering and aviation (where human lives are at stake) you'd get your licenses and certificates revoked. Therac-25 something.


Security != safety. Safety in civil engineering and aviation is paramount. Security is an annoying, bolt-on side issue at best, nonexistent typically (e.g. railroads).

Therac-25 was a safety event, not a cybersecurity event.


Also, security attacks in those engineering fields are not nearly as scalable as attacks on computer systems. So you don't have as much need to cover every single hole, because the damage won't be as great.


IT security is a thing at all because software safety is near non-existent.


How are you going to retain safety in a compromised software system?


You aren't, for the same reason you aren't going to retain safety in an airplane or a nuclear reactor when men with guns are shooting people and pushing random panels (and/or the other way around).

Even with "defense in depth", there's clear separation between parts that do the important stuff, and parts that protect the parts that do the important stuff from being used against the rules.

I'd go as far as dividing cybersecurity into two major areas of concern:

1) The part that constrains software systems to conform to some make-believe reality with rules stricter than natural -- this is trying to make the abstract world function more like physical reality we're used to;

2) The rules and policies - who can access what, when, why, and what for.

Area 1) should very much be a bolt-on third-party thing everyone should ideally take for granted. Area 2) is the responsibility of each organization/project individually, but it is also small and mostly limited to issues specific to that organization/project.

It maps to physical reality like this:

Area 1) are the reinforced walls, floor and ceiling, and blast doors that can only be opened via a key card[0];

Area 2) are the policies and procedures of giving and revoking keycards to specific people in the company.

--

[0] - Or by crossing some wires deep in the reader, because accidents happen and cutting through blast doors cost money. I.e. real-life security and safety systems often feature overrides.


You don't? Safety is not having that compromise in the first place.

Like, if the code can handle all input correctly, then there is no exploit path. Regardless of whether the input comes from an attacker or not.


You said "security is a thing because safety is nonexistent". This means that, if you had safety, you wouldn't need security. I'm asking you to explain how a perfectly safe system wouldn't need security, as someone who compromised it would be able to simply undo all the safety.


How do you compromise a safe system?


Say I wrote software to control a gamma ray knife, it's perfectly safe and it always does the right thing and shuts down properly when it detects a weird condition.

Compromising it would simply be a matter of changing a few bytes in the executable, or replacing the executable with another one.

This seems so obvious to me that I think you may have non-standard definitions of either safety or security.


> Compromising it would simply be a matter of changing a few bytes in the executable, or replacing the executable with another one.

The executable is part of the system that's supposed to be safe. That you have no means to modify it is an aspect of safety.

With your example, imagine that program would be running on an AVR with boot fuses burnt.


That AVR can still be manipulated. If your definition of safety includes preventing in-person attacks on the data storage, then you pretty much need armed guards.

If that's the standard, then no wonder "software safety is near non-existent".


Ah, there's the non-standard definition. Safety means that the system performs as designed while the design invariants hold. Security means someone malicious can't change the invariants.


Cite your standard.


How could it be any other way? There are probably some definitions out there, but what 'stavros said is pretty much what the words mean.


That's not what it is about. If someone calls you "non-standard", you challenge them to identify these standards. If you call me wrong, at least give it hands and feets.


> If you call me wrong, at least give it hands and feets.

    \|/      \|/
      \      /
    You're wrong!
      |      |
     ^^^    ^^^
Sorry, couldn't help myself. There's an obscure Polish joke it made me think of (punchline being, thankfully you didn't ask for it to "hold its shit together").


Nah, I'm OK.


If you know it better, enlighten us.


I comment for fun, and this thread has stopped being fun.


> while in railroad engineering and aviation (where human lives are at stake) you'd get your licenses and certificates revoked

Is that true? The safest train is the one which is stationary. Cut off the wheels, and drop the rail cars on secure foundations (carefully) and you just protected yourself from 90% of possible accidents. If you remove everything flammable and weld the doors shut you got rid of 9% more.

Do you see why this is silly? Because the main business of rails is to transport people and stuff. You won’t be railroad engineering long if you don’t keep that in mind.


> I'm pissed this is accepted as normal for the IT sector, while in railroad engineering and aviation (where human lives are at stake) you'd get your licenses and certificates revoked. Therac-25 something.

You're confusing a statement of the way the world *Is* with an endorsement of the way it *Ought* to be. Ask yourself this: WHY do you get your licenses and certificates revoked in railroads and aviation? Because without that threat of punishment, the railroad and aviation companies would choose to spend their money on other things that are their actual business, and not on the distractions that safety and security otherwise are. It takes the force of law to mandate that the companies do something that is contrary to their actual interests and goals. See also Boeing.


I couldn't disagree more. Defenders and attackers are alike in many ways. I disagree with the post as well.

Mature security teams, for example, use Bloodhound, which uses Neo4j to visualize attack paths in AD. Defenders (good ones) don't think in lists.

> "The defender’s job isn’t defense."

Yes, it is. Obviously!

> "It’s a side show, and a distraction from the main business of whatever else the defenders are trying to do"

I'm sorry, but what else are defenders trying to do that isn't defense? are all defenders completely incompetent then?

> "By contrast, an attacker’s entire job is to attack the system."

Yes, and there are people in mature security teams whose entire job is to search for and stop (not just react to alerts) attackers.

> "Attackers win for the same reason that Microsoft is better at publishing operating systems than Cisco, because Cisco's operating systems are a means to an end. Microsoft’s are the end"

I think you have an incorrect perception of what security teams do. It is both a matter of strategy and resources. There are security teams whose budget is in the 100's of millions of dollars and who employ some of the brightest cybersecurity strategists and professionals. You rarely (if ever) hear their names in relation to a breach or compromise. There are also much less capable security teams who do well against most attackers, but will inevitably get pwned by an APT, except the good defenders catch the apt's before they cause significant damage.

At well protected organizations, attackers lose 99.9% of the time (probably higher, I'm guessing here). Attackers simply need to win once to succeed, while defenders need to succeed 100% of the time.


There is also Fix Inventory, which is a graph-based security tool:

https://github.com/someengineering/fixinventory

I'm one of the people behind Fix Inventory. What scares a lot of developers away from graph-based tools is the graph query language. It has a steep learning curve, and unless you write queries every day, it's really cumbersome to learn.

We simplified that with our own search syntax that has all the benefits of the graph, but simplified a few concepts like graph traversal.


> Yes, it is. Obviously!

> I'm sorry, but what else are defenders trying to do that isn't defense? are all defenders completely incompetent then?

You've misunderstood me. Defenders aren't the "cyber security team employed by AT&T to keep customer data secure". The Defenders are AT&T, who would rather spend their cyber security budget on just about anything else that could actually generate a profit. The cyber security team that AT&T hires might have the sole job of building the most robust defense system imaginable, but even if they do, their efforts will be continuously stymied and reduced because true, complete, robust security will get in the way of actually doing the things that AT&T wants to do.

Or to put it another way, a company that spends all their money on perfect cyber security is as useful as the proverbial perfectly secure computer encased in concrete and buried a mile underground with no power or network connections.


But that's false equivalency. The attackers also work for organizations. It is a bit rare for individuals to hack companies these days. APTs are teams, sometimes they are employed by intelligence or military units of countries, other times they are employed by a criminal organization and yet other times they are loosely formed organizations between individuals with a financial or political goal, like hacktivists as an example. But they have hierarchy, motive, goals, even a work schedule and paid vacations and bonuses.

Even for individual hackers, there are individual good hackers (commonly called "whitehat" although I deride that term) doing bug bounties and finding CVEs.

The main differences between attackers and the attacked are intent, resources and which side you're on. The NSA and CIA are the good guys from my perspective, but they are the bad guys for defenders working in Russian or Chinese government cyber defense teams.


> But that's false equivalency. The attackers also work for organizations.

You've missed my point or I wasn't clear enough. It doesn't matter that they're part of a larger organization. That organization's goal is attacking, or at one step removed, selling/using the resources gained from attacking. Defenders are never in an organization whose business is the Defending.

Or let's use your CIA example, and for the sake of argument, let's pretend there are no other countries in the world other than the USA, Russia and China. In a world where there are no Russian or Chinese Attackers, the CIA would not spend money on defense against Russian and Chinese attackers. But in a world where there are no defenders in Russia and China, the CIA would still spend money on attacking and exfiltrating data from Russia and China. They would just be vastly more successful at it.

Or as a different analogy, mining companies mine because they want to sell the ore and gold in the mountains. But we still call them "mining companies" because that's their job. And they are often opposed by environmental groups working to defend the mountain. In a world where there were no mining companies, no one would be organizing an environmental group to defend the mountains because there's no gain to spending time and resources standing around and guarding mountains and ore that no one is trying to get access to. But in a world where there are no environmental groups, there would still be mining companies.


Defenders is referring to the entire org, not just the security team.


> I'm sorry, but what else are defenders trying to do that isn't defense?

This is an uncharitably narrow reading of the post to which you're replying, isn't it? Defenders are trying to ship. To make money to make payroll. Create profit centers, not cost centers.

You can say that security is a feature and a load-bearing one, and I'd agree with you, but not everyone who makes decisions will do the same.


You're wrong, defenders are not profit centers. You don't expect the security guard for your office building to generate profit, why would you do so for your digital assets? Defenders are like lawyers and HR, they are cost centers whose existence is justified because attackers also exist.

> "You can say that security is a feature and a load-bearing one, and I'd agree with you, but not everyone who makes decisions will do the same."

Maybe it is, but I wouldn't put it that way. Security teams exist because people with bad intent that want to harm you exist. Just like lawyers exist because people who sue you (including the government) exist.

Imagine stating "lawyers don't exist to protect from lawsuits", that's how it sounds to me. If defenders aren't there to defend, then their existence isn't justified.

> "Defenders are trying to ship"

Defenders are there so that when other teams who "ship" attempt to do so, the application, system, company, or wherever you have protected data doesn't get compromised. And this is before and after "shipping" or deployment. Security is a cost of business, whose RoI is measured by the fact that you are doing business without getting hacked, nothing more.


> You don't expect the security guard for your office building to generate profit, why would you do so for your digital assets?

Yes, that's why companies cut cost on security guards as much as they possibly can. From the product-making company standpoint, security is mostly a cost.


Yes it is mostly a cost. Breaches are also a cost. When the Home Depot security team tried to fix the issues that got them pwned, the execs said "we're not a security company, we sell hammers". Box-ticking mindsets like that are held by incompetent and short-sighted executives. The cost of security is decided by the cost of a potential compromise, it has nothing to do with profit margins. A lot of companies learn this lesson the hard way. Many "snakeoil" security companies exist because of this incompetent line of thinking by executives. It is easier to say you paid some company who made some b.s. claim than to actually fix problems, even if the 3rd party costs more than the cost of fixing problems.

In short, what you and OP commenter describe is incompetency, it should not be taken as the default, those are not defenders, those are mismanaged organizations. We're in 2024, every exec should know better.


> In short, what you and OP commenter describe is incompetency, it should not be taken as the default, those are not defenders, those are mismanaged organizations. We're in 2024, every exec should know better.

Everything in life is a trade off, and no-one is in the business of perfect cyber security defense. Therefore, businesses will *always* trade weaker cyber security defense for better/faster/cheaper/easier/more business in their actual line of business. Just like you do every single day. Do you have ALL traffic on your home network encrypted with mutual server and client certificate verification? Do you only have your 256 character passwords memorized in your head and not stored in a password manager anywhere or otherwise recorded somewhere? Are all of your home systems equipped with strict outbound firewall rules that only allow one-time, on-demand and confirmed communications with the wider internet? Have you hardened your home network against data exfiltration via DNS queries[1]? If you use 2FA for your accounts, and the objectively weaker password managers to store your passwords, are your 2FA tokens kept on completely separate devices from your password managers? Do you only allow direct console access to any of your systems and have no remote access like SSH enabled? Do you have every single computer backing up their data into multiple redundant copies, without using the network for data transfer and with at least one if not more of those copies stored off site?

If you answered "No" to any of those questions, you also have chosen the route of "incompetency" and "mismanagement". It's 2024, and every IT person should know better. But of course we do "know better" and choose the objectively weaker options anyway because the stronger options get in the way of actually doing the things we want to use our systems for. You don't choose perfect cyber security defense for your home network because you don't have a home network for the purpose of practicing perfect cyber security defense. So it is with businesses, they don't have their systems for the purpose of practicing perfect cyber security defense either.

[1]: https://www.akamai.com/blog/security/dns-the-easiest-way-to-...


> We're in 2024, every exec should know better.

"Should" doesn't mean much. People respond to incentives. Can you explain the incentive function that exists today in the real world to prioritize the security cost center above the profit center?

I mean, I work at a company that I'd say does a pretty good job of this--in a regulated industry and after getting burned a few times. But you can still go full-send with VP approval, and the risk becomes part of the cost of doing business.


The problem goes even deeper: execs chase short term profits and stock ticker bumps, that's the root cause in my opinion. You shouldn't prioritize security over the main business and profit, that was not my suggestion, but you should prioritize long term profits and reputation (ability to make even more profits in the long term), which is where security comes into play.

In other words, security is necessary for business. Just like how you would want your offices secured from burglars -- because otherwise you can't do business well -- you should want your digital assets secured from hackers, except unlike physical security, it isn't just local malicious actors and competitors after your business but intellectual property thieves, hacktivists, financially motivated cybergangs and more (not just nation state actors).

Failure to give proper priority and funding to cybersecurity, is failure to ensure conditions that make the company profitable and viable in the long term.


> security is necessary for business

It's not, though, that's the thing you aren't picking up. Managing risk to the tolerances necessary to make money is necessary for business. That's what's being done.

You say that it's about the long term, but within epsilon of nobody has gone out of business or even been seriously impacted by bad security posture. Experian gets wrecked on the regular, but it's not going out of business. Azure springs holes regularly enough that Corey Quinn has an ongoing schtick about it, but Microsoft isn't going out of business, either.

If you want security to be necessary for business, you need to make failing to operate securely a legitimate threat to an organization. Waiting for consumers to act collectively means you'll die of old age before seeing a twitch, so you're really talking about legislation. I would be in favor of this, to be clear--I think we as an industry are bad at cybersecurity, terrible even. But I'm describing what is, not what ought.


Companies go out of business because someone from China stole their intellectual property, that isn't uncommon. There are companies like RiskIQ and BitSight that rank your security posture, which other companies use to decide on giving you their business. If it is between your ransomwared company and the competition, you just lost a business advantage there. Azure and Microsoft are bad examples, as is Experian, they don't have much competition. I think the whole ransomware trend has skewed how people think about security. It isn't just outages like the ones caused by ransomware that are a concern, keeping secrets and confidential information from your competition is a big deal, as is the trust of your clients, that you will protect their information.

> Managing risk to the tolerances necessary to make money is necessary for business. That's what's being done.

I agree, but that isn't what is being done at most places. Every organization should spend as much on security as their risk tolerance demands. My problem is with spending as little as possible without getting into legal trouble.


> You're wrong, defenders are not profit centers. You don't expect the security guard for your office building to generate profit, why would you do so for your digital assets? defenders are like lawyers and hr, they are cost centers whose existence is justified because attackers also exist.

I didn't say that infosec was a profit center. But they're in tension with profit centers for attention and sway, and by the way--the profit centers are the ones who make money.

I've said it before, I'll say it again: People Respond To Incentives. Lawyers and HR are generally not respected except insofar as they protect companies from visible legal risk, and often not even then. Infosec is so vague as to appear as a tiger rock to people who aren't plugged into it.

> Defenders are there so that when other teams who "ship" attempt to do so, they don't get the application, system, company or wherever you have protected data doesn't get compromised.

Everyone, infosec included, is trying to ship. Shipping is how you make money, make payroll, and keep people employed. You only don't ship when your risk calculus indicates that the cost of not shipping is less than the cost of shipping.

This us-versus-them thing brings us back to "the most secure system in the world is in an unplugged box". But we don't operate businesses off of unplugged boxes. Risk management exists. If this is how you would argue risk management with the median exec I've known, you'd lose. I have skilled infosec friends who've had better success than this through wise process and product choices, though.


> Microsoft is better at publishing operating systems

Is that a joke? Microsoft seems to be in the advertising business. Their OSes are also a means to an end.


For cloud providers, security is their raison d’etre. On the cloud you can have confidentiality, integrity and availability that was unheard of two decades ago.


No, it's not. Cloud providers' raison d'etre is selling access to compute resources. That confidentiality and integrity are resources they can also sell does mean that they might spend more on those things than would otherwise be expected, but if their single and sole purpose was security, they wouldn't be selling networked access to computer systems at all because that is objectively less secure than physical terminal access. But no cloud company that requires their customers to staff a physical presence at a terminal in the data center is going to be as successful as one that allows you to remote into your cloud from anywhere, even though that is objectively less secure.


I feel like this does not go deep enough. :-)

"lists" is just short hand for components. "Graphs", shorthand for interoperation. The component view is analysis, the interaction view - well we don't have a really good word for that, and yet as the article points out, that is often the attack surface.

Complex adaptive systems (see John Holland's "Hidden Order") have components and a messaging bus which crucially provides a way for the constituent components to interoperate. You can swat ants individually, but if you want to stop them, you destroy the ability to leave pheromone trails.

Maybe there should be a word like "analysis" for understanding how things interoperate. Gestaltysis?


Whoa.

I briefly worked for a "cyber security" company and couldn't quite put my finger on why I ultimately hated the product and felt that the approach that they took -- and a large part of the industry -- was ultimately a sham.

I couldn't quite put it into words, but now I get it: we were building the tools to support the most useless of cybersecurity practices -- org-level checklists.


If you can't do checklists though, you can't do anything else.

All activities have lists and recurring calendar entries at their heart.

You have to regularly show up and do the things.

I agree that deeper / better approaches are required during the "do the things" steps.


Right, I think the industry is too self-critical at times.

I onboarded a new client recently, and within five minutes of guessing a Wi-Fi password I had sensitive financial data using stock tools. Anyone with physical access to the office could do the same. Contrast an existing client, after a month of trying and writing custom shellcode loaders, spear phishing campaigns, my entire team had... some graduate CVs.

Really, you're saying that because the NSA could probably do a better job with the latter than us by intercepting and hardware-hacking networking gear, no value has been provided?


That’s basically the entire cybersecurity industry. Companies buy these tools as a kind of blast door for when things go wrong, to point a finger at. “We followed all the checklists and our security software didn’t catch it, it’s not our fault.”

If companies cared about security they would hire red teams instead of paying for useless scanners with a <1% signal to noise ratio.


That’s not just cyber security, that’s part of the reason many service/support contracts tend to exist, the whole CYA finger pointing by externalizing the responsibility.

Ultimately you end up having to clean up the mess after either way but at least there’s a paper trail of responsibility passing to CYA.


Perhaps it depends upon why the checklists exist, how accurately they model what needs to be done, and how closely they are followed. Consider aviation: pilots live by checklists. In the world of aviation, following checklists doesn't guarantee that things won't go wrong during a flight, but not following them or treating them cavalierly is an invitation for disaster. Those checklists are built upon years and years of hard-earned experience.

Granted, however, having a checklist for no good reason other than to say that you have a checklist, or not regularly reviewing and updating the checklist based upon real-world conditions, is meaningless.


I'll be more nuanced. Doing operational security can enhance your security posture, but the real trick in cybersecurity isn’t just putting measures in place, it’s keeping them up over time. That’s where checklists come in.

A lot of companies get it wrong. They think the checklist is the security. But really, the checklist is just there to remind you that you did something right before and you need to keep it up. Treating the checklist like it’s the goal is where things go off track.


As a pentester, I would argue that attackers don't think in graphs either.

Apart from Bloodhound, I can't think of any tools where we have graphs.

For web security, I can't think of something where "graph thinking" applies. But we have a pretty huge list of attacks to test https://portswigger.net/web-security/all-topics.

And ultimately, what is inside your pentest report ? Not a graph, a list of things to do:

- SMB signing.

- Don't use the domain admin to manage every machine.

- ...

The main reason this phrase is so popular is that it panders to the hacker community: "We are the smart guys, all the defenders do is excel sheets."

IMHO, the nugget of truth in this is that defenders can spend considerable amounts of time on things that don't matter. Like doing CIS benchmark by hand on all servers. While missing the low-hanging fruits that would give them a strong security posture.

In a lot of companies, the defenders are just sysadmins that don't have any idea of what they should focus on.


I get your point but I think pentesters are perfectly capable of thinking in graphs, including web security. Bug chains are the immediate example, where a couple of CVSS 4-7 vulns can be turned into a full rce/whatever 9.8 equivalent. This bug chaining fundamentally occurs via elements of compromise i.e a graph traversal.

Bloodhound is great, and a nice visual tool for people to conceptualise attack graphs but it’s just a part of the process of understanding the target domain from an attackers perspective. No nice tool like bloodhound exists for web pentesting because a chain of compromise can’t simply be reduced into tool form there because a chain is often specific to the app and not an underlying framework, unlike AD where the security boundaries are well(ish) understood and codified.

Pentest reports include stuff like SMB signing and “don’t admin everything with your DA account” because they are glowing hot nodes very early in a chain of compromise, meaning that is often how stuff gets popped IRL. It’s (hopefully) not that the pentester doesn’t understand graph thinking, it’s just that the first node in the graph represents effectively complete compromise, so why traverse?
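The bug-chain idea above can be sketched as a tiny graph search. This is an illustrative toy, not a real tool: the states, findings, and edges below are all made up, and as noted, real chains tend to be specific to the app.

```python
from collections import deque

# Hypothetical compromise graph: nodes are attacker states, edges are
# individual findings (each perhaps only medium severity on its own).
edges = {
    "unauthenticated": [("ssrf", "internal network access")],
    "internal network access": [("leaky metadata endpoint", "cloud credentials")],
    "cloud credentials": [("over-privileged role", "production database")],
}

def find_chain(start, goal):
    """BFS for a sequence of findings leading from one state to another."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            return chain
        for finding, next_state in edges.get(state, []):
            if next_state not in seen:
                seen.add(next_state)
                queue.append((next_state, chain + [finding]))
    return None  # no path: the individual bugs don't chain

# Three medium findings compose into a critical-severity chain:
print(find_chain("unauthenticated", "production database"))
# → ['ssrf', 'leaky metadata endpoint', 'over-privileged role']
```

The point of the sketch is exactly the one above: each edge alone might score CVSS 4-7, but the path from "unauthenticated" to the crown jewels is what matters.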


This is just a fancy way of saying defenders have to defend every entrypoint vs attackers who only have to find a single point of weakness.


Well, as the aphorism goes, the best defense is a good offense. Of course Microsoft and Google have security projects which aim to disrupt organized hacking groups, but maybe they need to go further? Like using honey pots to back-feed zero-day exploits (maybe even ones that have been engineered into Google/Microsoft/et al.'s products) to attacker's machines? Go full black-hat (with some sort of plausible deniability) and ransomware the ransomwarers.

As I type this, I realize it sounds a bit like some of the evil all-powerful corporations in sci-fi, although they usually also go to the lengths of assassinating their enemies.


Wouldn’t any form of digital offence mainly be a waste of resources? The reasoning behind this is that the attacker has nothing to lose. What would you ransomware? Cheap hardware which likely isn’t even owned by the attacker?


"Hacking back" has gotten hackers into jail in the past.


> assassinating their enemies

Well, Google no longer has a mandate of “don’t be evil” and Microsoft never did…


Asymmetry of defense, if you will.


I've done both. I've been an incident responder and have experience with penetration testing and red teaming. I think that, while reductive, this is somewhat true, but not necessarily as negative as the article reads. Defense is made up of many things. For instance, developing effective controls to reduce the risk and impact of a security event, identifying attacks and compromise, and responding to the events. Lists of standards and responses work well. Defense also includes architectural decisions which require thinking about the graph of the network to develop these controls. There are lots of disciplines in defense too: architecture/engineering, risk management, incident response, application security, education, threat intelligence, and so on.

Also interesting that the author implies the problem is about thinking of defense in lists then provides a list of items to consider to improve defense.


Attackers win because they only need to succeed once after probing for weak points. Defenders have to guard everything at once.


Sounds like there need to be one or more honeypots in each network for catching intruders in the system. Fake crypto credentials, fake password storage etc.



A formal way to frame this insight is how a formal methods security researcher might: graph-based program verification.

A lot of core confidentiality and integrity security problems come down to 'safety property' verification (a notion from model checking), which in turn comes down to reachability on a program flow graph (a notion from program analysis). This is also true of access control verification, but that's a topic for another day.

Imagine a dataflow or points-to analysis on a program, and extend it all the way to include the code in your OS and the cloud and the database. These analyses create a graph, and the question is: can an attacker get from an entrypoint and precondition at some node A (a line of code) and traverse to the assets at node B (another line of code)?

Interestingly, the security field is increasingly getting there, with ideas like CNAPP, IAM/Cedar, AD/bloodhound, where we are getting these basic access graphs modulated by estate, identity, access policy, etc. Often we don't even really need the programs, because it's more about a distributed system where we can focus just on identities and policies across trust zones. (Eg, If a box gets hacked, that exposes other credentials on the same box.)
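A minimal sketch of that kind of access-graph reachability query (every node name here is invented for illustration):

```python
# Toy access graph: an edge A -> B means "a principal controlling A can
# reach or assume B" (network access, a credential on disk, an
# assume-role policy, ...).
access = {
    "public-web":  ["app-server"],
    "app-server":  ["db-readonly", "ci-runner"],
    "ci-runner":   ["deploy-role"],
    "deploy-role": ["prod-db"],
}

def reachable(start):
    """Transitive closure from a single entrypoint (iterative DFS)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(access.get(node, []))
    return seen

# The safety property "prod-db is not reachable from the internet"
# reduces to a reachability query; here it's violated via the CI runner:
assert "prod-db" in reachable("public-web")
```

Tools like Bloodhound and the CNAPP-style scanners mentioned above are, at their core, doing this traversal at scale, with richer per-edge policy checks layered on top.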

At the same time, anyone working in these things also knows graph reachability is simplistic 80's & 90's stuff: there can be complex logical policies at each node, so we're seeing things like modeling those harder points not just as pure reachability, but as things we can actually peek into and more richly verify, such as by modeling fancy ABAC policies using SMT solvers.

I don't think that's really where the author is coming from, but it's a reason the article resonated with me for so many years from a principled perspective, and I think it's incredibly practical and important today.

(Disclaimer: we do crazy GPU graph AI power tools for folks in the space at Graphistry / Louie.AI, and my first verification papers here were almost 20 years ago, so I've been thinking about this a lot.)


Defenders need to win every single time. Attackers only need win once. So attackers win.


Do you think this still applies?

My take as an attacker: it goes directly against the security 101 of "defence in depth". Sure, we only have to win once for a specific step, but then there are more steps to complete for us to reach our goal. This is the same for most occupations that I can think of anyway, no one reaches their goal with one step.

I understand that this can be taken to mean there are multiple avenues to achieve a certain objective (e.g. I can find a password on disk multiple ways), but I still wouldn't agree. Develop a defence that makes sense (e.g. MFA is a good mitigation for password theft). Detect / alert on the usage rather than the endless list of methods to retrieve a password.


It's still a "loss" from the defenders perspective even if someone can't compromise other systems. The defenders still need to assess the damage, fix the vulnerability, and verify that nothing was compromised regardless of what protections are in place.

For example, maybe the attacker is after trade secrets but compromises the CMS (content management system) of your public website. It has no connection to your intranet, but they were able to change download links and inject scripts for visitors of your website. Still a "win" as they now have a place to pivot from or just use to their liking. It gives the attacker options while your system is left weakened with fewer options.


I wonder if this could change if our governments' policy was to hunt would-be hackers to the ends of the earth.

(Note: I genuinely have no idea if that would be done outside of an authoritarian / autocratic regime. So I'm not remotely advocating it at this point.)


Hackers are global, though, and no matter what country you’re in, almost certainly the vast majority of hackers attempting to attack you are doing so from outside your country. Enforcing laws on a global scale is extremely difficult and almost impossible to do effectively.


You both have a point, so I would word it in a different way: if we devoted enough resources to track hackers down no matter where, could we lessen their impact? It would take a crazy amount of resources, like sending in infiltration teams into hostile countries. It is technically possible, but where is the balancing point?


What will you do when you find them? If you find one in, say, France, the French will deal with it and the attacks stop. However, most attacks are traced to North Korea or Russia, where they don't care, and so you can't do anything.


> I wonder if this could change if our governments' policy was to hunt would-be hackers to the ends of the earth.

"Security" has a cost. The only question is whether the cost of security exceeds the cost of lack of security. Currently, lack of security has very little cost.

It would be easier and more effective to start putting CEOs in jail for security breaches of personal information. Suddenly the executive suite would be very interested in security and would start spending an appropriate amount of money on it.


The balance might not change much:

- some hackers are state actors, and pointing the finger to North Korea won't help much

- some live in precarious conditions from the start, in areas where the government is unreliable. Even if you catch a bunch of them, it might not dissuade others from trying their luck if there's no other obvious jackpot for them.

- when the risk increases, the reward can also increase, as the barrier to entry is that much higher. You get a hacker scene with more high-profile, super-professional actors that will get more organized. Think "war on drugs" style underground actors building cell networks to manage their operations.


They already do something like this, but only for pirates (the computer kind). That's because Hollywood makes a credible story that illegal copying of movies loses the USA billions of dollars. It also helps that they pay millions to politicians. You don't pay millions to politicians to enhance the credibility of your story that personal data theft costs the USA billions.


> Bob admins the DC from a workstation. If that workstation is not protected as much as the domain controller, the DC can be compromised.

They both run Windows. The protection class between the two is identical. You can draw as many graphs and lists as you want, but the security of this arrangement is mostly down to timely and accurate Windows Updates.

> Learn to Spot List Thinking

I think in terms of "diffs." I want to know what is _changing_ on my network. I don't ever need an enumeration of things in any particular arrangement and as a human being, whether graph or list, I'm not equipped to use it in any meaningful way.

A difference list is typically very short, reveals intrusion patterns quickly, and is something you can automate easily.
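The diff idea can be sketched in a few lines of Python (the snapshot shape, a host mapped to its set of listening ports, is invented for illustration; a real inventory would carry far more detail):

```python
# Toy sketch: diff two network inventory snapshots to surface changes.
# Snapshot format (host -> set of listening ports) is invented for illustration.

def diff_snapshots(yesterday, today):
    """Return (new_hosts, removed_hosts, changed_ports_per_host)."""
    new_hosts = today.keys() - yesterday.keys()
    removed_hosts = yesterday.keys() - today.keys()
    # For hosts present in both snapshots, record (opened, closed) ports.
    changed = {
        host: (today[host] - yesterday[host], yesterday[host] - today[host])
        for host in today.keys() & yesterday.keys()
        if today[host] != yesterday[host]
    }
    return new_hosts, removed_hosts, changed

yesterday = {"web01": {80, 443}, "db01": {5432}}
today     = {"web01": {80, 443, 4444}, "db01": {5432}, "unknown-pc": {3389}}

new, removed, changed = diff_snapshots(yesterday, today)
# The unknown host and the new listening port on web01 are exactly the kind
# of short, reviewable "difference list" worth alerting on.
```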


You can do a lot more than just keep Windows up to date. Here are some other things you should do:

1) Use separate accounts and machines for administration tasks. Basically, IT people should never use their e-mail, web browsing, and development machine for system management tasks. They should use a separate machine with a different account for these tasks. The main reason is the system management machine has a much smaller attack surface. Ideally, it should only run a communications program (IRC, Slack, MS Teams, etc.), and the tools the person needs to do their job. Examples of tools people need include SSH, Remote Desktop, kubectl (Kubernetes tool to manage a cluster), etc. The machine should have a web browser BUT the web browser should be restricted to the bare minimum number of sites needed to administer the systems. Examples of acceptable sites include the Azure Portal, the AWS web site, web sites to configure equipment, etc. Examples of things which are not allowed include web mail, documentation, etc. The goal is to have the smallest possible attack surface.

2) People need to be mindful of what they install on their machines. I have seen a lot of people who will install anything which looks useful on their machine. This is a great way to get hacked. Here are the questions people need to ask before installing software.

- Why do I trust the people who wrote the software?

- Why was the software created? What was the motivation of the author?

- How does the author support himself or herself? What would happen if one of the authors was in financial distress?

- How will I know when the software needs to be updated?

- What are the capabilities of the software's authors? Do the authors understand how to write secure software? How do the authors handle security bug reports?

Basically, you ask these questions because you want to avoid malware, and you want to use software which fixes its security problems.

3) Consider putting system management machines on a different network or VPN. Basically, it's harder to hack a machine if you can't easily communicate with it.

4) System management machines should not be listening on any ports (or should have a firewall which blocks all connection attempts). It's harder to break into a machine when an attacker cannot connect to it.

5) Put the system management machines behind a firewall.

6) Consider what happens when an account is eventually hacked. Many people assume that only dumb or incompetent people get hacked. Unfortunately, this is not true. Systems should be designed to assume accounts will get hacked. The best designs limit the blast radius (damage) of account attacks.

7) All users should have limited privileges. Basically, they should get only the privileges they need to do their jobs. Very few people should be allowed to have the equivalent of root/Administrator for the entire system (obviously, people may have these permissions for some machines, but almost no account should be able to do anything on any system the organization runs).


What I meant is even doing all this you're effectively a single 0-day away from total compromise and are now dependent on specific workarounds or on getting the update that patches the vulnerability.

Because of this I can't consider the security of the AD as any better or worse than the desktop that connects to it and it's pointless to pretend that you can even have this.

We did have a crypto locker that spread on our network this way between our AD machines. It was launched from an email in the sales department, but due to an unpatched RDP credential attack, it quickly got onto the ADs then spread across the entire WAN.

I'm not saying don't do the things you're suggesting, but you should prepare for the scenario where none of it matters. So one thing you missed, which we now have, is: WAN KILL SWITCH.


I dunno about that; it sounds like a PC in your sales department could connect to the RDP port on the AD, and the listed controls would have prevented that: the RDP port on the DC should be firewalled down to the machines administrators use to connect to it, and per the comment those machines should _not_ have access to email, just in case the compromise you're describing actually started with an email from sales to a system administrator.


Defenders also think in graphs. Matter of fact, good defenders think like attackers.

Case in point, to contradict the author of this post directly:

https://github.com/BloodHoundAD/BloodHound

BloodHound is primarily a defender tool that uses graph theory to help defenders find attack paths. But attackers also use it to help them find the shortest path to owning an AD domain. BloodHound is used by a lot of threat actors as part of those news stories where the entire company is ransomwared. But what you don't see is that, in a lot of companies that don't get totally ransomwared, there is a chance defenders are also using BloodHound to find and fix attack paths.
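The underlying technique is ordinary graph search. As a toy sketch (the node names and edge labels below are invented and only loosely mirror BloodHound's vocabulary; this is not its actual data model or API), a breadth-first search finds the shortest chain of relationships from a foothold to a high-value group:

```python
from collections import deque

# Hypothetical AD relationship edges: node -> [(relationship, target), ...].
# The graph itself is invented for illustration.
edges = {
    "user:alice":     [("MemberOf", "group:helpdesk")],
    "group:helpdesk": [("AdminTo", "host:ws07")],
    "host:ws07":      [("HasSession", "user:dc_admin")],
    "user:dc_admin":  [("MemberOf", "group:domain_admins")],
    "user:bob":       [("MemberOf", "group:sales")],
}

def shortest_attack_path(start, target):
    """BFS: shortest chain of nodes from start to target, or None."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for _rel, nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

path = shortest_attack_path("user:alice", "group:domain_admins")
```

The same query works in both directions: an attacker asks "what can I reach from here?", while a defender enumerates and cuts every path that terminates at the high-value node.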


This is true. A good responder understands the attack and the methods used to perform it. The best can visualize the event through the eyes of the attacker and use this insight to contain the event quickly.


Judging by the comments, this article seems true in the abstract: "You are only as strong as your weakest link" can be formalized better as "You are only strong if there is no path over your assets for an attacker to carry out their objective." The big problem with "checklist-based security" is that it is agnostic to the underlying infra and ignores whatever issues a graph-based approach might reveal.

I'll add another item that defenders use for graphs: SBOMs. You can map out component relationships with them and, if there's an issue with (for example) openssl, determine which end applications are affected.
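As a sketch of that query (component names invented; real SBOM formats such as SPDX or CycloneDX carry versions, hashes, and much more), finding the affected applications is a reverse reachability walk over the dependency graph:

```python
# Toy SBOM dependency graph: component -> list of direct dependencies.
# Names are invented for illustration.
depends_on = {
    "webapp":    ["libcurl", "json-lib"],
    "batch-job": ["json-lib"],
    "libcurl":   ["openssl"],
    "json-lib":  [],
}

def affected_by(component):
    """Everything that transitively depends on `component`."""
    # Invert the edges, then walk outward from the vulnerable component.
    reverse = {}
    for pkg, deps in depends_on.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(pkg)
    affected, stack = set(), [component]
    while stack:
        for parent in reverse.get(stack.pop(), []):
            if parent not in affected:
                affected.add(parent)
                stack.append(parent)
    return affected
```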


Discussed (a bit) at the time:

Defenders think in lists. Attackers think in graphs - https://news.ycombinator.com/item?id=9442565 - April 2015 (7 comments)


Attackers win so easily because corporate security teams play predominantly

1) regulatory games

2) compliance games

3) CV optimization games

4) political games and finally

5) actual security work games within their organization


This is exactly why (good) defenders work by threat modeling using different perspectives and representations: one of them being attack graphs. But yes, a lot of the mandated compliance and governance stuff is just checking lists, which is why it does not work.


Compliance requirements are the table stakes you should be able to do in your sleep, so that you spend most of your time decomposing risks (attack graphs or not). It's a mistake to dismiss compliance (or CIS, in another poster's comment) as useless; they are the basics, and the fact that so many can't deliver the basics is a huge issue.


> Philosophy has an affinity with despotism, due to its predilection for Platonic-fascist top-down solutions that always screw up viciously. Schizoanalysis works differently. It avoids Ideas, and sticks to diagrams: networking software for accessing bodies without organs. BWOs, machinic singularities, or tractor fields emerge through the combination of parts with (rather than into) their whole; arranging composite individuations in a virtual/ actual circuit. They are additive rather than substitutive, and immanent rather than transcendent: executed by functional complexes of currents, switches, and loops, caught in scaling reverberations, and fleeing through intercommunications, from the level of the integrated planetary system to that of atomic assemblages. Multiplicities captured by singularities interconnect as desiring-machines; dissipating entropy by dissociating flows, and recycling their machinism as self-assembling chronogenic circuitry.


Amongst the linguistic hand waving is information theory. But who am I to judge someone else's word-a-day calendar?


Isn't a defense strategy based on a graph an O(n!) problem, and thus unrealistic? Perhaps it's not quite that bad, but it has to be somewhere in computationally-infeasible territory.


What?

Those visualizations of network graphs enhanced by segmentation/clustering data are at least a decade old. As is studying how attackers traverse them.

Here’s something I find to be true:

Defenders think in cheap clichés, attackers think like professionals — so attackers win.


The post is almost a decade old too, but I don't think it's any less relevant now.


Fantastic analogies useful for cybersecurity here.


I know practically nothing of the industry this post is referring to, except my passion for continuing my education until I can legitimately become a part of it. Computers and the cybersecurity industry are passions of mine. I'm appalled at how badly threat actors are ruining things, and I just feel like it takes a hacker to catch a hacker. I've dabbled but never become serious enough to get the required education. Frankly, to my way of thinking that involves (aside from all the playing around with what one can break) learning a programming language. So I'm slogging through C.

But honestly, I believe it is the constraints of societal perceptions that most hinder the security industry from doing better. They tie their own hands by playing "good guy" within the framework of certain rules they themselves formulated out of overcaution, while the "bad guys" run rampant, operating effectively without constraints.

Most likely, despite all the effort and knowledge I will have eventually accumulated as the criteria for being qualified, no person or company will ever hire me. I spent a number of years in several states incarcerated in the penal system for crimes entirely unrelated to this industry, which nevertheless renders me unfit, as far as any firm is concerned, to work in the cybersecurity industry. Even though an education coupled with a map of the criminal mind gained from my experiences would be a (nearly..lol) unbeatable combination, the fear that abides in the heart of the defenders will be the reason they always stay one or more steps behind. The idea that reform is not only possible, but that SOME of us so abhor the way we once were that we are now staunch bulldogs on the side of morality, is an idea entirely unknown in this country currently. And that's a shame. Because last I checked, it was understood the best way to catch a thief...


I prefer systems inspired by ecology:

1. Network connectivity: Crickets

2. Cluster resources: Bees

3. Queues/pipelines: Ants

Consider that the population of ants on Earth is 20*10^15: one could spend the rest of one's life stepping on individuals, but the futile behavior remains meaningless to self-repairing ecosystems.

https://www.youtube.com/watch?v=ksZTYRqr444 (Jimi Hendrix, "Castles Made of Sand" )


Attackers aren't hampered by organizational imperatives. They are free to find targets of opportunity and move between them as it suits them.

Defenders usually have to justify their work to management and balance "real" defense work with things that reduce liability. This ends up being a prioritized list.

I blame JIRA for giving the attackers an advantage.



