
One thing I've never understood is why blue teams don't get as much funding. Cyber defense is much harder than cyber offense. I know there is a lot you can do by tracking citizens and a lot of information you can get that way, but if you're not blue-teaming your own country, an adversarial country can use exactly the same tools you're so excited about using on your adversaries. I feel like the red teams get all the money while the blue teams get pushed off to the side. I do want to keep red teams, but I want to see the NSA also doing bug bounties, increasing security in Android and iOS, strengthening the internet, etc. Why is this not happening? And why aren't we outraged about it?



Blue teams do get lots of funding (edit: I am speaking in general, not about government spending). It is just that their strategies are so unbelievably bad that no amount of money can produce an adequate system.

Blue teams with a $1 Billion/year budget cannot prevent total compromise by red teams with a $1 Million/year budget. If you must outspend your attackers by 1000x, you are doomed.

For instance, in 2015 Microsoft committed to spending $1 Billion/year on security research and development to secure their cloud, the second largest cloud in the world [1]. What is the result of such spending? A little over a month ago, the default management agent they ship for managing Linux on Azure had a security defect that allowed local privilege escalation by sending an empty password [2]. Their processes are so bad that despite spending $1 Billion/year they cannot detect and prevent themselves from releasing security-101 defects in default installs of widely deployed products. This is indicative of a grossly inadequate process, in much the same way that a car factory delivering cars with no brake lines would indicate that the factory and its manufacturing process need to be completely redesigned from the ground up and the entire team overseeing it replaced.
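To make the class of defect concrete, here is a minimal, hypothetical Python sketch of the "empty credential" bug class. This is emphatically not the actual Azure agent code, just the shape of the mistake:

    # Hypothetical sketch of the "empty credential" bug class; NOT the
    # actual Azure agent code, just the shape of the mistake.
    import hashlib
    import hmac

    def check_password(supplied: str, stored_sha256: str) -> bool:
        if not supplied:
            # BUG: an absent/empty credential skips verification and succeeds
            return True
        digest = hashlib.sha256(supplied.encode()).hexdigest()
        return hmac.compare_digest(digest, stored_sha256)

    stored = hashlib.sha256(b"hunter2").hexdigest()
    assert check_password("", stored)            # empty password accepted!
    assert not check_password("guess", stored)   # real checks still run

Sending nothing, or an empty string, is among the first probes any red team tries; a process that ships this in a default install is not exercising even the most basic abuse cases.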

The outrageous part is not that security is not being funded, it is that organizations and systems displaying such fundamental errors continue to get vast sums of money poured into them.

[1] https://blogs.microsoft.com/blog/2015/11/17/enterprise-secur...

[2] https://www.wiz.io/blog/secret-agent-exposes-azure-customers...


The real problem is that security needs to be inherent to how developers and managers work. Instead, security is often a bolt-on, an afterthought, or put off until "thing $x is done".

One example: many popular frameworks. How do you audit every single piece of code brought in by, say, Laravel? And how do you do it if developers want to be able to reuse code?

Answer? You cannot. At all. You can't even reliably handle license compliance.

Yet we use such frameworks, because security is not first, or sometimes not even last. It's not part of the process; it's a thing to think about when a dev or a department has free time.

Many companies have a security team, an audit team. What?! You don't get secure by having people look at security after development and then spend time fighting over fiscal concerns to get a code rewrite.

I think none of this will ever be fixed until the CTO position becomes like the CFO position: mandatory requirements, jail time for CTOs if they breach certain regulations, and the authority for a CTO to tell everyone from the board to the CEO, "no, thing X will be done".

Yet no one wants that, because of cost, and a desire to get to market first.


This is only true in our existing software ecosystem, where no one bothers to prove any of their assumptions hold (if they even realize they're making assumptions). In a software ecosystem designed with provable correctness in mind, open source software would be at least as viable as it is now, while being provably correct with respect to high level specifications which could much more feasibly be audited.
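As a toy illustration of what "provably correct with respect to a specification" means, here is a trivial property machine-checked in Lean 4 (assuming the standard library's simp lemmas; real specifications are vastly larger, but they are audited the same way):

    -- Toy Lean 4 example: the spec "reversal preserves length" is
    -- machine-checked against the definitions, not merely tested.
    theorem reverse_preserves_length (l : List Nat) :
        l.reverse.length = l.length := by
      simp

The point is that an auditor reads the one-line statement, not the implementation behind it.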


I'm in so much agreement, though I would disagree with the last line. I've seen people wave away security concerns because they don't want to be bothered.

Developers waste plenty of time (we're all on here chatting away, for one!), but asking someone to even think about security seems to offend them in the way that asking a teenager to clean their bedroom would.


As long as the solution demands radically slower development there will be problems. Do you really believe that a team of 5 overworked devs will re-implement all the functionality pulled in by Laravel, plus their app, without introducing yet more vulnerabilities?


The point of a framework was, historically, a project that implemented secure, shared, many-eyes-on code. Proper code reuse.

Now, with 20k node packages pulled in as deps for a couple of 10-line libraries, or with 100s of phars as deps, often for 10-line composer-sourced libraries, you often have almost no eyes on that reused code.

Devs just indiscriminately pull stuff in, and don't care if a package is unmaintained or was written by someone for fun 5 years ago and abandoned.

This model isn't sane code reuse.

At least with code written by an on site dev, one can have code reviews, sign offs, etc.

None of this even touches on things like hostiles taking over packages, either.

I'll put this another way. Do you audit every single non-core package installed from random sites, at each update, and all its dependencies?

Or, do you even audit each package, all deps, to make sure the project is active, and seems to have a competent admin?

Because code-reuse logic depends on lots of eyes, and abandoned or low-use stuff doesn't jibe with that.
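To put a floor under what "audit" even means here, a rough sketch of the weakest possible version (project activity, not code review), assuming npm's lockfile v2 layout and the public registry's "time"/"modified" metadata field:

    # Rough sketch, not a real audit tool: flag npm deps that look
    # abandoned. Assumes a lockfile-v2 package-lock.json and the public
    # registry's "time" -> "modified" field; needs network access.
    import json
    import urllib.request
    from datetime import datetime, timezone

    def last_modified(pkg: str) -> datetime:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{pkg}") as r:
            meta = json.load(r)
        stamp = meta["time"]["modified"].replace("Z", "+00:00")
        return datetime.fromisoformat(stamp)

    with open("package-lock.json") as f:
        lock = json.load(f)

    now = datetime.now(timezone.utc)
    for key in lock.get("packages", {}):    # keys look like "node_modules/<name>"
        name = key.rpartition("node_modules/")[2]
        if not name:
            continue                        # the root project has an empty key
        age = (now - last_modified(name)).days
        if age > 2 * 365:
            print(f"{name}: last touched {age} days ago")

Most teams never run even this, and actually reading the code at every update is orders of magnitude more work.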


Which is harder: sneaking across the US border anywhere or preventing anyone from sneaking across the US border everywhere?

Sure seems like a 1000:1 problem to me.


It has been within our technical capabilities to totally seal off the land border since WWII. What is lacking is the will to do so. There are many reasons we let the borders leak, that aren't too hard to figure out.

Seismic, infrared, radar, and other monitoring systems could be deployed, and ALL land crossings would be known soon enough to stop anyone crossing. Of course, a shoot-on-sight order is the part nobody has the stomach for, nor should they; we're not at war.

Naval patrols could greatly curtail routing around the ends.

Capability-based security could greatly curtail the leverage you get from access to a given computer. The current default-access systems we're using everywhere are about as effective as building a fortress out of crates of C-4 explosive.


The Berlin wall was orders of magnitude shorter, had a much smaller population of people trying to cross it, guards with shooting orders, several decades of progressive technological improvements, pervasive surveillance of the people trying to cross it before the act. People still managed to cross it.


More illegal immigrants cross into California in a single night than crossed the Inner German Border in its entire history.

That said, I don't think any sane person would favor using things like SM-70 directional mines to secure the American border. We can make a dent in illegal labor and, more importantly, get some idea of who these people are without resorting to the tactics of the DDR.


I don't object to anything you've said, I was just replying to the absoluteness of the statement made by GP. Making a land border of that size absolutely safe against sufficiently motivated individuals is harder than they seem to think.


This is such a ridiculous comment. It has never been within the technical capabilities to seal off any land border in the history of humanity, because any controls you can put in place, including all of the technology that currently exists that you could throw at it, are still designed by humans, can be defeated without any collaboration on the part of the operators of those defenses, and are, immediately upon deployment, subject to attack by your adversaries.

Assuming you are referring to the United States: even if you deployed all of those sensors at all crossings, it would not be sufficient to deter crossings; those only provide detective capabilities. Yes, you could radically increase the number of people guarding the border, but that would require a radical investment in enforcement, and such an investment would bankrupt the country for no real practical effect. Even if you did fund and build it, consider the maintenance costs of preserving the effectiveness of such a land border along the northern border with Canada: that means protecting a nearly 9,000 km border that, about 200 km from the West Coast, turns into extremely inhospitable, even hostile, terrain and climate for a minimum of three to four months out of the year. This terrain is absolutely awful to stalk and patrol in, but is a pleasure to sneak through (at least I really enjoyed it back in my hunting and military training days :D)

It is not feasible to scale the navy to the point where it would be practical to prevent access to the continental United States, for example, let alone the island territories. Attempting to do so would bankrupt the nation. Hell, modern technology can't even reliably prevent folks from bobbing over from Cuba despite the relatively small attack surface there.

Even if you managed to invest more than the current percentage of GDP (which is already ridiculous), you would also have to confront an increasingly hostile domestic population that is used to freedom of movement, is already paying punishingly high taxes relative to the value and benefit received, and is facing a rapidly diminishing quality of life.

Good luck with that!

Oh, and one last note about capability-based security - it works great in lab and somewhat small environments, but I would appreciate a practical explanation of how you would scale up capability-based security to an environment operating in 87 distinct countries (with disparate regulatory requirements that preclude centralized management), 3,600 offices, and 800,000 employees, with approximately one third of those employees having employer-managed devices and notebooks, and managing a total of 2,300 distinct software applications (granted, it's been about 14 years since I worked in that environment, but that company has grown substantially since then).

I'm listening for a real, practical suggestion here, not being facetious.


The thing about the prevailing commercial IT methodologies is that they cannot deliver at the scale you are describing while also providing security within a factor of 10x of adequate. "Adequate" here means they would be happy to put the numerical cost to completely compromise their systems into their shareholder communications and consumer advertising, and happy to prove it by announcing a bounty for that amount at DEFCON and challenging anybody to claim it. There is no person with any technical competence and signing authority at any organization of that scale who would sign off on such an idea, as they all know exactly how preposterous such a claim would be. The only thing that can be said about these systems is that they are functional at scale while being completely inadequate with respect to security, which, if security is not important (as is the case for most organizations), is a perfectly valid tradeoff.

However, if adequate security is actually important, such as when lives are at stake, these methodologies completely fail to fulfill such requirements. This is not merely the case when tackling the hard problem of large-scale systems, where it might be forgiven to be unable to solve the hardest problem available; it is the case at every scale. At least capability-based systems have demonstrated adequate security at a usable scale; the prevailing techniques cannot even do that. There is little reason to believe that abject failure at scale, plus an inability to solve any interesting sub-problem or smaller-scale problem, is a better path to success at scale than attempting to scale small successes.


I would like to challenge the notion that the executives at businesses at this scale are unwilling to communicate a numerical cost.

I can't say which organization it was, or when, because I maintain enough of a public profile that it could leak specific information, but while doing a security consulting gig for a major global financial organization between 2001 and 2011, the team I was working with identified numerous serious concerns. The client company agreed that they were real risks, and even likely risks, but the financial impact, even if those risks were realized multiple times, was far below their documented thresholds for risk tolerance. In other words, the individual risks would have had to exceed $10M in impact per year, for multiple years, before the financial impact of those issues would justify the massive cost of investment to remediate them. It's not that they didn't take the risks seriously, but the cost of the new system in development, which included addressing those risks, was so high that starting separate remediation efforts, which would detract from those in-progress changes, introduced a risk of the main replacement project failing. It was easier, lower risk, and lower cost to just accept the impact of fraud, including compensating victims, than to try to fix it.

Whether or not those "self-insuring" risk tolerance figures are public is a different matter.

I do agree on the value and efficacy of capability-based systems; however, the cost and effort to deploy and manage them, even in life-critical environments, is so high that they are only practically effective in centrally planned environments where the funding for systems security is strongly decoupled from how that funding is acquired. In other words, capability-based systems are, and will only be, effective at scale in environments such as government (and even then, only military) or publicly funded, single-payer, centrally managed health care systems (which don't really exist - most publicly funded healthcare in the world is centrally and publicly funded, but delivered by private service providers who bill the public funder).

It's an effective model where the cost of reliably deploying and managing capability based security can be externalized from the line of business that requires that level of security.


>Oh, and one last note about capability-based security - it works great in lab and somewhat small environments, but I would appreciate a practical explanation of how you would scale up capability-based security to an environment operating in 87 distinct countries ...

Most of the advantage of having a capability system is that it allows users to more transparently control which resources are given to a program at runtime. Instead of trusting the application to go pick a file and do something with it, the OS relies on a PowerBox UI to let the user pick files directly.

The closest analogy is that of a wallet (or purse) with currency in it. You choose directly which money you wish to use in a transaction, and that's the most you can lose if you make a mistake. There's no equivalent in Linux, Mac, Windows, etc... you're forced into a situation where you hand your whole wallet to the code and hope for the best.

Users aren't the great weakness we've grown to think of them as; they just have insanely crippled tools.
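A minimal sketch of that wallet idea in Python terms (hypothetical design, not a real PowerBox API): the untrusted routine receives one open handle - the capability - and nothing else.

    # Minimal capability-style sketch (hypothetical; not a real PowerBox
    # API). The untrusted routine is handed one open stream, not a path
    # and not filesystem access. (Python itself doesn't enforce this
    # discipline; a capability OS or language would.)
    import io

    def word_count(doc: io.TextIOBase) -> int:
        # Holds a single capability: this stream. It has no way to name,
        # open, or delete any other file.
        return len(doc.read().split())

    # The trusted shell - standing in for the user's file-picker dialog -
    # decides which single file to delegate.
    with open("notes.txt", "w", encoding="utf-8") as f:
        f.write("hello capability world")
    with open("notes.txt", encoding="utf-8") as picked:
        print(word_count(picked))  # -> 3

The wallet analogy maps directly: the picked file is the one bill you chose to hand over, and it is the most the code can touch.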


The German Democratic Republic wants to disagree on that!


In Berlin alone, which was by far the most heavily fortified part of the "Iron Curtain," more than 5,000 people successfully fled across the border despite the "Berlin Wall."

https://www.chronik-der-mauer.de/fluchten/


More people cross into a single California Border Patrol district in a single night.

That said, I don't envision shoot on sight orders or the use of SM-70 directional mines on the American border. What I find particularly interesting is the complete reversal of positions from years ago. CSPAN has a video of Dianne Feinstein talking about border security with AG Janet Reno, and you could swear they were talking points cribbed from a Trump strategy session.

Increased immigration tends to lower wages, which favors employers. OTOH, protectionist immigration policies tend to preserve the status quo, especially for low-skilled labor, whom you would think the DNC is pledged to protect. It's just interesting to see how political fashions have switched. Is it the fear of terrorists crossing the border, or...?


> What I find particularly interesting is the complete reversal of positions from years ago.

It hasn't reversed, though the right-wing bipartisan consensus on the issue that existed in the early 1990s is gone.

> Increased immigration tends to lower wages which favors employers.

Immigration mostly increases and decreases with economic conditions (at home and abroad). Immigration policy mostly influences the proportion of immigrants with legal status. More immigrants with legal status means immigrants are less afraid to assert things like labor rights, whereas more without legal status means more fear of exactly that.

> It’s just interesting to see how political fashions have switched.

They haven’t switched, at least not in well over a generation. Republicans have been for narrower legal immigration policies for decades (the backward-reaching amnesty under Reagan was a component of tighter forward policies). There was a brief period of general bipartisan consensus on the direction of change (though still distance on the details), but no reversal.


This has literally nothing to do with labor rights. Increase the supply of something and the cost goes down. Illegal labor gets here faster than legal labor and in uncontrollable numbers. You're right that you have to pay legal immigrants more, but given the ready supply of illegal immigrants it's not hard for an employer to look the other way.

If one wanted to control wages, one would want to ensure the unskilled labor supply was somewhat limited. Otherwise an employer can drive wages down to the minimum rather easily.

One might do that by better securing the borders. I'm purposely leaving out the antiterrorist wish of at least getting some idea of who's coming in and out of Disneyland so to speak.


People crossed the German border. People cross the Korean border, too.


Are you trying to hold up as an example the autocratic client state that failed to stop 50,000 people from escaping over its 45-year duration? The one that was facing the prospect of economic ruin when its controlling state had to withdraw economic and military support due to its own policies of perestroika, which sought to address the impending economic collapse of Soviet Russia?

The fascist client state that collapsed due to an increasingly hostile domestic population who were angered by the growing economic challenges, and were tired of the continued state violence?

The same German Democratic Republic that has been consigned to the historical scrap heap of failed totalitarian regimes?

One might even consider holding North Korea up as a current example, since they have outlasted the GDR by 31 years so far, but they are also a client state, and their economy has been in relatively steady decline. The only thing keeping North Korea a functioning country is military and economic support from China, with even Russia's central bank continuing to push back against further economic development with North Korea.


Bit of an aside, but it's only a "1000:1" problem because

- The response can't have a blast radius

- Human software systems can't be formally verified in a tractable amount of time

- etc.

I'd go as far as to wager that if it weren't for MAD, the US might be sending SEAL or Special Forces teams in to deal with hackers.

When we have an AGI, perhaps it will develop systems impervious to intrusion. Or maybe it'll take the simpler approach and eradicate all humans and other capable AI actors.

This is expensive because it's guerrilla warfare.


At one point we came relatively close to using a nuke to deal with a hacking attempt, though taking down NORAD is going to get people nervous. Not everyone can respond to hackers by building an independent global network, but it is one model.


> At one point we came relatively close to using a nuke to deal with a hacking attempt

Do you have a source on this? Sounds like an interesting read


Unfortunately it was a firsthand account. Solar Sunrise, in 1998, was the event, but many of the details aren't exactly public. The critical bits were that it was being routed through a compromised machine in Baghdad, many DoD systems were running email alongside more mission-critical applications, and the timing was the lead-up to the first Iraq war.


"At one point we came relatively close to using a nuke to deal with a hacking attempt"

That sounds like the wet dream of a genocidal maniac. There is no way this claim is credible.


Read up on Solar Sunrise. An infosec teacher of mine was involved on the Army side and said it was seriously being considered, because of the timing and because email was being run on some extremely mission-critical systems. This was the lead-up to the first Iraq war, and things were being routed through a machine in Baghdad. So the possibility that this was a military attack was being seriously considered.

And no, I don't think it was likely, but there are only so many times in history that senior military people have referred to nuking something in a non-joking manner.


Having just read a couple of articles about it, I see no reference to a nuclear strike anywhere. Do you understand that the proposition is to commit genocide at a minimum, and maybe start WW3?

If you wanted to use military force, you would create a power blackout in the target area by destroying the closest substation or power plant.

"In the end, two California High School students were arrested and pled guilty. Their mentor, an 18 year-old Israeli, was also arrested"

Surely no one is mad enough to nuke their own country? Or would you nuke the wrong country instead?


Articles are written after the conclusion was already reached. In the moment, you don't know who's behind the attack, just that you can't tell if ICBMs are incoming to the US and the attack seems to be coming from a country we are literally preparing to go to war with.

It's in that moment that the options are to successfully hack the machine, do nothing and hope no physical attack is incoming, try to penetrate serious air defenses with conventional aircraft, or use an ICBM. I don't mean to suggest people were requesting authorization from the president for a nuclear strike, just that it was considered, which is a serious escalation for a cyber attack.

As to the articles, you will note Iraq is mentioned many times by military officials even though the attacks originated from teens in the US and Israel. Some of that's timing, but the other part is, as I said, that the machine happened to be located in Baghdad. Also, the press briefing before they arrested the teens shows just how seriously this was being taken. It really wasn't business as usual.


"you can’t tell if ICBM’s are incoming to the US"

Why can't we tell? The US has a worldwide network of early warning radars and satellites specifically built for this purpose during the Cold War. Various allied countries have their own networks too, and would warn the US.

Iraq never had ICBMs, and no one ever claimed that they did.

I am not seeing a plausible scenario of 'unnoticed incoming ICBM' on US.

"try and penetrate serious air defenses with conventional aircraft, or use an ICBM"

The US had bombed Iraq (and many other countries) multiple times by then with minimal losses. I am struggling to see how nuclear holocaust was an appealing option.

Incredible things do sometimes happen, so I would be interested if there is an article explaining the events or this line of thinking.


> ‘unnoticed incoming ICBM’ on US.

Lack of ICBMs was assumed, but in the moment, when your early warning system goes down while you're preparing for an invasion, it's hard to stay rational.

The plausible scenario as to why the warning system went down: ARPANET was designed to keep military computers in contact through a nuclear attack. ARPANET became the internet, but this was before SIPRNet, so it was still being used to keep those critical systems in contact. Hackers compromised those systems because the military was using the same sendmail software as everyone else.

Now, for us looking backwards, it's like: wait, what about computer security? But computer security largely didn't exist back then.


DSP (the Defense Support Program early-warning satellites) doesn't depend on any external networks. There are mobile receiver terminals that can receive their transmissions and relay warnings to combatant commanders. There's actually a funny little National Guard post that houses those doomsday trucks. Another interesting garage is at F.E. Warren (mobile STRATCOM survivable NC3).


Assuming they didn't depend on ground stations for anything in 1998, that might poke a hole in the story. On the other hand, you can't exactly use one of those inside NORAD's bunker, so they could have been blind until someone set up an independent relay.


The whole early warning system was not down. Long-range radars are independent, not connected to the internet, and in the worst case they could use a phone and smoke signals. There was no credible nuclear threat.

Given that you can't refer anyone to anything written, this whole story defies logic and smells of a dead possum.


Could be, these systems should have been completely isolated on DSNET 3.


Yes. Each side has to be right a different number of times:

Blue team: every time

Red team: once


Yeah, but the bad guys need to be perfect all the time to avoid law enforcement and going to jail. Unless you are state-backed, that is.


I think the point is that there might be a bunch of independent bad guys, but there is only one "you". You're not allowed to make a mistake, ever. Ten bad guys could get caught, but if the 11th breaks in, you lose. They only need to get lucky once.


For sure, any red team needs a blue game of its own, if only against other red teams. Red team is still advantaged in this regard by blue team's relatively lit-up profile.


Scope of disaster is important. Keys-to-the-kingdom-style fails are always bad.


What stops someone from designing a blue team technique that is just red team techniques applied to your own product prior to release? I suspect MS does exactly that, but red team productivity varies.


This is known as "purple teaming". You have security team segments actively trying to attack your own systems, using both established tooling/techniques, but also developing bespoke attacks that are specific to your systems.

Then, and this is crucial, they not only teach the blue team from their findings - they also rotate out to blue teams, to become the defenders themselves. At the same time, some of the blue team rotates in. Rinse and repeat. The whole point is that you have to understand both sides properly, and continuously work with the teams involved. Otherwise you're nothing more than a consultant.


Nothing stops that; it is one of the most routine things you could do. NCC Group exists to provide this service. HackerOne exists to provide this service. Having an external team periodically attempt to penetrate your defenses is legally required for anyone who processes payment card information (in the US; I don't know what PCI requirements are like elsewhere).


So what do we do? Just admit that it's all doomed forever? Perhaps the only question left at that point would be exactly what even needs to be blue- and red-teamed. In other words, what is worth using at the risk of being abused? If you also assume infinite extent, then nothing is worth it, because everything can cause harm.

Otherwise, we actually do learn ways to converge towards more generally secure systems. Safer programming languages and safer hardware will lead the way, but it seems much slower this round than the stories we hear about the origins of everything.


No, we just need systems 100-1000x better than prevailing commercial IT systems. Systems that do not quake in their boots at the thought of a single dedicated hacker, but are designed and expected to resist competent teams of tens or hundreds working full time for years, since that is what is needed to reach basic parity.

However, we will not find those techniques by following the standard commercial IT methodologies, which were not designed for such a task. Just ask any architect of these systems whether they could stop a team of 10 people working full time for 3 years. If even the people making a system think it is absurd to defend against such a minimal effort, there is no chance it is actually adequate.

In fact, there is little reason to assume that methodologies which can only get 0.1% of the way to solving the problem, despite decades of work and tens of billions of dollars, will ever converge on an adequate solution. It could be like trying to use the knowledge of horse-buggy makers to figure out how to make a machine faster than the speed of sound. And even if the existing commercial methodologies could eventually get there, it would require 100% improvements year over year for an entire decade.

No, it is far more reasonable to use systems that were actually designed for these environments and have actually demonstrated success, such as systems certified to Orange Book A1, and make them more practical, since, as everybody knows, it is far easier to make a cheap, working design by starting with something that works and making it cheap than by starting with cheap components and figuring out how to make something that works.

As for how you can identify proven success, you can just start with a $1 million red team exercise. If they are able to find any material defects, that means there are likely many such defects, and your processes cannot prevent the occurrence of such trivial flaws and need to be rethought. Only when there are zero material defects are you at the starting line. Note that this is not an exhaustive test; rather, it should be treated like the fizzbuzz of security design, a trivial softball to weed out the people that know nothing and the systems that do not work.


Is this copypasta? Where are the “systems certified to Orange Book A1?”

The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. Initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985, TCSEC was eventually replaced by the Common Criteria international standard, originally published in 2005.

https://en.wikipedia.org/wiki/Trusted_Computer_System_Evalua...


Wait, so you're telling me all I have to do to get at that juicy $1B blue team is hire the $1M red team? Where's the catch!? /s


An Apollo program for information systems? This isn't a bad idea.


Apollo isn't what I'd compare this to. There were a lot of near misses, and there was of course one really big failure (the aftermath of which strengthened its quality culture a lot).

It's not as if this was some kind of profound rock-solid architectural effort from the start. It was a race and speed was of the essence.

In many ways it has a lot in common with information science today. A high speed of innovation. Go fast and break things. High risk high reward.

I think the parent was advocating doing the opposite. Going for rock-solid and secure. Set security design standards that are mandatory, the same way that we have a building code. A good idea, really, but development speed and features will suffer. It'll be a big change for an industry that's totally not set up for that. I doubt many consumers will be happy either: no more huge spec improvements every year or fancy new features to show off.

And it's not all tech either. Right now the main techniques of ingress are phishing and opsec weaknesses. A mindset change is needed, not just for the tech community but for everyone.

It'll happen, but it'll take years, maybe decades. And we'll have our own "Apollo 1"s to underline the importance and keep us on track. We already have, in fact: it was WannaCry that started the awareness process.


>think the parent was advocating doing the opposite. Going for rock-solid and secure.

That is the part of Apollo I was thinking about. Done right, the humans get back to earth...

Here, done right, the data stays where it should be and the systems do what they are intended to do. The people don't get breach notices...

And, like Apollo, lots of that would trickle out into industry and eventually down to ordinary people.

>Right now the main techniques of ingress are phishing and opsec weaknesses. A mindset change is needed, not just for the tech community but for everyone.

I am well aware. It seems solvable, given that a consistent culture and norms can be established, eventually polished, time-tested, and production-proven.


> Just admit that it's all doomed forever?

No, because ultimately security isn't binary. If you can increase the cost to the attacker, that raises the bar for attacking you and reduces the number of potential attackers. And over time security practices do get generally better, raising the tide for all boats. The problems right now are that we're still wrestling with the legacy of foundational systems designed in a pre-internet world, where constant adversarial networking was not the norm, and, more generally, that we keep increasing the attack surface by adding new things to the network. But once we have software/hardware stacks that have all been designed in a post-internet world (yeah, it'll take a while), and once we've finished networking everything that could reasonably be networked, there's hope enough to suspect that it will be possible to close the security gap to all but the most determined adversaries.


> once we've finished networking everything that could reasonably be networked

I highly doubt we'll come to terms on this one.


> What is the result of such spending?

Did they actually spend $1 billion? Or did they spend it on overpriced services? Without knowing what they did, the amount is meaningless.


You present it like the people in charge of doing these things are incompetent, but a lot points towards the domain just being that hard to control.


In OP's precise instance, wouldn't that be some form of incompetence, given the stated goals of that funding? A competent reviewer would potentially be able to see these things in the code without tools.


Why hire a red team if you can't afford to fix the problems they find?


They are doing stuff like that, but it doesn't make for sexy clickbait, so nobody posts it. For example, last week Microsoft released a patch for an Exchange server exploit that the NSA discovered and reported.


It's also worth bearing in mind that the Departments of Commerce, Energy, Homeland Security, and Treasury all have efforts to this effect. And as you say, nobody writes articles about how the Department of Energy helped a solar operator figure out a patching strategy. It's boring.

Energy effort here: https://www.energy.gov/national-security-safety/cybersecurit...


There's also the issue that there's no mechanism to force companies to keep servers up to date or face consequences, so it's only possible to really do half the work. Blue teams could find every vulnerability out there, but you'd still have companies running old versions or refusing to put out patches to their customers (and customers not updating devices deployed in their homes).


The mechanism has sort of revealed itself recently, and it is not a net positive: ransomware.


True enough I guess, though it’s definitely not what I’d pick. Tangentially I wonder why ransomware hasn’t been a thing for longer. Was it an issue of how to do payments that crypto solved?


The consensus seems to be that it is crypto making receiving the ransom easier, and Russia (and other jurisdictions) giving ransomware crews free rein.

I think the second one might be more important, because it is generally known who the big crews are, and it is actually easy to screw up the opsec on crypto.


I mean it's HN, I'm happy to read and hear about that stuff. That is the kind of thing I want on the front page here.

Also, it does mean bad PR on their part, which is part of cultural warfare.


Wait, blue team gets to take credit for fixing the reports of red team?!

This seems like normal old grey programming to me.


I agree with you but please, no more outrage. We have enough outrage. Be a proponent of something without being outraged about it. It's not doing any good to be outraged and it's exhausting.


>"We have enough outrage"

We who? I definitely do not feel that we have enough. If we did, it would've percolated into some noticeable action.


Outrage is not conducive to well thought out action.

Outrage builds and builds pressure until the subject reaches a tipping point, and then comes a knee-jerk reaction meant to reduce the pressure. Such reactions are poorly planned, have no room for nuance or discussion of downsides, and are not aimed at solving the problem but at getting the outraged people off your back.

Joining a mob of angry people is a way to effect change. But in many cases it is not the change we (should) want.


>"Outrage is not conducive to well thought out action."

Tell it to politicians. They're outraged every other minute.

>"But in many cases it is not the change we (should) want."

Did I say we should be outraged in "every case"?

Oh, and thank you for telling me what I (should) want.


Often outrage is all froth and no coffee.


Particularly internet outrage.


percolated or drip?


By the outrage, I mean more where people are spending their time complaining and trying to fix the system. I want people to focus their efforts on getting larger funding into defense (and not defense as in building weapons, but security, if that wasn't clear).


Thanks for protecting us, outrage police!


And does that make you the outrage police police?


I'm just your regular outrage citizen thanking our local outrage police, no sarcasm whatsoever! They have an important duty to protect us from outraged people on the internet, and they put their lives on the line every day.


When we all have outrage, does that make it outrage-us?


The government's strategy is probably a result of it being easier to maintain an advantage by keeping weapons secret than by distributing defences to only the good guys.

It would be interesting to speculate how close we are to replacing all networked services with provably secure implementations (like the work of Project Everest[0]). Of course there's no such thing as perfect security (or perfect proofs), but I think we are close to reaching the point where attacking implementation flaws is less fruitful than attacking the software supply chain.

In fact, we may already have reached that point, so I think that efforts to secure the supply chain (like sigstore[1]) and potential government efforts to attack it (like recent changes to iOS and Android[2]) deserve more focus.

[0] https://project-everest.github.io/

[1] https://security.googleblog.com/2021/03/introducing-sigstore...

[2] https://news.ycombinator.com/item?id=27176690
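As a toy illustration of the integrity idea behind such supply-chain efforts (hypothetical file name and pin; sigstore itself does far more, with signatures and transparency logs): refuse any artifact whose hash does not match a value pinned out of band.

    # Toy sketch of supply-chain pinning (hypothetical file name; real
    # tools like sigstore add signatures and transparency logs on top).
    import hashlib
    import sys

    # Pinned ahead of time, out of band. Placeholder value: the SHA-256
    # of an empty file.
    PINNED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    def verify(path: str) -> None:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != PINNED:
            sys.exit(f"refusing {path}: {digest} does not match pin")
        print(f"{path}: matches pin, ok to use")

    verify("artifact.tar.gz")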


The US gov can easily maintain an offensive advantage while doing massive amounts of patching. Red teaming is much easier.

The problem here is that we're essentially building glass cannons. Yeah, we can hit hard but you can't win a fight that way. Eventually you're going to get punched in the face.


The NSA does perform that function for the government. They protect DoD and IC assets and critical civilian computing infrastructure. They created SELinux and sponsored many of the major cryptographic standards out there. They don't actively provide defense for iOS and Android because those are products owned by trillion-dollar private companies who can pay for their own security, not expect publicly funded agencies to do it for them.

The Internet is an interesting case. Nobody owns it. It isn't even American. The fact that it was originally created by and for universities that all implicitly trusted each other has led to a whole lot of security flaws baked into the core assumptions of the most basic protocols. But the NSA does protect the hell out of military networks. Military and IC networks are absolutely nothing like the Internet. There is an inherent difficulty in bringing the same assurance to public networks, though, because nobody on a military network expects to be anonymous or to have any privacy. Users implicitly trust the network's central authority. They have to because they work for it. Security is a lot easier with a trusted central authority.


>> but if you're not blue-teaming your own country, an adversarial country can use exactly the same tools you're so excited about

You miss the point of these tools. They are not being used to protect country A from country B. They are being used to protect those running country A from those living in country A. Country B and its people are not in the picture. You don't blue team the target because you don't want to make it more difficult to watch. If anything, you want to deny them strong security tools. Your red team can do that very well.


I think the root of this is that you cannot buy "security". It has to be part of the engineering ethos at all levels. That gets really hard to do at scale. Pouring money into blue teams is too late in the process.


This. If you want something secure, everything has to be created from scratch: the OS, the languages, the tooling, every piece of software, etc. Nobody will ever do that. And even if they did, something would fuck it up at a higher level.


Maybe there is genius in the statement "640K is all the memory anybody would ever need on a computer", in that it would be harder to fuck up something much smaller and simpler.

https://www.wired.com/1997/01/did-gates-really-say-640k-is-e...


No, those smaller/simpler systems are much easier to fuck up.

In order to have secure systems, you need to add a lot of complexity overhead that earlier systems simply could not afford. Proper process isolation is a good example: extra complexity, but absolutely necessary unless your chip's limitations make it impractical. Permissions for memory pages, e.g. a W^X policy. Stack canaries. Address space layout randomization. All things that add extra complexity and resource usage, but without which it's much easier to exploit systems, and any single mistake means it's game over.
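One of those mitigations is easy to see from userland. A Linux-only illustration, assuming /proc is available:

    # Linux-only sketch: print this process's first few memory mappings.
    # Run it twice - the base addresses differ on each run, and that
    # per-run shuffle is exactly what ASLR buys you.
    with open("/proc/self/maps") as maps:
        for line in list(maps)[:3]:
            print(line.rstrip())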


I remember that time quite well. The PCs of regular, non-savvy users were full of viruses.


Publicly, Poindexter and the rest of the criminals under Bush Jr. went all-offence and launched the Information Awareness Office [0] to pursue a strategy of Total Information Awareness [1]. They wanted to ramp up ECHELON to hoover up the whole world, started hoarding 0-days, and eventually created a whole industry to shop exploits. Now that is a business; nobody is going to make director leading the blue team.

Privately, I speculate that they also assessed the state of play and just gave up. Microsoft back then still believed that code-signing would fix their bug-of-the-week run. Industry security practices were so weak as to be non-existent. Hell - telnet was still common.

The only nice thing I can say about it is that they had an amazingly honest logo [2]. That is, until Congress freaked out and made them hide it all behind a big SECRET sign. And so we heard little more about it, except via a steady drip of whistleblowers like Mark Klein, Thomas Drake, William Binney, and Snowden.

[0] - https://en.wikipedia.org/wiki/Information_Awareness_Office

[1] - https://en.wikipedia.org/wiki/Total_Information_Awareness

[2] - https://en.wikipedia.org/wiki/Information_Awareness_Office#/...


It seems like you think that red teaming and blue teaming work much like an RPG where you can spend skill points on perks, but blue team perks (like tier-1 endpoint defense) cost more skill points than red team perks. I don't think this is an accurate mental model, and I'd rather frame it like this: exploits are secrets, and when you learn the secret, you can share it with others as well as develop the countermeasure to the exploit. If you spend a lot of money discovering a useful exploit, it is by definition something nontrivial that is unlikely to be discovered by regular hackers unless it is leaked or discovered after careless usage. If you discover an exploit that an enemy will soon discover, it is to your advantage to publish the countermeasures before your opponent finds and weaponizes it.


Also, recently the FBI "hacked" into Exchange servers that were vulnerable (with court authorization) to patch them [1], so it does happen. But I agree with your sentiment that it doesn't happen as often as it should.

The public perception seems to be that the US spends fewer resources hardening its own and its people's defenses than it does surveilling people.

[1]: https://techcrunch.com/2021/04/13/fbi-launches-operation-to-...


Speculating from a 10,000-foot view: it may just be the case that, in any real conflict, phones, the internet, etc. are doomed no matter what. It would be so easy to cripple digital infrastructure with physical means that it probably doesn't make much sense to invest in digital prevention.

Physical safeguards are easier to implement, easier to demonstrate when funding is decided, and generally the better investment, considering most compromises of government digital infrastructure come from people with direct physical access (plugging in a dropped USB, spies, phishing, etc.).


> increasing security in Android and iOS

This would make it more difficult to hack those OSes in other countries, and within their own borders too; why would they make their own job harder?

Yeah, I know, their job shouldn't be to hack others, but that's how it is today.


You can't measure the true effectiveness of blue teams, so in MBA terms they are a useless cost center - at best the sort of thing that should be outsourced to the lowest bidder. Then an exploit happens... See! Look how bad the blue team is.



