One thing I've never understood is why blue teams don't get as much funding. Cyber defense is much harder than cyber offense. I know there is a lot you can do by tracking citizens and a lot of information you can get, but if you're not blue teaming your country, then an adversarial country can use exactly those same tools you're excited about using on your adversaries. I feel like the red teams get all the money and the blue teams get pushed off to the side. I do want to keep red teams, but I want to see the NSA also doing bug bounties, increasing security in Android and iOS, strengthening the internet, etc. Why is this not happening? Why aren't we outraged about this?
Blue teams do get lots of funding (edit: I am speaking in general, not about government spending). It is just that their strategies are so unbelievably bad that no amount of money can produce an adequate system.
Blue teams with a $1 billion/year budget cannot prevent total compromise by red teams with a $1 million/year budget. If you must outspend your attackers by 1000x, you are doomed.
For instance, in 2015 Microsoft committed to spending $1 billion/year on security research and development to secure their cloud, the second largest in the world [1]. What is the result of such spending? A little over a month ago, the default management agent they ship for managing Linux on Azure had a security defect that allowed local privilege escalation by sending an empty password [2]. Their processes are so bad that despite spending $1 billion/year they cannot detect and prevent themselves from releasing security-101 defects in default installs of widely deployed products. This is indicative of a grossly inadequate process, in much the same way that a car factory delivering cars with no brake lines would indicate that the factory and its manufacturing process need to be completely redesigned from the ground up and the entire team overseeing it replaced.
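For reference, the bug class being described (an authentication check that only rejects credentials that are present and wrong, so an empty one falls through to success) looks roughly like this; the names and logic are an illustrative Python sketch, not Microsoft's actual code:

```python
# Illustrative sketch of the "empty credential" bug class, not the
# actual Azure agent code. The broken check only rejects credentials
# that are present AND wrong, so an empty password slips through.

USERS = {"admin": "s3cret"}

def login_broken(user, password):
    # Bug: an empty/None password makes the whole condition falsy,
    # so the rejection branch is skipped and we fall through to True.
    if password and USERS.get(user) != password:
        return False
    return True

def login_fixed(user, password):
    # Fail closed: reject missing credentials outright, then require
    # an exact match against the stored secret.
    if not user or not password:
        return False
    return USERS.get(user) == password

assert login_broken("admin", "") is True    # authentication bypass
assert login_fixed("admin", "") is False    # bypass closed
assert login_fixed("admin", "s3cret") is True
```

The fix is the classic fail-closed pattern: treat the absence of a credential as a rejection, never as a skipped check.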
The outrageous part is not that security is not being funded, it is that organizations and systems displaying such fundamental errors continue to get vast sums of money poured into them.
The real problem is that security needs to be inherent to how developers and managers work. Instead, security is often a bolt-on, an afterthought, or put off until "thing $x is done".
One example: many popular frameworks. How do you audit every single piece of code brought in by, say, Laravel? And how do you do it if developers want to be able to reuse code?
Answer? You cannot. At all. You can't even reliably handle license compliance.
Yet we use such frameworks, because security is not first, or sometimes even last. It's not part of the process; it's a thing to think about when a dev or a department has free time.
Many companies have a security team, an audit team. What?! You don't get secure by having people look at security after development, then spending time fighting over fiscal concerns to get a code rewrite.
I think none of this will ever be fixed until the CTO position becomes like the CFO position: mandatory requirements, jail time for CTOs if they breach certain regulations, and the authority for a CTO to tell everyone from the board to the CEO, "no, thing X will be done".
Yet no one wants that, because of cost, and a desire to get to market first.
This is only true in our existing software ecosystem, where no one bothers to prove any of their assumptions hold (if they even realize they're making assumptions). In a software ecosystem designed with provable correctness in mind, open source software would be at least as viable as it is now, while being provably correct with respect to high level specifications which could much more feasibly be audited.
I'm in so much agreement, though I would disagree with the last line. I've seen people wave away security concerns because they don't want to be bothered.
Developers waste plenty of time (we're all on here chatting away, for one!), but asking someone to even think about security seems to offend them the way asking a teenager to clean their bedroom would.
As long as the solution demands radically slower development, there will be problems. Do you really believe that a team of 5 overworked devs will re-implement all the functionality pulled in by Laravel, plus their app, without introducing yet more vulnerabilities?
The point of a framework was, historically, a project which implemented secure, shared, many eyes-on code. Proper code reuse.
Now, with 20k node packages pulled in as deps for a couple of 10-line libraries, or with hundreds of phars as deps, often for 10-line Composer-sourced libraries, you often have almost no eyes on that code reuse.
Devs just indiscriminately pull stuff in and don't care if a package is unmaintained, or was written by someone for fun 5 years ago and abandoned.
This model isn't sane code reuse.
At least with code written by an on-site dev, one can have code reviews, sign-offs, etc.
None of this even touches on things like hostiles taking over packages, either.
I'll put this another way. Do you audit every single non-core package installed from random sites, at each update, and all its dependencies?
Or, do you even audit each package, all deps, to make sure the project is active, and seems to have a competent admin?
Because the code-reuse logic depends on lots of eyes, and abandoned, low-use stuff doesn't jibe with that.
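To put a rough number on that audit surface, here is a sketch (Python standard library only; importlib.metadata ships with Python 3.8+) that enumerates every installed distribution and its declared dependencies, i.e. everything that would need eyes on it at every update:

```python
# Enumerate the audit surface of the current environment: every
# installed distribution and the dependencies it declares. Each entry
# is code someone would have to review at every update.
from importlib.metadata import distributions

def audit_surface():
    surface = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        # requires may be None when no dependencies are declared
        surface[name] = list(dist.requires or [])
    return surface

if __name__ == "__main__":
    surface = audit_surface()
    edges = sum(len(deps) for deps in surface.values())
    print(f"{len(surface)} packages, {edges} declared dependency edges")
```

Run against a typical framework project, the counts alone make the point: nobody is reading all of that at every update.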
It has been within our technical capabilities to totally seal off the land border since WWII. What is lacking is the will to do so. There are many reasons we let the borders leak, that aren't too hard to figure out.
Seismic, Infrared, Radar, and other monitoring systems could be deployed and ALL land crossings would be known soon enough to stop anyone crossing. Of course, a shoot on sight order is the part nobody has the stomach for, nor should they, we're not at war.
Naval patrols could greatly curtail routing around the ends.
Capability based security could greatly curtail the leverage you get from access to a given computer. The current default access systems we're using everywhere are about as effective as building a fortress out of crates of C-4 explosive.
The Berlin wall was orders of magnitude shorter, had a much smaller population of people trying to cross it, guards with shooting orders, several decades of progressive technological improvements, pervasive surveillance of the people trying to cross it before the act. People still managed to cross it.
More illegal immigrants cross into California in a single night than in the entire history of the Inner German Border.
That said, I don't think any sane person would favor using things like SM-70 directional mines to secure the American border. We can make a dent on illegal labor and more importantly get some idea of who these people are without resorting to the tactics of the DDR.
I don't object to anything you've said; I was just replying to the absoluteness of the statement made by GP. Making a land border of that size absolutely safe against sufficiently motivated individuals is harder than they seem to think.
This is such a ridiculous comment. It has never been within the technical capabilities of humanity to seal off any land border, because any controls that you can put in place, including all of the technology that currently exists that you could throw at it, are still designed by humans, depend absolutely on the collaboration of the operators of those defenses, and are, immediately upon deployment, subject to attack by your adversaries.
Assuming you are referring to the United States, even if you deployed all of those sensors at all crossings, it would not be sufficient to deter crossings; those only provide detection capabilities. Yes, you could radically increase the number of people guarding the border, but that would require a radical investment in enforcement, and such an investment would bankrupt the country for no real practical effect. Even if you did fund and build it, consider the maintenance cost of preserving the effectiveness of such a barrier along the northern border with Canada: a nearly 9,000 km border that, about 200 km from the West Coast, turns into extremely inhospitable, even hostile, terrain and climate for a minimum of three to four months out of the year. This terrain is absolutely awful to stalk and patrol in, but is a pleasure to sneak through (at least I really enjoyed it back in my hunting and military training days :D)
It is not feasible to scale the navy to the point where it would be practical to prevent access to the continental United States, for example, let alone the island territories. Attempting to do so would bankrupt the nation. Hell, modern technology can't even reliably prevent folks from bobbing over from Cuba despite the relatively small attack surface there.
Even if you managed to invest more than the current percentage of GDP (which is already ridiculous), you would also have to confront an increasingly hostile domestic population that is used to freedom of movement, is already paying punishingly high taxes in contrast to the value and benefit received for them, and is facing rapidly diminishing quality of life.
Good luck with that!
Oh, and one last note about capability-based security: it works great in labs and somewhat small environments, but I would appreciate a practical explanation of how you would scale it up to an environment operating in 87 distinct countries (with disparate regulatory requirements that preclude centralized management), 3,600 offices, and 800,000 employees, with approximately one third of those employees having employer-managed devices and notebooks, and a total of 2,300 distinct software applications to manage (granted, it's been about 14 years since I worked in that environment, but that company has grown substantially since then).
I'm listening for a real, practical suggestion here, not being facetious.
The thing about the prevailing commercial IT methodologies is that they cannot deliver at the scale you are describing while also providing security within a factor of 10x of adequate. "Adequate" here means they would be happy to put the numerical cost of completely compromising their systems into their shareholder communications and consumer advertising, and to prove it by announcing a bounty for that amount at DEF CON and challenging anybody to claim it. There is no person with any technical competence and signing authority at any organization of that scale who would sign off on such an idea, as they all know exactly how preposterous such a claim would be. The only thing that can be said for these systems is that they are functional at scale while being completely inadequate with respect to security, which, if security is not important, as is the case for most organizations, is a perfectly valid tradeoff.
However, if adequate security is actually important, such as when lives are at stake, these methodologies completely fail to fulfill such requirements. This is not merely the case when tackling the hard problem of large-scale systems, where failure to solve the hardest available problem might be forgiven; it is the case at every scale. At least capability-based systems have demonstrated adequate security at a usable scale; the prevailing techniques cannot even do that. There is little reason to believe that abject failure at scale, plus an inability to solve any interesting sub-problem or smaller-scale problem, is a better path to success at scale than attempting to scale up small successes.
I would like to challenge the notion that the executives at businesses at this scale are unwilling to communicate a numerical cost.
I can't say which organization it was, or when, because I maintain enough of a public profile that it could leak specific information, but while doing a security consulting gig for a major global financial organization between 2001 and 2011, the team I was working with identified numerous serious concerns. The client company agreed that they were real risks, and even likely risks, but the financial impact, even if those risks were realized multiple times, was far below their documented thresholds for risk tolerance. In other words, the individual risks would have had to exceed $10M in impact per year, for multiple years, before the financial impact of those issues would justify the massive cost of investment to remediate them. It's not that they didn't take them seriously, but the cost of the new system in development, which included addressing those risks, was so high that starting separate remediation efforts would have detracted from those in-progress changes and introduced risk of the main replacement project failing. It was easier, lower risk, and lower cost to just accept the impact of fraud, including compensating victims, than to try to fix it.
Whether or not those "self-insuring" risk tolerance figures are public is a different matter.
I do agree on the value and efficacy of capability-based systems. However, the cost and effort to deploy and manage them, even in life-critical environments, is so high that they are only practically effective in centrally planned environments where the funding for systems security is strongly decoupled from how that funding is acquired. In other words, capability-based systems are, and will only be, effective at scale in environments such as government (and even then, only military) or publicly funded, single-payer, centrally managed health care systems (which don't really exist; most publicly funded healthcare in the world is centrally and publicly funded but delivered by private service providers who bill the public funder).
It's an effective model where the cost of reliably deploying and managing capability based security can be externalized from the line of business that requires that level of security.
>Oh, and one last note about capability based security - it works great in lab and somewhat small environments, but I would appreciate a practical explanation of how you would scale up capability based security to an environments operating in 87 distinct countries ...
Most of the advantage of having a capability system is that it allows users to more transparently control what resources are given to a program at runtime. Instead of trusting the application to go pick a file and do something, the OS relies on a PowerBox UI to allow the user to directly pick files.
The closest analogy is that of a wallet (or purse) with currency in it. You choose directly which money you wish to use in a transaction, and that's the most you can lose if you make a mistake. There's no equivalent in Linux, Mac, Windows, etc.; you're forced into a situation where you hand your wallet to code and hope for the best.
Users aren't the great weakness we've grown to think of them as, they just have insanely crippled tools.
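The wallet analogy translates into code fairly directly. Here is a minimal sketch in Python (purely illustrative; real capability systems enforce this at the OS level rather than by programmer convention), where the caller hands over one opened resource instead of ambient filesystem authority:

```python
import io

def word_count_ambient(path):
    # Ambient authority: this function can open ANY path the process
    # can reach; a bug or compromise here exposes the whole filesystem.
    with open(path) as f:
        return len(f.read().split())

def word_count_capability(readable):
    # Capability style: the caller picked the file (the "powerbox"
    # role) and passes only an opened, readable handle. That one
    # object is the most this code can touch.
    return len(readable.read().split())

if __name__ == "__main__":
    doc = io.StringIO("the quick brown fox")
    print(word_count_capability(doc))  # prints 4
```

The second function is no harder to write or call; the difference is that a mistake in it costs you one file, not your whole wallet.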
In Berlin alone, which was by far the most heavily fortified part of the "Iron Curtain," more than 5,000 people successfully fled across the border despite the "Berlin Wall."
More people than that cross in a single California Border Patrol district in a single night.
That said, I don't envision shoot on sight orders or the use of SM-70 directional mines on the American border. What I find particularly interesting is the complete reversal of positions from years ago. CSPAN has a video of Dianne Feinstein talking about border security with AG Janet Reno, and you could swear they were talking points cribbed from a Trump strategy session.
Increased immigration tends to lower wages which favors employers. OTOH, protectionist immigration policies tend to keep the status quo especially for low-skilled labor, whom you would think the DNC is pledged to protect. It's just interesting to see how political fashions have switched. Is it the fear of terrorists crossing the border or..?
> What I find particularly interesting is the complete reversal of positions from years ago.
It hasn’t reversed, though the right-wing bipartisan consensus on the issue of the early 1990s has gone.
> Increased immigration tends to lower wages which favors employers.
Immigration mostly increases and decreases by economic conditions (at home and abroad). Immigration policy mostly influences the proportion of immigrants with legal status. More immigrants with legal immigration status means immigrants are less likely to be in fear of asserting things like labor rights, whereas more without legal status means more fear of things like that.
> It’s just interesting to see how political fashions have switched.
They haven’t switched, at least not in well over a generation. Republicans have been for narrower legal immigration policies for decades (the backward-reaching amnesty under Reagan was a component of tighter forward policies). There was a brief period of general bipartisan consensus on the direction of change (though still distance on the details), but no reversal.
This has literally nothing to do with labor rights. Increase the supply of something and the cost goes down. Illegal labor gets here faster than legal labor and in uncontrollable numbers. You're right that you have to pay legal immigrants more, but given the ready supply of illegal immigrants it's not hard for an employer to look the other way.
If one wanted to control wages, one would want to ensure the unskilled labor supply was somewhat limited. Otherwise an employer can drive down to minimum wage rather easily.
One might do that by better securing the borders. I'm purposely leaving out the antiterrorist wish of at least getting some idea of who's coming in and out of Disneyland so to speak.
Are you trying to hold up the autocratic client state that failed to stop 50,000 people from escaping during its 45-year duration as an example? The one that was facing the prospect of economic ruin when its controlling state had to withdraw economic and military support due to its own policies of perestroika, which sought to address the impending economic collapse of Soviet Russia?
The fascist client state that collapsed due to an increasingly hostile domestic population who were angered by the growing economic challenges, and were tired of the continued state violence?
The same German Democratic Republic that has been consigned to the historical scrap heap of failed totalitarian regimes?
One might even consider holding North Korea up as a current example, since they have outlasted the GDR by 31 years so far, but they are also a client state, and their economy has been in relatively steady decline. The only thing keeping North Korea a functioning country is military and economic support from China, with even Russia's central bank continuing to push back against further economic development with North Korea.
Bit of an aside, but it's only a "1000:1" problem because
- The response can't have a blast radius
- Human software systems can't be formally verified in a tractable amount of time
- etc.
I'd go as far as to wager that if it weren't for MAD, the US might be sending SEAL or Special Forces teams in to deal with hackers.
When we have an AGI, perhaps it will develop systems impervious to intrusion. Or maybe it'll take the simpler approach and eradicate all humans and other capable AI actors.
At one point we came relatively close to using a nuke to deal with a hacking attempt, though taking down NORAD is going to get people nervous. Not everyone can respond to hackers by building an independent global network, but it is one model.
Unfortunately it was a first-hand account; Solar Sunrise in 1998 was the event, but many of the details aren't exactly public. The critical bits were that it was being routed through a compromised machine in Baghdad, many DoD systems were running email alongside more mission-critical applications, and the timing was the lead-up to a possible war with Iraq.
Read up on Solar Sunrise; an infosec teacher of mine was involved on the Army side and said it was seriously being considered because of the timing and because email was being run on some extremely mission-critical systems. This was the lead-up to a possible war with Iraq, and things were being routed through a machine in Baghdad. So the possibility that this was a military attack was seriously being considered.
And no, I don’t think it was likely, but there are only so many times in history that senior military people have referred to nuking something in a non-joking manner.
Having just read a couple articles about it, I see no reference to nuclear strike anywhere. Do you understand that proposition is to commit genocide at a minimum, and maybe start WW3?
If you wanted to use military force, you would create a power blackout in target area by destroying closest substation or powerplant.
"In the end, two California High School students were arrested and pled guilty. Their mentor, an 18 year-old Israeli, was also arrested"
Surely no one is mad enough to nuke their own country? Or would you nuke the wrong country instead?
Articles are written after the conclusion was already reached. In the moment, you don’t know who’s behind the attack, just that you can’t tell if ICBMs are incoming to the US, and the attack seems to be coming from a country we are literally preparing to go to war with.
It’s at that moment the options are to successfully hack the machine, do nothing and hope no physical attack is incoming, try to penetrate serious air defenses with conventional aircraft, or use an ICBM. I don’t mean to suggest people were requesting authorization from the president for a nuclear strike, just that it was considered, which is a serious escalation for a cyber attack.
As to the articles, you will note Iraq is mentioned many times by military officials, even though the attacks originated from teens in the US and Israel. Some of that’s timing, but the other part is, as I said, that the machine happened to be located in Baghdad. Also, the press briefing before they arrested the teens shows just how seriously this was being taken. It really wasn’t business as usual.
Why can't we tell? The US has a world-wide network of early-warning radars and satellites specifically built for this purpose during the Cold War. Various allied countries have their own networks too, and would warn the US.
Iraq never had ICBMs and no-one ever claimed that they do.
I am not seeing a plausible scenario of 'unnoticed incoming ICBM' on US.
"try and penetrate serious air defenses with conventional aircraft, or use an ICBM"
The US had bombed Iraq (and many other countries) multiple times by then with minimal losses. I am struggling to see how nuclear holocaust was an appealing option.
Incredible things do sometimes happen, so I would be interested if there is an article explaining the events or this line of thinking.
Lack of ICBMs was assumed, but in the moment when your early warning system goes down while you’re preparing for an invasion, it’s hard to stay rational.
The plausible scenario as to why the warning system went down is that ARPANET was designed to keep military computers in contact through a nuclear attack. ARPANET became the internet, but this was before SIPRNet, and so it was still being used to keep those critical systems in contact. Hackers compromised those systems because the military was using the same sendmail software as everyone else.
Now, to us looking backwards, it’s like: wait, what about computer security? But it largely didn’t exist back then.
DSP doesn't depend on any external networks. There are mobile receiver terminals that can receive their transmissions and relay warning to combatant commanders. There's actually a funny little national guard post that houses those doomsday trucks. Another interesting garage is at F.E. Warren (mobile STRATCOM survivable NC3).
Assuming they didn’t depend on ground stations for anything in 1998, that might poke a hole in the story. On the other hand, you can’t exactly use one of those inside NORAD’s bunker, so they could have been blind until someone set up an independent relay.
The whole early warning system was not down. Long-range radars are independent, not connected to the internet, and in the worst case they could use a phone and smoke signals. There was no credible nuclear threat.
Given that you can't refer anyone to anything written, this whole story defies logic and smells of a dead possum.
I think the point is that there might be a bunch of independent bad guys, but there is only one "you". You're not allowed to make a mistake, ever. Ten bad guys could get caught, but if the 11th breaks in, you lose. They only need to get lucky once.
For sure any red team needs a blue team, if only to defend against other red teams. The red team is still advantaged in this regard by the blue team’s relatively lit-up profile.
What stops someone from designing a blue team technique that is just red team techniques applied to your own product prior to release? I suspect MS does exactly that, but red team productivity varies.
This is known as "purple teaming". You have security team segments actively trying to attack your own systems, using both established tooling/techniques, but also developing bespoke attacks that are specific to your systems.
Then, and this is crucial, they not only teach the blue team from their findings - they also rotate out to blue teams, to become the defenders themselves. At the same time, some of the blue team rotates in. Rinse and repeat. The whole point is that you have to understand both sides properly, and continuously work with the teams involved. Otherwise you're nothing more than a consultant.
Nothing stops that; it is one of the most routine things you could do. NCC Group exists to provide this service. HackerOne exists to provide this service. Having an external team periodically attempt to penetrate your defenses is legally required for anyone who processes payment card information (in the US; I don't know what PCI requirements are like elsewhere).
So what do we do? Just admit that it's all doomed forever? Perhaps the only question left at that point would be exactly what even needs to be blue and red team'd. In other words what is worth using at the risk of being abused. If you also assume infinite extent, then nothing is worth it because everything can cause harm.
Otherwise, we actually do learn ways to converge towards more generally secure systems. Safer programming languages and safer hardware will lead the way, but it seems much slower this round than the stories we hear about the origins of everything.
No, we just need systems 100-1000x better than prevailing commercial IT systems. Systems that do not quake in their boots at the thought of a single dedicated hacker, but are designed and expected to resist competent teams of tens or hundreds working full time for years, since that is what is needed to reach basic parity.
However, we will not find those techniques by following the standard commercial IT methodologies, which were not designed for such a task. Just ask any architect of these systems if they could stop a team of 10 people working full time for 3 years. If even the people making it think it is absurd to defend against such a minimal effort, there is no chance it is actually adequate.
In fact, there is little reason to assume that the methodologies that can only get 0.1% of the way to solving the problem despite decades of work and tens of billions of dollars will ever converge to an adequate solution. It could be like trying to use the knowledge of horse buggy makers to determine how to make a machine faster than the speed of sound. And even if it could eventually get there it would require 100% improvements year over year for an entire decade to get there from existing commercial methodologies.
No, it is far more reasonable to use systems that were actually designed for these environments and have actually demonstrated success, such as systems certified to Orange Book A1, and make them more practical since, as everybody knows, it is far easier to make a cheap, working design by starting with something that works and making it cheap than starting with cheap components and figuring out how to make something that works.
As for how you can identify proven success, you can just start with a $1 million red team exercise. If they are able to find any material defects, that means there are likely many such defects, and your process cannot prevent the occurrence of such trivial flaws and needs to be rethought. Only when there are zero material defects are you at the starting line. Note that this is not an exhaustive test; rather, it should be treated like the fizzbuzz of security design: a trivial softball to weed out the people who know nothing and the systems that do not work.
Is this copypasta? Where are the “systems certified to Orange Book A1?”
The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. Initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985, TCSEC was eventually replaced by the Common Criteria international standard, standardized as ISO/IEC 15408 in 1999.
Apollo isn't what I'd compare this to. There were a lot of near misses and there was of course one really big fail (the aftermath of which strengthened its quality culture a lot)
It's not as if this was some kind of profound rock-solid architectural effort from the start. It was a race and speed was of the essence.
In many ways it has a lot in common with information science today. A high speed of innovation. Go fast and break things. High risk high reward.
I think the parent was advocating doing the opposite. Going for rock-solid and secure. Set security design standards that are mandatory. The same way that we have a building code. Good idea really but development speed and features will suffer. It'll be a big change for an industry that's totally not set up for that. I doubt many consumers will be happy either, no more huge spec improvements every year or fancy new features to show off.
And it's not all tech either. Right now the main technique of ingress is phishing and opsec weaknesses. A mindset change is needed, not just for the tech community but everyone.
It'll happen, but it'll take years, maybe decades. And we'll have our own "Apollo 1"s to underline the importance and keep us on track. We already have: it was WannaCry that started the awareness process.
No, because ultimately security isn't binary. If you can increase the cost to the attacker, that raises the bar for attacking you and reduces the number of potential attackers. And over time security practices do get generally better, raising the tide for all boats; the problems right now are that we're still wrestling with the legacy of foundational systems designed in a pre-internet world where constant adversarial networking was not the norm, and more generally we keep increasing the attack surface by adding new things to the network. But once we have software/hardware stacks that have all been designed in a post-internet world (yeah, it'll take a while) and once we've finished networking everything that could reasonably be networked, there's hope enough to suspect that it will be possible to close the security gap to all but the most determined adversaries.
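That "increase the cost to the attacker" point is easy to quantify for brute force specifically. In the sketch below the guessing rate is an assumed round number, not a benchmark; the shape of the curve is the point, since every added bit of entropy doubles the expected work:

```python
# Back-of-envelope cost of exhaustive search as key/password entropy
# grows. GUESSES_PER_SECOND is an assumed offline guessing rate.
GUESSES_PER_SECOND = 1e10

def years_to_exhaust(bits):
    # Worst-case time to try the full keyspace of `bits` bits.
    seconds = (2 ** bits) / GUESSES_PER_SECOND
    return seconds / (3600 * 24 * 365)

for bits in (40, 64, 80, 128):
    print(f"{bits:3d} bits -> ~{years_to_exhaust(bits):.2e} years")
```

At 40 bits the search finishes in minutes; at 128 it outlasts the universe. Raising the attacker's cost does not make security binary, but it prunes the attacker population dramatically.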
In OP's precise instance, wouldn't that be some form of incompetence, given the stated goals of that funding? A competent reviewer would potentially be able to see these things in the code without tools.
They are doing stuff like that, but it doesn't make for sexy clickbait, so nobody posts it. For example, last week Microsoft released a patch for an exchange server exploit that NSA discovered and reported.
It's also worth bearing in mind that the Departments of Commerce, Energy, Homeland Security, and Treasury all have efforts to this effect. And as you say, nobody writes articles about how the Department of Energy helped a solar operator figure out a patching strategy. It's boring.
There's also the issue that there's no mechanism to force companies to keep servers up to date or face consequences so it's only possible to really do half the work. Blue teams could find every vulnerability out there but you'd still have companies running old versions or refusing to put out patches to their customers (and customers not updating devices deployed in their home).
True enough I guess, though it’s definitely not what I’d pick. Tangentially I wonder why ransomware hasn’t been a thing for longer. Was it an issue of how to do payments that crypto solved?
The consensus seems to be that it is crypto making receiving the ransom easier, and Russia (and other jurisdictions) giving ransomware crews free rein.
I think the second one might be more important, because it is generally known who the big crews are, and it is actually easy to screw up the opsec on crypto.
I agree with you but please, no more outrage. We have enough outrage. Be a proponent of something without being outraged about it. It's not doing any good to be outraged and it's exhausting.
Outrage is not conducive to well thought out action.
Outrage builds and builds pressure until the subject reaches a tipping point and comes with a knee-jerk reaction meant to reduce the pressure. Such reactions are poorly planned, have no room for nuance or discussion of downsides, and are not aimed at solving the problem but at getting the outraged people off your back.
Joining a mob of angry people is a way to effect change. But in many cases it is not the change we (should) want.
By outrage I mean more where people are spending their time: complaining versus trying to fix the system. I want people to focus efforts on getting larger funding into defense (and not defense as in building weapons, but security, if that wasn't clear).
I'm just your regular outrage citizen thanking our local outrage police, no sarcasm whatsoever! They have an important duty to protect us from outraged people on the internet, and they put their lives on the line every day.
The government's strategy is probably a result of it being easier to maintain an advantage by keeping weapons secret than by distributing defences to only the good guys.
It would be interesting to speculate how close we are to replacing all networked services with provably secure implementations (like the work of Project Everest[0]). Of course there's no such thing as perfect security (or perfect proofs), but I think we are close to reaching the point where attacking implementation flaws is less fruitful than attacking the software supply chain.
In fact, we may already have reached that point, so I think that efforts to secure the supply chain (like sigstore[1]) and potential government efforts to attack it (like recent changes to iOS and Android[2]) deserve more focus.
The US gov can easily maintain an offensive advantage while doing massive amounts of patching. Red teaming is much easier.
The problem here is that we're essentially building glass cannons. Yeah, we can hit hard but you can't win a fight that way. Eventually you're going to get punched in the face.
The NSA does perform that function for the government. They protect DoD and IC assets and critical civilian computing infrastructure. They created SELinux and sponsored many of the major cryptographic standards out there. They don't actively provide defense for iOS and Android because those are products owned by trillion-dollar private companies who can pay for their own security, not expect publicly funded agencies to do it for them.
The Internet is an interesting case. Nobody owns it. It isn't even American. The fact that it was originally created by and for universities that all implicitly trusted each other has led to a whole lot of security flaws baked into the core assumptions of the most basic protocols. But the NSA does protect the hell out of military networks. Military and IC networks are absolutely nothing like the Internet. There is an inherent difficulty in bringing the same assurance to public networks, though, because nobody on a military network expects to be anonymous or to have any privacy. Users implicitly trust the network's central authority. They have to because they work for it. Security is a lot easier with a trusted central authority.
>> but if you're not blue teaming your country then an adversarial country can use exactly all those same tools you're excited about
You miss the point of these tools. They are not being used to protect country A from country B. They are being used to protect those running country A from those living in country A. Country B and its people are not in the picture. You don't blue team the target because you don't want to make it more difficult to watch. If anything, you want to deny them strong security tools. Your red team can do that very well.
I think the root of this is that you cannot buy "security". It has to be part of the engineering ethos at all levels. This gets really hard to do at scale. Pouring money into blue teams is too late in the process.
This. If you want something secure, everything has to be created from scratch: the OS, languages, tooling, every piece of software, and so on. Nobody will ever do that. And even if they did, something would fuck it up at a higher level.
Maybe there is a genius in the statement “640K is all the memory anybody would ever need on a computer” in that it would be harder to fuck up something much smaller and simpler.
No, those smaller/simpler systems are much easier to fuck up.
In order to have secure systems, you need to add a lot of complexity overhead that earlier systems simply could not afford. Proper process isolation is a good example - extra complexity, but absolutely necessary unless your chip limitations make it impractical. Permissions for memory pages, e.g. a W^X policy. Stack canaries. Address space layout randomization. All things that add extra complexity and resource usage, but without which it's much easier to exploit systems, and any single mistake means it's game over.
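Of those mitigations, the stack canary is simple enough to sketch. Below is an illustrative Python simulation (the name `run_with_canary`, the 16-byte buffer, and the 8-byte canary are all invented for the example) — not how a compiler actually does it. Real implementations, like GCC's `-fstack-protector`, place a secret value between local buffers and the saved return address and abort the process if it changes before the function returns:

```python
import secrets

def run_with_canary(write_len: int) -> str:
    """Simulate a stack frame: a 16-byte buffer guarded by an 8-byte canary."""
    canary = secrets.token_bytes(8)             # fresh random value per "frame"
    frame = bytearray(16) + bytearray(canary)   # buffer, then canary

    # An unchecked write that may run past the buffer, as in a classic overflow.
    payload = b"A" * write_len
    frame[:len(payload)] = payload

    # Before "returning", verify the canary is intact.
    if bytes(frame[16:24]) != canary:
        return "stack smashing detected"        # real code aborts the process
    return "ok"

print(run_with_canary(8))     # in-bounds write leaves the canary alone
print(run_with_canary(20))    # overflow clobbers the canary and is caught
```

The randomness is the point: an attacker who cannot read the canary value cannot overwrite it with the right bytes, so an overflow is detected before control flow is hijacked.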
Publicly, Poindexter and the rest of the criminals under Bush Jr. went all-offence and launched the Information Awareness Office [0] to pursue a strategy of Total Information Awareness [1]. They wanted to ramp up ECHELON to hoover the whole world, started hoarding 0-days, and eventually created a whole industry to shop exploits. Now that is a business, nobody is going to make director leading the blue team.
Privately, I speculate that they also assessed the state of play and just gave up. Microsoft back then still believed that code-signing would fix their bug-of-the-week run. Industry security practices were so weak as to be non-existent. Hell - telnet was still common.
The only nice thing I can say about it was that they had an amazingly honest logo [2]. That is, until congress freaked out and made them hide it all behind a big SECRET sign. And so we heard little more about it except via a steady drip of whistleblowers like Mark Klein, Thomas Drake, William Binney, and Snowden.
It seems like you think that red teaming and blue teaming works much like an RPG game where you can spend skill points on perks, but blue team perks (like tier 1 endpoint defense) cost more skill points than red team perks. I don't think this is an accurate mental model, and I'd rather frame it like this: exploits are secrets, and when you learn the secret, you can share it with others as well as develop the countermeasure to the exploit. If you spend a lot of money discovering a useful exploit it is by definition something nontrivial that is unlikely to be discovered by regular hackers unless it is leaked or discovered after careless usage. If you discover an exploit that an enemy will soon discover, it is to your advantage to publish the countermeasures to the exploit before your opponent discovers and weaponizes it.
Also, recently, FBI "hacked" in to Exchange servers that were vulnerable (with court authorization) to patch them [1], so it does happen. But I agree with your sentiment that it doesn't happen as often as it should.
The public perception seems to be that the US spends fewer resources hardening its own and its people's defenses than it does surveilling people.
Speculating at a 10,000 foot view: It may just be the case that, in any real conflict, phones, the internet, etc. are doomed no matter what. It would be so easy to cripple digital infrastructure with physical means it probably doesn't make much sense investing in digital prevention.
Physical safeguards are easier to implement, easier to demonstrate when funding is decided, and generally the better investment, considering most compromises of government digital infrastructure come from people with direct physical access (plugging in a dropped USB, spies, phishing, etc.).
You can't measure the true effectiveness of blue teams so in MBA terms they are a useless cost center. At best the sort of thing that should be outsourced to the lowest bidder. Exploit happens... See! Look how bad the blue team is.
so it sounds like American companies helping the US government spy on Americans is OK, American companies helping the US government spy on people in other countries is OK, American companies helping foreign governments spy on their own people is not OK. If they believe these tools are effective, it sounds like they want to prevent foreign governments from using American technology to defend themselves against American agents. “Computer powered super surveillance is our thing, you don’t get to play with it” is how it reads to me.
Is there any other way to read into the government's authority at large? The entire premise of government is that it assumes power over others. Monopoly on violence. If said power becomes distributed to other parties, it becomes moot. How that power is wielded, and with what intent, are separate discussions, but the power itself is core to the idea of government.
I feel like the idea that "The US government wants to have certain powers, and wants no one else to have them" is just baked into the fact that it is the US government, and should neither surprise nor alarm anyone who isn't an Anarchist.
> Is there any other way to read into the government's authority at large? The entire premise of government is that it assumes power over others. Monopoly on violence.
Ya, monopoly on violence by... who exactly? Who is wielding the power in the government? Afghanistan was horrible, right? But US just kinda let that go on for 20 years or whatever. News didn't really talk about it. That's kind of like... a lot of suffering while just pretending nothing is happening. I guess if it was you wielding the power in any real way, or i mean if the people who represented you cared about what you thought, they might have checked in to update you and see what you thought. But they didn't, so... wonder what happened there. And since we are talking about violence, i guess we all agree constructing dragnet surveillance with no regard for human and civil rights is violent, right?
Also, monopoly on violence, you mean over other states right? So then the other states... don't have a monopoly, or? Isn't that just imperialism? I've never heard that phrase used this way, i usually hear it used as monopoly on violence over the people (which the US absolutely is but will never admit because then they wouldn't be able to call every non-US gov in the world 'authoritarian'). I guess you might be saying monopoly over so-called 'enemy states', but like i said the US is touted as such a world-renowned and representative democracy, and most people don't even understand anything about 'enemy states'. Really, they are mostly just blindly nationalistic or bask in the high standard of living afforded them with a warm and convenient lack of awareness about what is going on in the world (retirement funds did well due to that war, that's nice). Or maybe they went into some detail but it was inside a carefully crafted investigation bubble, usually under threat of losing their job or burned as a witch for having the wrong thoughts about an evil enemy if they could influence someone. Anyway, in such a situation, i hope we can agree that the state should not just blindly exercise violence for a 'monopoly'. Who said so? How was the decision acted upon? To what end? How much violence? Which kinds are ok, which are off-limits? What effect is it gonna have on the people?
Ok, so wait we get to comment on this particular export situation. Great. Despite the fact that i've heard people say even well-reasoned and popular adversarial comments in these little situations do next to nothing, we are not addressing the overall problem. People need to know the big picture of how this will be used. Did we ever even agree to surveillance in the first place, do we agree with who we've been told our enemies are? Do we agree with violating peoples' rights? Whose? Why? You know... very basic questions that will never be asked. Request for comment, ya, ok. Thanks. Also, if there's a request for comment on something, all people need to be made aware of the request through some very well-publicized channel. Is there one? I don't think so. No one bothers building such a channel because building one would make their entire political and military racket less profitable, not to mention eat away at the neighborhood of make-believe.
> If said power becomes distributed to other parties, it becomes moot. How that power is wielded, and with what intent are separate discussions, but the power itself is core to the idea of government.
You have like some premise that all power constructed by the US is legitimate? I don't understand. We never agreed to wire up the whole world like this and it is pretty clear at this point that no one else did either. Ok. Now it's constructed, what to do? The proper answer is, i guess, to destroy the technology and not move forward, right? Since its construction and deployment were done in secret? If they didn't want to waste resources they should maybe not build illegal things that take lots of resources using stolen public funds?
> I feel like the idea that "The US government wants to have certain powers, and wants no one else to have them" is just baked into the fact that it is the US government, and should neither surprise nor alarm anyone who isn't an Anarchist.
I guess it just smells like fascism to me when you submit to this without like... actually thinking about the specifics of each power? Or letting them just spread illegal shit around the world? Maybe i misunderstand. This all seems weird.
I was specifically avoiding commenting on the legitimacy and use of said powers and instead addressing only what I saw was the core premise of OP: That the US's desire for abilities (maybe that's a better word to use so that we don't conflate legal power with practical power, that is what is possible vs what is allowed) that others do not have is somehow wrong or strange. A core function of all governments, and one that government doesn't exist without, is power. You can't claim jurisdiction without the ability to enforce it.
You kind of went off in the weeds and talked about a lot of things I wasn't saying anything about. The one point of yours I will respond to is this:
> I guess it just smells like fascism to me when you submit to this without like... actually thinking about the specifics of each power? Or letting them just spread illegal shit around the world? Maybe i misunderstand. This all seems weird.
Two things: First and foremost, I don't see how you drew a line from the U.S. preventing export of technologies that enable fascism to the U.S. 'spreading illegal shit'. Kind of a hard turn there, and not really what I, OP, or the article were commenting on.
Second, power is not equal to fascism. Personal freedoms rely on the power to defend them. You are not free to live if someone else is free to take your life. Even a loving and benevolent government requires the power to protect and care for its constituents. (Now, don't misread that, and think I'm saying the US government is loving and benevolent, I'm making a broader point), and that power needs to be highly asymmetrical or it simply doesn't serve its purpose.
> I was specifically avoiding commenting on the legitimacy and use of said powers and instead addressing only what I saw was the core premise of OP: That the US's desire for abilities (maybe that's a better word to use so that we don't conflate legal power with practical power, that is what is possible vs what is allowed) that others do not have is somehow wrong or strange. A core function of all governments, and one that government doesn't exist without, is power. You can't claim jurisdiction without the ability to enforce it. You kind of went off in the weeds and talked about a lot of things I wasn't saying anything about.
Ya, i wrote too much without getting enough clarification, sorry.
> The one point of yours I will respond to is this:
>> I guess it just smells like fascism to me when you submit to this without like... actually thinking about the specifics of each power? Or letting them just spread illegal shit around the world? Maybe i misunderstand. This all seems weird.
> Two things: First and foremost, I don't see how you drew a line from the U.S. preventing export of technologies that enable fascism to the U.S. 'spreading illegal shit'. Kind of a hard turn there, and not really what I, OP, or the article were commenting on.
So i didn't mean to draw the line you describe there, i should be clearer in what i write. If i understand what you are going for by adding your comment to this issue, you were kind of dismissing anyone being "surprised or alarmed", and you were maybe a little annoyed anyone was even talking about it. I am not personally really surprised, but i was suggesting alarm and discussion might be good when government decides to exercise power (and my weeds about this gov in particular and the historical events leading up to the issue at hand), because people should understand they should be a part of what power is exercised. I still don't know if i am misunderstanding, but it seemed you were kinda dismissing people being alarmed because this is kinda just what governments do -- "they do power, what's the alarm?". What i meant about fascism was not the line you suggested but the idea that someone might just gloss over the entire issue by saying 'governments do power, the government is posturing to enhance this power, who cares?'. I was suggesting that a bunch of people submitting to the state building power without involvement or question of the specifics of the powers being built is kind of fascistic.
> Second, power is not equal to fascism. Personal freedoms rely on the power to defend them. You are not free to live if someone else is free to take your life. Even a loving and benevolent government requires the power to protect and care for its constituents. (Now, don't misread that, and think I'm saying the US government is loving and benevolent, I'm making a broader point), and that power needs to be highly asymmetrical or it simply doesn't serve its purpose.
If one state gets to build asymmetric power, that means other states don't have it, right? So.. those states under your definition here can no longer be loving and benevolent because they can't have enough highly asymmetric power in the reverse direction if something bad happens? Am i simplifying too much or missing your point? How does this work out?
Is prematurely building asymmetry with technology really necessary as you say? What if there is symmetry in all countries, then someone does something bad, we all meet and see this person is behaving incorrectly, and then unite. No individual state has asymmetric power but asymmetric power was constructed and used as needed for this situation.
Am i still way off? What are you trying to get across with your comments here? Are you dismissing people being concerned or am i misunderstanding?
no, I agree with mrobot and I'm only commenting because you're framing my argument oddly and suggesting that I wouldn't agree with mrobot.
I think your commentary is tautological reasoning that says the US Government's use of force is legitimate because the point of the US Government is to use force. I agree with mrobot's framing that your line of reasoning is authoritarian and proto-fascist. The "illegal shit" thing feels like a sidebar and not really important to mrobot's argument (and also a bit left-field), so I'm ignoring it entirely.
Governments don't exist for the purposes of exercising force, they exist for things like paving the roads, building hospitals, etc; for the benefit of the commons. Greater specialization leads to greater productivity, and administering things like "how do we pave the roads" is a specialization that all of society benefits from that no one person or entity should bear the responsibility of paying for. Establishing the taxation and monetary structure by which those things are funded is a core function of the government, and a legitimate one in my opinion, as I would rather live in a society where I simply pay my taxes than a society in which I have to sit around and decide which roads projects to "invest" in. I would like to "hire" someone to do that, and I do so by voting for a Superintendent of Highways (or whatever it's called in your jurisdiction), who receives a pay from the government, which is funded in part by the taxes I pay. All of that seems totally legitimate and has nothing to do with the usage of force or the legitimization of state violence.
The use of force by governments is indefensible far more often than it is defensible, and a core function of the citizenry is to question and investigate its governments use of force, and to use their voting rights to eject those in the government that would perpetrate indefensible uses of force against innocents, both domestic and foreign. We should always be questioning the government's usage of force. We should never, ever stop questioning the government's usage of force.
There are some legitimate uses of force. For example, the Nazi government used force, that was bad. We can all probably agree on that. Other governments used force to oppose them. That was good. Again, probably not controversial. Clearly, not all governments' usage of force is equivalently legitimate. It appears that your position is that the US Government's usage of force is more legitimate than other governments' usage of force, because they're the US Government. The idea that the usage of force is more legitimate if it's the US Government than if it's a different government, which is how I interpret your position, is a position so obviously jingoistic that I am embarrassed to have to clarify that I think it's preposterous.
Now let's say the US government formed a red team to attack the electrical grid of, I dunno, Iran. And let's say the Iranian government thought "we sure would like to have a blue team, let's buy some blue team tools", and all the blue team tools were made by ... American companies. Would those American companies be barred from selling blue team tools to the Iranian government, which the Iranian government would use to defend themselves against American aggression? That, in effect, is my question.
If these tools are legitimate tools, then other governments should have the ability to use them. If these tools are not legitimate tools, then no government should use them; not the US Government alone. That later position is what I'm interpreting these rules to mean: that these tools are dangerous, and so China shouldn't have them, but we're good guys, so don't worry, we can handle them safely. The rules aren't "you can't make tools that can be used to violate people's rights", the rules are "you can make and sell and profit off of making tools that can be used to violate people's rights so long as the customers are the US government and its friends, but not if the customers are governments we don't like". If those tools are illegitimate and prone to abuse, then I don't want my own government having them either! The ruling I want is "Don't make PRISM", not "You can sell PRISM to us but not to China".
> The entire premise of government is that it assumes power over others.
No I don't think that's at all the entire premise of government. I think that statement is terrifying and authoritarian, and it is so terrifying that I originally didn't want to engage you. Since you've chosen to attempt to speak for me, I felt it only fitting to clarify my position for anyone that may make the mistake of taking my silence for agreement.
Of course, that's the whole point of export controls - to ensure that companies creating government-useful technology benefit your country and not other countries unless your government explicitly approves export.
I've tried to read the (currently unpublished) interim final rule to see what's been added but with all of the ECCN, country groups, and license exceptions cross-referenced it's practically incomprehensible: [PDF] https://public-inspection.federalregister.gov/2021-22774.pdf
No. ECC memory isn't "hardened" in the technical sense intended here; it's simply error-detecting.
What this primarily refers to is hardware which has been fabricated on an exotic semiconductor process (like silicon-on-insulator substrates) to resist radiation-induced upsets or latchup. This hardware is almost exclusively used in military and space applications; it's basically nonexistent in the consumer space.
So that perspective is largely a risk management failure. Yes, it's possible, and probably even likely, that at some point a rogue cosmic ray might flip a bit on your phone (or a bit in transit between a tower and your phone).
But it is exceedingly unlikely that such a bit flip will have a real-world impact beyond an unexpected error: an application crash or device reboot in the worst scenario, and most likely a temporary failure such as an image not loading or rendering, or a decryption operation that fails because the flipped bit causes the data to be treated as corrupt.
Is it something to be concerned about? Generally not - if you are in an environment that has sufficient radiation that it has a practical impact on your phone, one would hope that your actual concern is more heavily focused on protecting your physical self, and one would hope that your personal threat model increases in scope to justify spending on electronics that are more resilient if you have budget left over after buying protective gear.
Are there any means to detect that a cosmic ray caused such a crash without ECC memory? I recall that Dropbox had some issues early on with memory corruption due to how many low-end PCs ran Dropbox.
I recall coming across a more detailed write up a long time ago but still found mention of the issue [0]
> our clients rarely have ECC memory. We see a constant rate of memory corruption in the wild and end-to-end integrity verification always pays off.
To detect cosmic rays? I doubt it; detecting them would be possible, but pointless, since you couldn't reasonably act on the information - any such solution would probably need Heisenberg compensators, since the tools to detect the radiation would themselves interfere with the radiation you are trying to detect. It's probably better to just shield all the things, or add integrity checks everywhere.
To detect bitflipping errors? Yes. Use cryptographically secure algorithms and protocols that ensure that messages have integrity checks in transit and in memory.
To detect crashes? Probably not - if a bit flips in memory without hardware level error correction that reports the error, there isn't really a way to detect what caused the error.
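The "integrity checks everywhere" approach is simple to sketch. Here is a minimal illustration in Python — `store` and `verify` are invented helpers, and note a plain hash only catches *accidental* corruption like a cosmic-ray bit flip; a real system facing deliberate tampering would use an HMAC or authenticated encryption instead:

```python
import hashlib

def store(data: bytes) -> tuple[bytes, bytes]:
    """Keep a digest alongside the data, end-to-end style."""
    return data, hashlib.sha256(data).digest()

def verify(data: bytes, digest: bytes) -> bool:
    """Recompute and compare; any flipped bit changes the digest."""
    return hashlib.sha256(data).digest() == digest

payload, digest = store(b"hello, world")
print(verify(payload, digest))            # True: data is intact

# Simulate a single "cosmic ray" bit flip in the stored copy.
corrupted = bytearray(payload)
corrupted[3] ^= 0x10                      # flip one bit of one byte
print(verify(bytes(corrupted), digest))   # False: corruption detected
```

This is essentially what the Dropbox quote above means by end-to-end integrity verification: the check travels with the data, so corruption anywhere along the path is caught at the point of use.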
Because it's cool. Am I not allowed to want something because I think it's neat? I don't need night vision goggles either, but man, it would be cool to walk around in the woods at night with them on and stargaze.
What you meant to ask is, "Is telecommunications equipment using ECC memory controlled under 5A001?", and the answer is no, a.2 refers to rad-hard components.
The key words are "specifically hardened to ..." instead of something like "using any technology that might help with ...". Generally the CCLs never use vague wording like this.
The new controls are for "intrusion software" (e.g. malware) and "IP network communications surveillance systems or equipment."
There are specific definitions for those terms with technical specifications. Then there are licenses/exemptions that mean you don't have to seek a license if you are selling to nongovernment customers in certain (friendlier) countries. There's also larger exemptions in export controls related to commercial off the shelf equipment and fundamental research that would apply as well.
Generally the take away is that if you're selling malware, exploits, or network surveillance equipment, you might want to talk to an export control lawyer first.
Are there any nice tools for resolving cross references like this in a body of text?
I don't deal with legal documents enough (luckily) to have ever really needed this, but it would be a nice thing to know how to use if needed, or on creative document sets. Essentially I'm asking for something where I can import a set of machine-readable (or OCR'd) text, set a grammar for references in context, and then easily click through. If it's easy enough to extend the grammar I could probably link new things up as I go when new kinds of references pop up. Trying to get too smart about things like acronyms might be a step too far though; I want to be able to trust this tool completely.
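For the narrow case of ECCN-style references, even a single regex pass gets surprisingly far. A hypothetical sketch in Python — the pattern below is an assumption about the reference grammar, and a real tool would need many more forms (CFR parts, country groups, license exceptions) plus the click-through linking:

```python
import re

# Assumed grammar for ECCN-style references like "5A001.j"; real documents
# use many more citation forms than this single pattern covers.
ECCN = re.compile(r"\b\d[A-E]\d{3}(?:\.[a-z])?\b")

def extract_refs(text: str) -> list[str]:
    """Return each distinct ECCN reference in order of first appearance."""
    seen, refs = set(), []
    for match in ECCN.finditer(text):
        if match.group() not in seen:
            seen.add(match.group())
            refs.append(match.group())
    return refs

sample = ("License Exception ACE eligibility is added for 5E001.a "
          "(for 5A001.j, 5B001.a (for 5A001.j), 5D001.a (for 5A001.j)")
print(extract_refs(sample))   # ['5E001.a', '5A001.j', '5B001.a', '5D001.a']
```

Once references are extracted, linking them to their definitions is mostly an indexing problem, which is why a user-extensible grammar (as the comment asks for) seems like the right design.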
Fwiw for actual bills, they're all written in an XML format that makes this easy. I don't think any executive branch agencies use it though, unfortunately.
> Fwiw for actual bills, they're all written in an XML format that makes this easy. I don't think any executive branch agencies use it though, unfortunately
Not the same XML format (s), but regulatory material (both Federal Register and CFR) is also published in XML.
The Government Printing Office has a Bulk Data Repository with the machine readable (largely XML) forms of legislative, regulatory, and a bunch of other documents:
For those of you (like me) who weren't sure exactly how to interpret the rule based on the link above or the original PDF, I believe this Washington Post article from today also summarizes it:
Is it illegal in the US to sell zero-day exploits, or to package up zero-day exploits into nice usable tools? My understanding is that it is not illegal, but something like this perhaps give the US a tool to pursue and/or prosecute individuals that engage in these types of sales when they are selling to the 'wrong' customer (the 'right' customer being NSA or other US intelligence gathering operations).
Read the book This Is How They Tell Me the World Ends: The Cyberweapons Arms Race by Nicole Perlroth
There’s a few chapters in the beginning about the history of the exploit market. Haven’t finished it yet.
To my knowledge it's not illegal to sell vulnerabilities. If you're not a government contractor selling/contracting to the US government, it would be illegal to sell exploit chains or working software that uses the exploits/malware, what have you. The book touches on how they sold multiples of the same zero days to multiple agencies. It got to the point where one of the guys was like: you (3-letter agencies) need to talk to each other and stop wasting taxpayer money.
To my knowledge, it is illegal to sell exploit kits to actors that you know are going to use the kits to commit crimes (e.g. if someone sends you an email saying "I'm looking for an exploit kit so that I can attack company X and steal their IP", you cannot legally sell to them). It is otherwise legal to sell, rent, or give away exploits to the general public or to resellers like Zerodium as long as they are not marketed as criminal tools.
For sure, but in practical terms these types of dealings often have middle men and the end buyer is often not known (by design). Everything I know is from podcasts and books, so I'm not an authority on the subject - though I would point out that the enormous amount of red tape in the west tends to be something that westerners seem to project onto the rest of the world. In much of the world, things are just far more loose.
In practice, what is happening here is that people in certain governments are trying to corner the intelligence/surveillance market. To get the capabilities you'll have to go through them, or at least their anointed vendors. So, just like F35 planes and MRAPs, you can get them if you interface with the right interests.
I'm trying to dig through this to understand what it means, but I am far from an expert on regulations or legalese. I'm looking forward to any breakdowns and explanations/annotations of the passages in this article and rule. If anyone has any, please share them in the replies.
I'm a regulatory lawyer (but I have no experience with export controls) and I can't decipher the rule either. I actually wonder how anyone is able to confidently draft and revise such a long document with so many complex cross-references:
> License Exception ACE eligibility is added for 5E001.a (for 5A001.j, 5B001.a (for 5A001.j), 5D001.a (for 5A001.j), or 5D001.c (for 5A001.j or 5B001.a (for 5A001.j)). License Exception STA conditions is revised to remove eligibility for 5E001.a (for 5A001.j, 5B001.a (for 5A001.j), 5D001.a (for 5A001.j), or 5D001.c (for 5A001.j or 5B001.a (for 5A001.j)) to destinations listed in Country Groups A:5 and A:6 (See Supplement No. 1 to part 740 of the EAR for Country Groups). License Exception TSR is revised to remove eligibility for “technology” classified under ECCN 5E001.a for 5A001.j, 5B001.a (for 5A001.j), ECCN 5D001.a (for 5A001.j), or 5D001.c (for 5A001.j or 5B001.a (for 5A001.j)).
It's like a logic puzzle.
Edit: Looking at this random paragraph again, it seems they're missing a few closing parens, so maybe the answer to how they confidently draft and revise these documents is... they don't.
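The mismatch can actually be checked mechanically. Here's a minimal sketch that just nets open against close parens over the first sentence of the quoted paragraph (it ignores where the imbalance occurs; a real checker would track positions):

```python
def paren_balance(text):
    """Return open-minus-close parenthesis count; 0 means balanced."""
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
    return depth

# First sentence of the quoted EAR paragraph, verbatim.
excerpt = ("License Exception ACE eligibility is added for 5E001.a "
           "(for 5A001.j, 5B001.a (for 5A001.j), 5D001.a (for 5A001.j), "
           "or 5D001.c (for 5A001.j or 5B001.a (for 5A001.j)).")

print(paren_balance(excerpt))  # → 1, i.e. one "(" is never closed
```

Five opens, four closes: the sentence really is missing a closing paren.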
I can’t help but wonder if encryption export controls will be slipped into this mess. Seems like a good place to hide them, but I don’t have time to trudge through this at the moment.
I doubt it. The link says it’s consistent with Wassenaar Arrangement (WA) negotiations, the international export-control agreement that is quite well harmonized across many nations. The WA has a lot of restrictions on encryption, but a huge carve-out for most items that exempts encryption on commercially available devices.
Are you saying we're way past the point where encryption could be restricted from export in the U.S.? Because encryption exports are controlled and when I first started programming they were completely illegal. Every once in a while new legislation is proposed to make these exports illegal again, usually to "save the children".
Based on other comments here, I'll assume there is no hidden agenda on encryption here but a document this messy is probably hiding "stuff" (on purpose or not).
With AES widely available in free code, adding export controls today wouldn't seem to do much damage to symmetric crypto at least.
Maybe post-quantum schemes could be affected, but it's only a question of time until people agree on a standard, and if that one gets exported and doesn't get broken, controlling crypto exports won't prevent anyone from using secure ciphers.
Encryption is restricted from export in the US. I've had to submit forms to do things as trivial as buying microcontrollers from TI which happened to have AES instructions.
No idea why I can go into a store and buy an infinitely more powerful Intel laptop without a form, though.
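For what it's worth, the "AES instructions" in question are a visible CPU feature: on Linux/x86 they show up as the `aes` flag in /proc/cpuinfo, on essentially every Intel or AMD laptop sold in the last decade. A small sketch of checking for it (the sample string is an illustrative excerpt, not a real dump):

```python
def has_aes_flag(cpuinfo_text):
    """Check a /proc/cpuinfo dump (Linux/x86) for the AES-NI feature flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Flags are a space-separated list after the colon.
            return "aes" in line.split(":", 1)[1].split()
    return False

sample = "processor\t: 0\nflags\t\t: fpu vme aes sse2 avx2\n"
print(has_aes_flag(sample))  # → True
```

On a real machine you'd pass in `open("/proc/cpuinfo").read()`; the point is that the hardware being export-controlled on a microcontroller is a commodity feature on store-shelf laptops.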
I find it deeply ironic that an authority is using its powers to implement regulation (goodness or badness aside) which claims to "help ensure that U.S. companies are not fueling authoritarian practices".
Somehow I doubt this will lead to myself being any less surveilled... but maybe I'm just being cynical. I want power to the people! But we are all just so damn stupid these days.
We may dispute the wisdom of the decisions, but there are certainly strict export controls on military equipment. For example, exporting nuclear submarines to Australia is big news because it's a major exception. F-22 fighter planes cannot be exported to anyone, by law.
They are granted to countries where the US considers that helping the regime benefits US interests, and forbidden to countries where helping the regime does not. Stuff that makes blowing people up more effective and digital surveillance tools rightly fall in the same category.
Did a US government site just slap me with a modal popup offering to snatch my email address off me? Before I had a chance to see anything on the site?
I absentmindedly closed it immediately at first, and had to delete the site's cookies to check if I saw that right.
I hope it will also push other countries like Israel and Switzerland to follow suit, as the US did in other areas like the wars on drugs, copyright piracy, and tax evasion.
Yet they will allow Chinese routers that require an app on your phone to use and where you can't turn off the cloud functionality. Looking at you TP-Link.
Yeah, because NSO Group has taught us that we can't trust governments not to abuse these tools. For example, Mexico's use of Pegasus, which in turn ended up in the cartels' hands.
Yes, it's Israel that needs to do this, perhaps much more than the US, because it's the Israeli companies' tools that have found their way into the hands of people surveilling protesters across the world. I'm sure US companies have nefarious technical hacking tools too, but why do all those reports list Israel? Let's help stop these kinds of tools worldwide.
I sometimes wonder if we should go one step further and help get as many Starlink dishes as possible into China/Russia/Iran/Cuba/NK (the worst offenders in terms of censorship and human rights violations) to finally give their populations access to real, free information instead of whatever heavily censored network the local regime allows.
People in China/Russia/Iran can use proxy services to bypass censorship. Such services are much cheaper than Starlink connections and far easier to set up and maintain.
Starlink satellites route through ground stations which are subject to local controls [0].
Communication between satellites and the earth are governed by international treaties [1]. Every country controls radio spectrum use in their borders [2]. Starlink must obtain spectrum licenses and comply with local laws. If Starlink were to route traffic to ground stations outside of the country to evade local controls, the country would simply revoke their spectrum license. If Starlink decided to operate without a license, the US government would be forced to either stop them or break numerous international treaties.
I doubt that helping people circumvent censorship will have a long-term positive impact. Censorship is a symptom of bad government, not a cause. For example, both the United States and Israel have low censorship. Yet, according to [3, 4, 5 + 6], the United States and Israel would be included in a list of "the worst offenders in terms of censorship and human rights violations". Also, the UK and Singapore have strong censorship [7, 8] yet commit few human rights violations nowadays.
China could always shoot down some satellites if Starlink allowed access in China without approval. Yes, there are thousands of Starlink satellites, but I doubt SpaceX wants to get into that kind of fight.
Yeah, it's a battle that China may not be able to win, but the United States government continuing to permit SpaceX to deploy satellites that are used to circumvent Chinese policy would be a clear act of aggression, especially if China had taken action to assert control over its domestic communications.
I mean, it's entirely possible that some elements of the US government might prefer that, but generally speaking, I don't think that's a winning strategy for SpaceX. I think that even the most ardent Musk fans should anticipate how quickly tolerance for Musk's cavalier attitude dries up once missiles start flying.
I'm guessing since they built a national data hoovering apparatus that can already surveil the entire world, they want all their surveillance technology (including Google and Facebook) to remain under national control.
Right. They're certainly not going to ban Google and Facebook from operating abroad, as a naive reading might suggest. The rules look so vague to me that it looks like just another way to justify the ruling US prez banning whatever he dislikes on a whim.
If I want to order from a webshop that relies on googleapis.com or uses reCAPTCHA, how much choice do I realistically have? How aware of web bugs (Facebook and Twitter logos, for example) do you think the average Internet user is?
You have plenty of choice. Your choice might have consequences but you still have the choice. Most users are fully aware of how tracked the free internet is.
That reminds me of some earlier definitions of rape where the victim is required to have fought against the rapist. Otherwise, they had “chosen” to yield to the perpetrator.
No, it's not. That doesn't follow ANY definition of surveillance. Just because you replied to my comment and gave me your username doesn't mean I'm performing surveillance against you. That's silly, and that is exactly how your definition would work.