30k U.S. organizations newly hacked via holes in Microsoft Exchange Server (krebsonsecurity.com)
1038 points by picture on March 5, 2021 | 437 comments



We've got some information on the timeline (and a name) on one of the major exploits here:

https://proxylogon.com/

Some of the detail on where this is a mess -

The relevant security update is only offered for the latest (-1) Cumulative Update for Exchange. So you can open Windows Update and it will say "fully updated and secured", but you're not. Complicating matters, Cumulative Updates for Exchange 2019 have to be done from the licensing portal, with a valid logon.

So maybe you have a perfectly capable 24x7 tech team, but the guy who manages license acquisition is on leave today. This is how you may basically find yourself resorting to piracy to get this patched.


Reminiscent of Cisco IOS patches being stuck behind support contracts - and inaccessible to many until they pony up.


It's been a while since I've had to deal with Cisco IOS, but IIRC they were always good about releasing security fixes to anyone upon a TAC request.

For used devices off support contract, security incidents were a great opportunity to get free updates.


IIRC you needed a contract login to open TAC cases.

I always just got copies of the .bins from friends who worked at places that had contracts. They didn't gate updates at that time by which model you bought, once you had access you could get firmware for anything Cisco.


Phone or email would work too, no contract required.

> As a special customer service, and to improve the overall security of the Internet, Cisco may offer customers free software updates to address high-severity security problems. The decision to provide free software updates is made on a case-by-case basis. Refer to the Cisco security publication for details. Free software updates will typically be limited to Critical and High severity Cisco Security Advisories.

> If Cisco has offered a free software update to address a specific issue, noncontract customers who are eligible for the update may obtain it by contacting the Cisco TAC using any of the means described in the General Security-Related Queries section of this document.

https://tools.cisco.com/security/center/resources/security_v...


Can confirm this works; I did this 3 times in the past. Twice no questions asked; the 3rd time I first got a message that I didn't have a support contract, but got it after linking them the release notes of the firmware in question, where it said Cisco would provide it for free.


Cisco support at the Enterprise level, expensive or not, is the best in the industry. Their support organization is literally moving to GCP because they know what to do; GCP doesn't know that yet.


I can confirm that on personal devices with no support contract. I contacted them asking for an update image due to published vulnerabilities and they sent it over.


> the guy who manages license acquisition is on leave today

At that point, if you really have no other options, you pull the network plug. Or firewall it to internal-only. Email can wait for a day. And the nice thing about the protocol is that it will all get re-sent automatically.


Try telling the whole company email can wait a day. Good luck!


this! i cannot fathom any executive choosing the shutdown of email services over some risk that something might happen.


Huh, you mean literally all email will be exfiltrated to some Chinese actor? I’m fairly confident most will find that unacceptable.


you and i might. but for most businesses the prospect of losing money/contracts because an email was read by no one is probably much more real than losing because too many people read it.


Haven't there been incarcerations due to gross negligence like that?


No? Name one.

(Managers are very rarely jailed for anything except the most deliberate fraud; even negligence that gets people killed is routinely not punished at all)


I don't know any which is why I am asking. It seems like there should be well-known victims of being made an example of.


Why? We don't usually hold victims accountable for crime, and hardly anyone understands computer crime anyway.

There have been a couple of big GDPR fines for customer data breach, but obviously those are made against the company and not individuals.


Was Aaron Swartz the victim or the perpetrator? They set out to make an example out of him. They psychologically tortured him until his heart broke and he killed himself. They were extremely effective in terrorizing us into submission and now no one dares confront our masters again, lest you end up like poor Aaron. No one even speaks his name and we all know his story. And this was only an academic institution, so imagine what these completely inhuman actors behind tax havens and so on do to suppress those they prey upon.

It's a punishment system, not a justice system.


How is this relevant?


It replies to this: "Why? We don't usually hold victims accountable for crime, and hardly anyone understands computer crime anyway."

Is there a specific thing you want me to clarify?


Aaron was accused of a variety of charges, because he'd committed the “crime” of intending to violate corporate copyright on other people's academic papers – but the law was blatantly immoral and the prosecution even more so.

He was not a victim of cybercrime; he was the “perpetrator” of something that was retroactively classed as cybercrime. I know you're upset, and angry – we all are – but that just isn't relevant to this discussion.


I said: "It seems like there should be well-known victims of being made an example of."

"Victim of being made an example of."

I never said he was the victim of a cybercrime. And it IS relevant, because it makes unavoidably clear what I wrote and what someone didn't read with proper care: the financial sector doesn't have this experience. They don't have their own Aaron Swartz.


Ah. I thought that was a case of “I a word”, where the missed word was cybercrime.


Negligence results in civil trials, not criminal penalties (unless you manage email for the DoD or other government entities).


This depends on the sender's mailserver caching the mail for a day (or a full weekend) without rejecting it. Some mailservers will bounce mail much sooner.


Such servers are then not compliant with the standard of 4-5 days. See RFC 5321 sec 4.5.4.1.

Are non-standard retry intervals actually that common?
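For a rough sense of what the RFC implies, here's a back-of-the-envelope sketch (Python). The flat 30-minute interval is an assumption for illustration — real MTAs typically back off, and RFC 5321 only gives SHOULD-level guidance on intervals and the 4-5 day cutoff:

```python
from datetime import timedelta

# Illustrative values loosely based on RFC 5321 sec 4.5.4.1:
# retry roughly every 30 minutes, give up after ~5 days.
RETRY_INTERVAL = timedelta(minutes=30)
GIVE_UP_AFTER = timedelta(days=5)

def retry_times(interval=RETRY_INTERVAL, give_up=GIVE_UP_AFTER):
    """Yield the elapsed time of each delivery attempt until the cutoff."""
    elapsed = timedelta(0)
    while elapsed < give_up:
        yield elapsed
        elapsed += interval

attempts = list(retry_times())
# With a flat 30-minute interval and a 5-day cutoff, a compliant
# sender makes 240 attempts before bouncing the message.
print(len(attempts))  # 240
```

Plenty of time to patch over a weekend, in other words — as long as the sending side actually queues that long.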


But exchange is much more than e-mail, is it not?


Calendar and contacts


Shouldn’t you have, or should I say don’t most orgs have, a spam filter or some other GW in front of Exchange that actually accepts the mail publicly? And then that gateway will send internally to the actual Exchange? This is what I’ve seen in a few orgs.


I don't think email proxies are built to cache an entire org's mail messages for that long.


They don't follow the RFC if they don't. The email protocol is resilient to servers not being reachable for a few days. Humans forgot that, though, and came to think that email is like Slack.


Okay, I see what you and parent say, and I might be misunderstanding the setup of common email proxies.


Huh, this appears to be a change they made for 2019... the downloads for 2016 CUs, including the latest ones, are available publicly: https://www.microsoft.com/en-us/download/details.aspx?id=102...


Yeah, this is not the case for 2013/2016, although Exchange CUs are a full installer you can run a fresh install from. Unusual for Microsoft software updates in that respect, I believe, and it kind of makes sense that it would require a valid license to download.

Clearly it is less customer-friendly than 2016, but then Microsoft do REALLY want that sweet recurring subscription for Office 365 (or is it Microsoft 365 now?). Can't make it too easy to host your own Exchange server these days...


> So maybe you have a perfectly capable 24x7 tech team,

OK

> but the guy who manages license acquisition is on leave today.

Then I wouldn't have "the" guy for anything.


You are right. I think the people downvoting you just misunderstood the point you were making: In a “perfectly capable 24/7 tech team”, you should not depend on a single individual for anything.


Unfortunately a few comments here have homed in on one contrived example of why I think this strategy is broken. To give another contrived example: I personally had a logon to this portal, but it broke last year when they integrated logons with Azure and it took me something like three months to get it fixed.

The fact that a critical security update can't just be downloaded is bad. I don't care if someone in sales thinks every licensed user should probably be able to get it. NCC produced a list of "valid" files here to help people scan for files that aren't legit. Except they don't have Exchange 2019 CU 8, because they couldn't get it:

https://github.com/nccgroup/Cyber-Defence/tree/master/Intell...

Microsoft has a hard limit (5?) on the number of individual accounts you can grant access and in a big enough org it's still plausible they'll be scattered across the world and you'll find none of them available the exact hour you need this update.


In real life, however, this kind of thing happens all the time. Someone forgot to write down the login when they left and no-one caught it in the offboarding. Or someone set up 2FA on a system but didn't put that info into 1Password / the wiki, etc.


The failure mode of clever? It's "asshole." -- John Scalzi

I know you’re trying to save the original comment, but that comment can legitimately be taken the way the downvoters are taking it: that the commenter believes that guy should be fired for being away from his phone. Why legitimately? Because I’ve worked with people like that.


It's not the guy that's guilty, but you, for having such bad organization that one guy under the bus sinks the entire ship.


I thought it was Microsoft that was guilty for hiding a security update in the licensing portal!


MS sucks, but a manager can't change them. What you can do is to have good processes, knowledge sharing, and maybe even the balls to take down the entire mail service if it is exposing it to a hack and you need time to patch.


Why do people still use MS products in 2021. Where are developers coming from that actively want to learn which hoops to jump through to be allowed the privilege to briefly use the hardware they rent out to Microsoft?


You mean sysadmins, not developers right? :)


I think that's what they're saying - that the problem is that there is a single "the guy".


I get what they're saying, but in the majority of organisations the person who spends their entire day reading Adobe's EULA and counting Oracle client installations isn't considered the sort of role that anyone has ever thought needed redundancy or full-time availability, until this thing broke out.


I’m pretty sure he’s saying that guy is the problem for not being available and should be fired. The way it’s worded, it’s not clear who has the correct interpretation.


Then it should've been "wouldn't have a guy for anything". The issue with funny quips is that if you don't do them correctly ambiguity reigns supreme.


"You took a vacation the same day that a zero day dropped. You're fired"

Yeah uh, I don't think I wanna work for you then


I think their point is that "the guy for a specific task" can't exist on a team that actually is "24x7 perfectly capable".


Pretty sure the poster is talking about having one single point of failure for all license acquisition in the first place, not about firing the single point of failure.


If your organisation has key person dependencies that’s a problem in itself.


Not all organizations can afford to double up in everything.


If you cannot afford to double something mission-critical up, then you at least should have a backup plan for what to do in case something critical happens while that one person is unavailable. If you cannot come up with a plan, consider whether you should be running this.


Since when are licenses mission critical? ( OP said the update is only available via the licensing portal)


Since you need the licensing portal for critical updates at the latest.


If your 3rd party license expires and takes down your whole system then it is mission critical.


Agreed, but I’m curious how often this actually works out as it should.


That's fine, but that means you don't have a perfectly capable 24/7 tech team.


Yes it does. 24/7 means all the time for any reasonable given scenario. If someone taking leave means you can’t fulfil a function properly then you’re not 24/7 even if you are perfectly capable.


Exactly.


Bigger picture, what's the endgame here? It seems a lot of institutions handling sensitive work are considering air-gapping some or all of their networks at this point. Maybe that's even what has to happen.

Is there a means of fending off these attacks on the political front? If this same level of espionage was happening in person, there would be a kinetic response but it seems everyone is happy to just turn the other cheek.

These attacks have a very real impact. Copying others homework is a tried and true way to get a technological edge and in practical terms, it means a lot of research and development money is effectively wasted as it doesn't generate any returns.

Mind, I don't think there should be a violent response, but it's odd that even the threat of sanctions isn't made whenever this happens.


> endgame

If you mean the strategy as the end nears, it should be what it should always have been: trust no single product or supplier, implement multiple layers of defence for what is important. Maintain in-house expertise.

If you mean the "Lessons (never) Learned"... Train developers better, build better software through validation and verification, train management to understand technology and risk. Humans become increasingly incompetent as complexity scales.

Everyone is doing espionage, no one is going to war because Microsoft has flaws.


I’m curious to hear more about cases of large institutions seriously considering air-gapping. This is the first I’ve got wind of something like that.


Yeah, runs contrary to my perception too. Even things that one would reasonably expect to be air-gapped are online these days.


Air-gapped systems really only make sense for the occasional need to access exceptionally sensitive materials, e.g. private keys for root CAs.

For most businesses, air-gapping would mean we are back in the 20th century of business with filing cabinets and armies of people pushing paper between 2 rooms.


It's not actually that bad. There's a lot of defense, security, and highly proprietary development that happens on isolated networks. You have to put significant effort into IT infrastructure but you'll end up with all your stuff hosted internally and most tools support custom package repo mirrors (linux distros, programming languages/build systems, docker). You'll also probably have a second system with internet access at your desk if not nearby for stackoverflow et al.

Basically the idea is defense in depth. The valuable stuff (design files, schematics, code, documentation) lives in the air gapped network while communications live inside a VPN and detailed technical discussion is often discouraged.


Air-gapping is common in some industries, and there are also network diodes: https://en.wikipedia.org/wiki/Unidirectional_network


Keep in mind, there's actual air-gapping, and there's secure enclaves. This specific attack would have no teeth if your Exchange server / OWA endpoint were only accessible from corporate VPN. You don't have to be one of the top-ten biggest corporations to run a global-scale intranet with off-the-shelf VPN servers, and it still greatly reduces your attack surface.
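As an illustration of the enclave idea, the check a VPN-facing firewall or reverse proxy effectively performs — "is this client coming from inside?" — can be sketched like this (Python; the CIDR ranges are placeholder RFC 1918 space, not anyone's real topology, and in practice you'd enforce this at the network layer, not in application code):

```python
import ipaddress

# Hypothetical internal/VPN ranges; a real deployment would use its
# own allocations and enforce this in firewall rules.
INTERNAL_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(addr: str) -> bool:
    """Return True if addr falls inside one of the trusted ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

print(is_internal("10.1.2.3"))     # a VPN client -> True
print(is_internal("203.0.113.7"))  # the public internet -> False
```

An OWA endpoint gated this way is still vulnerable to an insider or a compromised VPN client, but the drive-by mass scanning that hit these 30k orgs never reaches it.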


For us, that only makes sense for backups. ...maybe private keys?


Not true. Countries accept that they spy on each other. They all do it; it's just that America are the "good guys" and its enemies don't do press conferences on how they got hacked. Also, we already have copyright and patents, so no, you can't copy-paste an iPhone.


>Copying others homework is a tried and true way to get a technological edge

The Soviets were better at spying than the West was, but their being better at copying the West than the West was at copying them didn't seem to help them all that much.


Lots of comments on the security arms race, but I'm curious about the geopolitical end game. What will Russia and China do with this information? Technological advancement is a means to an end. What is the end?


It’s likely that whatever the Chinese or Russians are doing to the US, the US has bigger and better exploits gathering intelligence within adversary networks. Being too aggressive about these would undermine the US position when they are eventually discovered. The US must have some fantastic assets if they are putting up so little fuss about solarwinds and this attack.


Pure unfounded speculation. The Russians and Chinese have a huge advantage over American intelligence agencies just by the simple fact that there are far more English-speaking Russians/Chinese than there are Russian-speaking Americans. Massive information asymmetry. How many native English speakers speak fluent Russian? Less than a few hundred in the entire world (I'm a Russian academic, I know). How many native Russian speakers speak fluent English? Hundreds of thousands of people. That's the reason why the Russian government is able to run massive projects that directly influence American public opinion through social media. America simply doesn't have that volume of talent and infiltration into foreign societies.

> The US must have some fantastic assets if they are putting up so little fuss about solarwinds and this attack.

Actually they are putting up so little fuss because they are incompetent and castrated since the last administration.


>How many native English speakers speak fluent Russian? Less than a few hundred in the entire world

I think this is one of the most ridiculous things I've ever read on HN, if not anywhere on the Internet. There are a few hundred native English speakers who are ethnically Russian/Ukrainian and speak fluent Russian in any one small neighborhood in a mid-sized city in the US, and there are dozens of such neighborhoods in the US, and the US is only 5% of the world's population. I personally know about 50 people who meet this description; I was at a Greek Orthodox christening with them last year! Not to mention that you can hire non-native English speakers who can read Russian, not to mention the new world of translation apps.


I'm not talking about ethnic Russians/Ukrainians. I used "native English speakers" as a codeword for "real" Americans, i.e. people who have deep ethnic and historic loyalty to the American cause. You cannot rely on most ethnic Russians/Ukrainians in America to have blind loyalty to an American cause.


There are more than 100 Americans who have been trained in Russian at the Defense Language Institute. Looks like Russian is a 48-week course:

https://www.dliflc.edu/about/languages-at-dliflc/


The number who are actually fluent and culturally embedded is minuscule. I personally know some graduates of this program.


Just the number of native bilingual Russian-English speakers is in the thousands at least. Think of everyone who immigrated to the US after the fall of the USSR.


It's more about impunity. If your previous actions didn't cause any serious reaction, you will continue doing more bad things. Tolerance to bad things is destructive.


> the US has bigger and better exploits

No, I honestly don't think so.


+1 Insightful.

Hardware backdoors most probably.


We are seriously looking at strategies for clean room rebuild of our IT infrastructure, potentially on a recurring basis via automation.

Obviously, you can't mitigate 0-day exploits in any situation where reasonable/expected network access is possible. But our concern, despite not being directly impacted by this, is that we may have accumulated malware over the past decade+ that has never been discovered. How many exploits exist in the wild which have never been documented or even noticed? Do we think it's at least one?

The thinking we are getting into is: if we nuke-from-orbit and then reseed from trusted backups on a recurring basis, any malware that gets installed via some side channel would not be able to persist for as long as it traditionally would. Keeping backups pure via deterministic cryptographic schemes is far easier than running 100+ security suites across your IT stack in hopes you find something naughty. It is incredibly hard for malware to hide in a well-normalized SQL database without stored procedures or other programmatic features.

What if we built a new IT stack that was designed to be obliterated and reconstructed every 24 hours with latest patch builds each time? Surely many businesses could tolerate 1-2 hours of downtime overnight. It certainly works for the stock market. There really isn't a reason you need to give an attacker a well-managed private island to hide on for 10+ years at a time.
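The "keeping backups pure via deterministic cryptographic schemes" part can be as simple as a SHA-256 manifest taken of a known-good tree and diffed later. A minimal sketch (Python; assumes the manifest itself is stored somewhere the attacker can't reach, which is the hard part in practice):

```python
import hashlib
from pathlib import Path

def build_manifest(root: Path) -> dict:
    """Record a SHA-256 digest for every file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def drifted(root: Path, manifest: dict) -> list:
    """Paths that changed, vanished, or appeared since the manifest was taken."""
    current = build_manifest(root)
    return sorted(
        path
        for path in set(manifest) | set(current)
        if manifest.get(path) != current.get(path)
    )
```

Anything `drifted` reports on the "trusted" backup before a reseed is either a legitimate change you can account for or a reason not to reseed from it.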


I’ve thought a lot about this. I think from a tech standpoint and a security standpoint, my ideal approach would be to rotate out an A team and a B team. Every 2-3 years, the teams switch off. So year 1-3 A team is running the environment. B team is completely rebuilding and re-architecting the organization’s IT. The company is migrated to B teams infrastructure for 3 years.

A team gets to re-build while B team is running and the cycle repeats. This has a few advantages, it keeps the org very current with tools and technology, everyone stays sharp on the latest tech, nothing is sacred, and teams get experience across the spectrum of design build implement and run. It also has good Disaster recovery properties if you idle the old environment so that you can fall back if some critical failure occurs in the new environment.

This would be expensive, but please poke holes. I like your idea of clean rebuilds and can see a path to it with automation / terraform / cloud resources. And you don’t need the downtime if you stand up the second one in parallel and just fail over. There’s still persistent data that needs to carry through, so you’d need to figure out how to separate your persistent data from the elements that reset.


I think the need to maintain multiple teams may not be as urgent if you constrain the timeline.

The biggest requirement I see is automation. For this to be feasible in a general sense, a single method invocation has to complete in 1 hour what those teams are doing in 2-3 years.

The biggest challenge that will emerge from trying to meet this objective is the import/export of data to/from these now-highly-ephemeral IT systems. The ability to easily import pure business data back into a fresh instance of the system will likely constrain the vendor & product choices as well.

Very soon, you might find yourself building a 100% custom vertical to support these objectives explicitly. I think this is ultimately inevitable and desirable though. We just need to learn how to build these things quickly & reliably.


Yeah I see it as two items - total architecture rebuilds and redesigns for an organization’s IT system vs. blowing away enterprise resources each night and restoring with a known good application each day or week.

That would be amazing but incredibly complex. Each week I guess you would run a script to re-build your architecture in AWS with the latest builds and patches. Then run a config script to re-import all your data.

It would be painful to figure out, but you could essentially store a copy of your data at another AWS location and fail over within a day or two just given your two install scripts (the architecture build out and then the config script to read in the data), Depending how often and on how many systems you did this on, you’d basically make attackers restart every night or week. And ideally you’re patching as quickly as possible, so it might block some of them out quickly.


Since you asked us to poke holes:

1) turnover 2) skillset

Some people are amazing at the architect/build side of things while either sucking at or hating the run side, and vice versa. Mismatched skill sets lead to higher turnover, which makes running an A/B team routine even harder.


Fair points. I would say if it is done well that turnover would go down. I’d think re-architecting from scratch every 4-6 years would be extremely engaging and keep the role interesting. Or it would be extremely tiring and lead to burnout. Not saying the architects would need to run the application for 3 years - during the three years of run for their cycle they could determine issues with their architecture to fix for next time, work with the other architecture team to make improvements for the next cycle, and perform research for their next design.

I think the main drawback is cost - it essentially doubles the cost of staffing for the organization’s IT. I guess there is core functionality that could be shared and stay consistent.


Not the person you’re talking to, but I want to check my inference.

Maybe I misunderstood, but I’m trying to recap your point:

The teams are on a 3 year production deployment cycle —

0.5y - design next gen system

1.5y - implement next gen system

0.5y - deploy next gen system

3.0y - production (primary, solo, backup; 1y each)

0.5y - decommission

Is that what you had in mind?

I think what a lot of people aren’t seeing is what it looks like with multiple cycles overlapping:

You begin architecture design on gen3 1.5y after deploying gen1.

The coding team rolls smoothly from implementing gen1, deploying gen1, and running gen1 into implementing gen3, deploying gen3, and running gen3. (Assuming minimal coders for the backup phase.) It even works out roughly for promotion cycles: an SDE 1 at the start of implementation for gen1 manages a service as an SDE 2 (2yr experience) and can get promoted to SDE 3 part way through gen3 in time for them to design gen5 (having seen two implement-to-maintain cycles).

On the production side, operations are continuous: your 3 years of production overlap with the other team by 1, making the entire cycle for a single team 4 years in length. Your production crew spends their entire time on a commission-operation-decommission loop. There’s no downtime: they go straight from decommissioning gen1 to commissioning gen3.

Expense is the negative: each team needs a full set of architects, coders, and operators.

But nuclear submarines have two teams for a reason, so I think there’s certain domains where operating two full development teams in lockstep like this makes sense.

I think it would help a lot with “legacy” bloat: to have upgrade cycles be a fact of the business structure.


I wrote this in another comment, but I should have been more clear. Since this is the discussion about the Exchange hack, I was thinking in the context of large organizations and their internal IT architecture and being able to build from scratch without legacy bloat.

The challenge is “how do we run this organization as efficiently and securely as possible? What tools does the business need in place to get the job done? Is our current set sufficient?”.

The fact that any company hands a new employee a Windows 7 laptop in 2021 shouldn’t be happening, but a surprising number of Fortune 500 companies are in that state because of legacy dependencies that require Win 7 to operate. I think the ability to give an organization the opportunity to reset every 3 years would keep things efficient, better integrated, and identify legacy issues that often come up and cause emergencies (the guy who wrote that script left the company 5 years ago and it runs on a server under this desk... we just don’t touch it).

Right now, upgrading a payment system may be difficult due to certain dependencies or other legacy internal systems. If the whole architecture is being re-done, there is a lot more flexibility.


Sounds like an excellent strategy for resume padding.


This doubles the cost of implementing anything. Say a customer wants feature X. Unless you're magically at the point in your 2-3 year cycle where you're switching, both the A and B team need to implement the feature. Of course, that's assuming you don't just tell the customer to stuff it and wait 2-3 years.

You're also assuming that you know ahead of time all use cases and interfaces. It's surprising how dependencies are taken. I've seen large scale systems break when a HTTP 204 was changed to a HTTP 206, or a base36 field changed to base62. Now again maybe you're thinking the consumer can stuff it and update everything whenever you decide to switch over, or that you'll have captured everything and have tests around it. But.. for any sufficiently complex system with a sufficiently large customer base everything about your interface becomes your customer contract. Changing everything all at once is going to break a ton of things nobody ever thought about.

Doing upgrades every 2-3 years means you're pretty much never going to be good at them. Institutional knowledge seems to have a 2-3 year memory horizon. Sure, you get that one person who is a bit of an archeologist/historian but tenure at most shops is not long ("The median number of years wage and salaried employees stayed with their current employer in 2018 was 4.2 years" - first hit on Google). While you're upgrading every 3 years, each team only does so every 6 years. Nobody is gonna remember what it looked like.

There's also a meta point, which is what are you actually trying to solve? Is it so hard to go from architecture A.v0 -> A.v1 -> architecture B that you need to build A, maintain A and simultaneously build B? If moving between architectures is so hard but moving between versions of an architecture isn't - why is that the case and why can't you make the former case easier?

I'm assuming that your plan has you upgrading the A-architecture within those 2-3 years. Maybe you're saying you wouldn't touch it at all and just hope there are no security issues or features or scaling you need to do.

There's also another point which is you've coupled all changes to a particular cadence. Maybe you want to upgrade your network, servers, storage systems, OS, application services, etc on different cycles. At the very least you're sorta hoping that all of those things have similar release cycles, which realistically you're going to be picking some network switch that's been out for 2 years and marrying it to a storage product that was released last month (because the previous one is 5 years old and will be out of support before your next refresh).

And scaling... what happens when you can't get the same server you were ordering 2 years ago? Tell users they can't have nice things until the other team rolls out their massive platform shift in a year? Or would you adopt a new platform to scale on, in which case, why are you doing this A and B team thing again?

And not only do you need two teams, but you need two sets of hardware, which means you need twice as much datacenter space, etc etc. Do folks need two desk phones when you roll that out?

And ... I'm gonna stop here...


This is a great comment and thanks for the feedback.

I should have clarified the context and my experience. I was thinking this is a process for dealing with legacy bloat and mostly internal IT systems (IT Architecture) in mostly stable Fortune 500 size companies that are already operating at scale.

From what I’ve seen, big shifts are often a one-time “transformation” with lock-in to a service. In cloud it’s Azure or AWS or GCP. Or companies are stuck on legacy Exchange and can’t move to O365 without a major initiative. Or there is no viable path to move from Microsoft to Google.

These things only occur with great pain, and resources aren’t often provided to reconsider alternatives and to stay current. I picked three years because things tend to operate at that pace at large organizations. It’s probably a faster upgrade cycle than where most of those companies are today.

It would be interesting to go back to the drawing board with the business lines to develop tech internally to better support them. Lots of stuff is just operating on terribly outdated systems. There is some lock-in (e.g. we’re going to use O365 for our office products for the next 3 years), but it would increase bargaining power because your org could actually migrate away.

For a lot of applications I agree with what you are saying - pick a good architecture and stick with it. And I don’t think there would be a need to change the way the company works for the sake of change, but I’ve seen enough big shifts that it makes me think a total redesign of an organization’s architecture every few years (or at least considering it) would be useful. Right now a big advantage to startups is that they can design much more efficient IT models than most legacy large corps.

I know if I could start from scratch I’d do a lot of things very differently and could show major cost, efficiency, and security improvements. So the idea would be to take a team who knows the company, break them off and say “build an architecture for the organization that will go live in 3 years” - take the best of the current environment and tool set, integrate new tech and security, and we will start moving users to the environment in 3 years. Then you get to run that for 3 years while the other team does the same thing.

You’re right on turnover point.

I think the whole goal of this would be to never go more than 3 years without seriously considering alternatives for major systems (ERP, HR, Security tools) while giving the chance to have it all be integrated and put into place as a cohesive design.


We use netboot for most desktop computers and servers that are mostly stateless. Any changes are temporary ending up on a dedicated temp partition that gets wiped on boot, or in ram.

Rebuilds are mostly automatic. Of course, netboot in itself opens new attack vectors where we're in early stages of exploring different approaches, even the painful secure boot crap. Honestly I think most of the security in our case right now comes from being an obscure in-house solution that you'd need to specifically target. Also, in case you do get pwned, a post mortem becomes mostly impossible since once you reboot a machine, everything is gone except stuff on network shares of course.


>What if we built a new IT stack that was designed to be obliterated and reconstructed every 24 hours with latest patch builds each time?

Inevitably an update is going to break something. So even if you can automate all of that, how can you make sure it doesn't break something? This requirement isn't just the automation and technology gathering, it's testing too. It seems to me like you'd need a lot more benefits to make this worth the time/money/effort. You'd probably be better off having 2 networks for employees: 1 for public internet and 1 for internal company stuff. I think the intelligence community has something like that?


Could the same principles from application development apply? Given sufficient coverage, running unit and integration tests for each "infrastructure build artifact" could help to provide assurance.

(and if your infrastructure service provider(s) don't have suitable test coverage they can offer, perhaps it's time for a conversation with them about that)
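As a minimal sketch of what one such "infrastructure build artifact" check might look like (the version strings and gating policy here are hypothetical, not from any real provider or product):

```python
# Sketch of a per-build patch-level assertion, the kind of unit test that
# could gate a nightly rebuild. Build numbers below are made up.

def parse_build(version: str) -> tuple:
    """Turn a dotted build string like '15.2.792.10' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(reported: str, required: str) -> bool:
    """True if the artifact's reported build is at least the required one."""
    return parse_build(reported) >= parse_build(required)

# In a real pipeline, `reported` would come from the freshly built image
# (e.g. querying the service's version endpoint); here it is hard-coded.
assert meets_minimum("15.2.792.10", "15.2.792.5")
assert not meets_minimum("15.2.721.2", "15.2.792.5")
```

Checks like this don't prove the update broke nothing, but they do turn "are we actually patched?" into something a pipeline can fail loudly on.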


When you're deploying from backup every day, rollbacks are easy.


I remember this kind of thing happening all the time in the 90s and part of the 00s... It's just 10 to 1000 times worse nowadays since EVERYTHING is online now.


Practice. All those folks are still alive and now there’s more of them. They’ve all been practicing too.


Former US CISO Chris Krebs says this is a bigger deal than what's been reported so far.

This is a crazy huge hack. The numbers I've heard dwarf what's reported here & by my brother from another mother (@briankrebs).

https://twitter.com/C_C_Krebs/status/1368004401705717768


Chris Kerbs was definitely not the US CISO. He was the director of CISA, the Cybersecurity and Infrastructure Security Agency. CISO of the US is usually a meaningless figurehead, Krebs actually did things.


His last name is "Krebs".


Feel free to read the last sentence in my post.


Yeah, I just had an awkward conversation with a relative who works for a company that has an on-site email server running Exchange. When I asked him whether he had patched or upgraded it, he said no, Microsoft does all that. Grim.


[flagged]


I mean organisations with their own Exchange Server are just organisations that aren’t on Microsoft 365 yet. Which is basically hosted Exchange.

It’s turtles all the way down.


Unfortunately "moving to Office 365" for many organisations doesn't get rid of Exchange. Microsoft's article on "how and when" is basically a list of reasons you might be stuck with it.

https://docs.microsoft.com/en-us/exchange/decommission-on-pr...


> However, we have put little effort into how to get you from a hybrid configuration to the cloud only.

It's hilarious to see someone at Microsoft say the quiet part out loud.

Next thing you know they'll admit in writing that they have no plans for supporting Azure AD tenant to tenant trusts. Or, for that matter, tenant to tenant migrations as well...

I mean, think about it: Who would want that? Nobody with a KPI of on-prem to cloud migrations at Microsoft headquarters, certainly!


Yep, getting 365 means you still end up hosting your own stuff, also paying Microsoft to host copies of it, and basically doubling your attack surface.


Even if you move to O365/Exchange Online, you’ll likely always have some Exchange footprint. The only way to get around this is to migrate your AD to Azure.


“But at bottom, is Perl script”


Just like the rest of the universe.

https://xkcd.com/224/


Meh, this is actually great publicity for O365.


Just like after the Experian hack, Experian ramped up their commercials for their paid Identity Theft Protection service. I was seeing their commercials every hour.

https://www.experian.com/consumer-products/identity-theft-an...


The huge data breach a couple years ago was Equifax, not Experian. It does not seem particularly hypocritical for Experian to try to capitalize on it.


which one is your favorite alternative?


Postfix.

But that's only an MTA, I hear you cry; Exchange does both MTA & MDA! Bear with me.

Postfix is software to learn from. It might be written in C, but the architecture is the epitome of beautiful modular design. It's not just the meticulous separation of concerns and the care and attention to detail; everything from string handling to memory management is pristinely done. https://github.com/vdukhovni/postfix

Even at runtime the beauty of the architecture allows for a sysadmin to choose (via master.cf) exactly how the components should be composed to fit their needs. The defaults are crafted for minimum fuss if you just need to get it running ASAP. The software is ergonomic in addition to being artfully crafted.
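For a concrete sense of what that composition looks like, here's a trimmed fragment in the stock master.cf format (this is an illustrative excerpt, not a complete file; exact defaults, especially the chroot column, vary by Postfix release and distribution):

```
# service   type  private unpriv  chroot  wakeup  maxproc command
smtp        inet  n       -       y       -       -       smtpd
pickup      unix  n       -       y       60      1       pickup
cleanup     unix  n       -       y       -       0       cleanup
qmgr        unix  n       -       n       300     1       qmgr
smtp        unix  -       -       y       -       -       smtp
```

Each row is a separate single-purpose daemon (smtpd accepts mail, pickup collects local submissions, cleanup canonicalizes, qmgr schedules delivery, the smtp client delivers outbound), each running with the least privilege it needs. Constraining or swapping a component is a one-line change.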

So what does all this care and attention get you? Only 9 CVEs in 22 years, only 3 of which are code exec, only 2 of which are (maybe) remote code exec, only 1 of which is unauth user RCE - but very hard in practice to exploit.

Maybe it’s just not that popular? It was 1/3 of all SMTP servers on the internet according to a 2019 scan.

So it's the best MTA ever to exist, but what about MDA? Well, that was the whole point. Compose well-crafted components together to build a system. You especially don't run part of your mailserver's web interface in kernel space because, well, I'm not sure why IIS/Exchange does that :-)


So Postfix does about 1/10th of what Exchange does, and is secure. Very well, do one thing and do it well.

You talk about composing it with other stuff to create a system, but fail to mention if that system will still be more secure than Exchange. Even if each component of the system is individually very secure, that still doesn't tell you much about the security of the system. It's extremely easy to piece together two secure components and obtain 0 security.

Edit: accidentally said 'not secure' instead of 'secure' in first statement, completely changing the meaning. Corrected in-place.


Except this bug is an SSRF in the Exchange web interface, so the MTA is equivalently safe to Postfix. You could compose Exchange's MTA with another MDA and get exactly the same security posture. Except with Exchange, which is actually a good MTA.


wow if you remove features from software it becomes more secure? nice!


Minimization of surface area is a key security principle.

And, generally, security hates complexity.


...yet imagine telling the CEO you intend to turn off OWA because it is 'more surface area'.

Features are what people pay for


Why is being written in C bad? Lots of great enterprise software is C/C++.


Actually I don't think there is much C going on in enterprise software anymore. Java and C# replaced it a long time ago.


How is it worse now? It looks to me that it's better now since SAAS companies today just patch their products on their end, and even this situation is better than needing physical media as in the past if the patch is too big.


He told you. Everything is online.


I believe my argument addresses that everything being online doesn't necessarily worsen security from hacking.


Isn't that scenario an extreme contingency?


The United States Government should actively be trying to protect its businesses. They should create a three letter organization to do so. They should call it the National Security something or another.


That name is already taken by the department of hodling cyberattacks. They should have a National Vulnerability Agency that handles it.


Or maybe let's revisit the charter of the NSA and make some major tweaks.


Why do people keep thinking that putting the military in charge of civilian cyber defense is in any way a good idea??


Why do people keep thinking that putting the military in charge of civilian cyber defense is in any way a good idea??

The NSA has some military staff members but it is a civilian agency


The NSA has title 10 authorities. It's part of the DoD, with a chain of command running through SecDef. It is commanded by active-duty flag officers.


At the end of the day, there is no distinction. "Civilian" is strictly a nice-to-have concept.


So you feel like America should just move their entire police force under the military and forego any civilian-esque facade?

Probably not. And that's also why cyber law enforcement and national cyber defense should be two separate entities.


Step 1.) Let marijuana users be employed so you can attract talent.

Step 2.) Pay above market rate for talent, even import it from Israel or other friendly nation states. We need a Wernher von Braun style approach to recruitment.

Step 3.) ???? Profit ????


Also: hire people who don't live in the DC beltway.

This goes for . . . well, how about most federal IT-related jobs.


The US intelligence community is very successful attracting extremely talented folks.

It's funny that you mention Israel. They would be one of the worst "allies" to partner with. Jonathan Pollard was just given a hero's welcome. https://en.m.wikipedia.org/wiki/Jonathan_Pollard


Step 1 sounds personal


It sounds like it, and it might be, but testing for weed really does impact recruiting. I know of a very large US firm that has quietly stopped including it in their drug test over the past 3-4 years in part because of that.

(I wouldn’t be surprised if folks start to push testing as a HIPAA issue.)


That's not how HIPAA works.


I'm no expert but it could place you in a situation where you are faced with divulging a legal* medical prescription as a condition of keeping your job.


That has nothing to do with HIPAA, your employer is in almost all cases not a covered entity. People frequently believe that they have a right not to have to disclose health information to their employer; this is true in certain cases but by and large is false. There are explicit provisions in HIPAA saying that requiring doctors notes is legal, for example.


Also, intelligence agency requests are exempt from HIPAA. They can ask your provider for whatever they want, and your provider can give it to them with no penalty.

The few I asked seemed to feel they were _required_ to provide information if simply asked without a court order backing up the request. And they made it seem like I was the crazy one for asking that they agree to only provide my data if legally compelled. I ended up doing direct billing out of principle.


Good info! Thank you for clearing that up for me.


It does if you can claim it’s medical use only. Or at least it reasonably could be interpreted that way.


HIPAA applies exclusively to covered entities. Your employer is, in most cases, not a covered entity. If you want evidence of this (assuming you're in the United States), go look at the information required by your employer's FMLA disclosure form.

Hint, the I in HIPAA stands for insurance. If insurance isn't involved, HIPAA probably isn't either.

https://www.hhs.gov/hipaa/for-individuals/employers-health-i...



I've been saying this for years. The government should actively be hacking corporations, state, and local governments. Then disclosing the vulnerability privately to these organizations. This levels up our offensive capabilities while securing us at the same time.


National Security Theater?


"This is the real deal," tweeted Christopher Krebs, the former CISA director. "If your organization runs an OWA server exposed to the internet, assume compromise between 02/26-03/03."


Chris acknowledged Brian as his "brother from another mother." :-) I was wondering...


Wow. Patching (or using cloud mail providers) would have mitigated the risk for this one...and many others in the past (and the future). The cleanup from this is big for those who were hit.

Launching attacks during major news events surely also helped the attackers stay under the radar for longer.


The cloud angle is interesting; on one hand, it creates an even-more-centralized single point of failure. On the other hand, given that virtually every computing system out there is a house of cards, letting the experts focus on securing (and updating!) just a single one might be the best defense.


The cloud providers can afford to hire and train elite teams to handle security. I remember seeing a post about a guy trying to break out of the docker container used by Cloud SQL on GCP, and apparently the GCP admins made it known that he was being watched pretty early on. I believe the issue was patched fairly quickly too.

It's possible that <Random F500 Co> has a great security team. But it's also possible that <Other F500 Co> doesn't.


Really what we need is the ability to self-host reasonably secure systems without a team of experts working round the clock... but that doesn't appear to be the hand we've been dealt


I might be biased because I work at AWS, but I really doubt that there are enough sys admins that know what they are doing and keep up to date let alone find vulnerabilities in the software they use to protect all companies. A Fortune 500 maybe, but at some point you simply can't afford someone who knows what he's doing and at that point you might as well have everything in the cloud so you can focus on your actual money making business.


Yes, that's how things are now. My point was that things shouldn't be that way, and I see no reason why they always had to be that way, though at the same time that's where we are and there isn't a clear path out of it.


Is the "cloud" with armies of above-average developers, SREs/ sysadmins/ systems engineers and security specialists really the solution or is the solution to actually sit down and make simpler systems that a few skilled people can fit in their heads and actually understand?


Clearly the latter, but unfortunately the economic incentives favor centralization and that in turn pretty much nixes the chances of significant resources being allocated to create those solutions. In fact, there are significant incentives to keep such solutions out of the marketplace entirely.

It's quite funny in a way: regular mail worked for two hundred or so years without too much in terms of trouble, ok, we had some spam but that was about it. And now mail delivery has become so complicated that the mere act of accepting mail can lead to your corporate secrets being made public or lifted without your knowledge.


Actually, you see Google hiring ever more people. My colleague has worked for Google and said on multiple occasions (it is even on video) you can barely get anything done using the systems and procedures they have in place. The infrastructure/ tools for anything they have programmed there was very high maintenance and almost begins to crumble under its own weight. It is almost like an oil tanker dragging its anchor - it is still rolling, but it sure isn't a sustainable development if what you want is efficient movement forward.


That describes my employer; 15+ year old SaaS vendor.


Physical mail could easily be stolen though.

But you can still send it if you want that kind of security. There are trade-offs galore, but obviously the cheapness and convenience of email seems to have won out over security concerns.


But it's like with electoral ballots. Paper mail is hard to copy without the carrier finding out, and has to be intercepted by a local.

Email can be copied and sent anywhere without the operator's knowledge, from anywhere with internet, if they break into the mail daemon.


The attack vector changes but it doesn't go away. Corp B could just bribe an overworked and underpaid mail-room worker at Corp A to make some copies of sensitive-looking info before they deliver it upstairs. Even today, who is to say that this doesn't happen with some overworked and underpaid sysadmins? Or secretaries with their bosses' email account passwords? I wouldn't be surprised if bribes for emails happen pretty frequently in the business world even today.


Yes, but in actual fact the chances are much higher that such a person turns out to be ethical and will report the attempted breach.

In the digital world there is no such sentry.


> Is the "cloud" with armies of above-average developers, SREs/ sysadmins/ systems engineers and security specialists really the solution or is the solution

Also, at some point the cloud provider may figure out that they can increase profitability by hiring more and more below-average people and just market them as world-class.


This is true but every IT outsourcing company is decades ahead of them. Doing that will hurt the cloud provider’s business while most other F500 CIOs will get a bonus for saving money as long as they leave before an incident too blatant to blame on anything else.


Those outsourcing companies have a reputation for providing cheap low skill workers.

What I described is a situation of basically converting reputation into cash. Once you're known for having "armies of above-average developers" and then cut back on employee quality, it's going to take a long time for the market to figure it out (and you can probably extend that time significantly with slick marketing). In the meantime, your profit margins are increased.


Again, I’m not saying it’s impossible — only that, say, AWS or GCP have a lot more at risk cutting into a core market pitch and failing to deliver a promised service – and that’s entirely on them to deliver.

In part, this is the different service model: if I go to AWS and buy, say, S3 they have a very clear responsibility not to lose your data and to serve it quickly. If my CIO picks one of the bargain basement outsourcers and the centralized storage service fails badly, each different group will be saying that the failure wasn’t due to them but the company management, outsourced project management, the contractors who set it up/operate/monitor/secure, vendor products, vendor professional staff, Microsoft, etc. Since truckloads of cash will have been spent by then, many of those parties only care if it’ll reach the point of a lawsuit and everyone in the approval chain who didn’t say it was troubled before has an incentive to say the failure was unforeseeable and the solution is not to hold anyone accountable.


Interesting idea, but I don’t think that’s a gamble any of the current clouds would make. The business is by nature a long game, so a few years of increased profit followed by <danger> doesn’t seem like it’d be enticing to a cloud.

Besides, it’s not really true today that clouds only employ “above average” developers. I mean hell, they employed me!


It is already happening. If the most productive and smart people at your company are hindered by processes designed to basically hold them back, they are going to leave no matter how much you pay them.


"Simpler" is often at odds with "more features" here though. And while I anticipate the "I only need a handful of features" argument, great swaths of users feel differently.

When you proceed to the logical end of enforcing simplicity to achieve security, you get OpenBSD. That's great for certain applications, but I think we can agree it doesn't check a lot of boxes for contemporary feature set demands.

My point being, achieving that is way harder than it sounds.


That is exactly my point. You should be quite picky about the tools you rely on as a business or individual. Most users in one company, if you ask them, are fine using about 10 features with a great overlap, so maybe 20 features overall. That will have very large overlap with other companies in the industry as well. You have a few employees, that use like 50 features all the time. You know them by name as a sysadmin usually :-) and you tend to those people often and maybe even become friends. Those people will do just fine using a very professional/ complicated tool because they will invest time to learn using it. The other people would be overwhelmed. The solution is to use different tools for those two groups.

Speaking of OpenBSD, that might actually be a better OS for most stuff on the shop floor in companies that I have seen from the inside, where Windows is used almost exclusively. The plus being, nobody can really mess around with it. There is usually exactly one app that needs to run 24/7/365 with occasional opportunity to update e.g. during a maintenance window and that's it, anything that causes the app to close is lost time on the shop floor. OpenBSD being minimal is a large plus here.


In an ideal world the latter would be best. However, in practice systems tend to get complicated over time as they evolve and more features etc. are added. I think one way to look at it is that, as a business, outsourcing the non-differentiated heavy lifting to another entity that has more expertise in it lets you focus on your core products. In this specific example, why is using a cloud email provider any different from deciding to use power from the power grid instead of generating your own electricity?


In a healthy company, I don't think you can outsource any work on the core product without making the end result worse. Even things like translations need lots of back and forth and the feedback loop needs to be really tight. It has to be clear to the user in the culture what you want to say and the things are very subtle at times, like the eye icon in the password field to show the password. In some cultures, that might be associated with something creepy or negative maybe, but you just don't know. Show or hide may be twice as long as in the languages you know which breaks your assumptions and makes the slick UI less slick or the error messages less clear in that particular setting.


Most IT services should definitely be outsourced.

Let’s say you are a big airline company, there is absolutely not a single reason you should manage your email system. Your job is to fly airplanes not to manage some goddam emails.

The really fun part in that is that most of the big airlines actually outsourced some key part of their core job (IT wise I mean), like how they manage seats and load, this kind of stuff, while keeping some absolute non-core IT services internal, like an internal Exchange system with dozens and dozens of people to manage it.


The first one, 100%.

The state where big companies are make the second option impossible. That may be unfortunate, I don’t know, but that’s really where we’re at.

There is absolutely no way to cure big companies from all the shit they have accumulated. For them, the actual restart is to go to Cloud. Hopefully they will not go simply bare metal, because then they can recreate the exact same shit but in the Cloud.


You're right, but I fear we have built up an ecosystem of talent that sees the world in one of two very binary ways.

One camp assumes if you don't expose it to the internet, and keep it on-prem, it's secure. Think exchange server on-prem (but let's overlook the gaping internet exposed parts - they don't see those, they see the fact it runs in their office).

On the other hand, it's public cloud, hosted service, rely on a big company with the resources (but accept loss of tenant isolation when something big goes badly wrong, and hope the cloud host has the skills to mitigate and detect issues).

We need more secure systems, but if they're publicly exposed then you'll require that team of experts around the clock simply to detect a potential compromise. Something I see a lot of confusion around is knowing when something is compromised. Responding is then "easy" in comparison, but they don't know what they should be looking for. With complex exposed services (mixed user and management plane over HTTPS, email interfaces for multiple protocols with different versions and authentication mechanisms), the likelihood of serious compromise tends towards 1.

Better hardening services would help to get some way towards the world you describe, but that has to filter through the whole supply chain and ecosystem - no, you shouldn't be able to manage the exchange server from outside, nor should any such interfaces be exposed. No, the exchange service shouldn't execute aspx code from folders on the local filesystem that can be modified other than through a privileged updater service.


Yes, what we need is reasonably secure systems.

But what we pay for is features.


Eventually security becomes a feature


I remember reading this. When the Dev did it a second time there was a txt file on the host (container? Can't remember) saying "Hey this is cool, we're about to patch this, thanks for letting us know".


Yeah, here's the blog post you're thinking of, from August 2020:

https://offensi.com/2020/08/18/how-to-contact-google-sre-dro...

And the HN thread:

https://news.ycombinator.com/item?id=24216009


Riiiight, because cloud sw can't have 0 days.


The argument is that cloud software has a better chance of remediating its 0-days than countless sysadmins/help-desk admins do.


That is only accurate if they can provide a meaningful defense against expected attacks otherwise all you are doing is creating a single central target. Unfortunately, the cloud providers can not mount even a token defense against an attacker funding an attack at the $100M level, so I see no reason to assume they can defend against credible threats to a single Fortune 500 company given that they can not even stop an attack with such a meager amount of resources allocated to it relative to the size of a Fortune 500 company. That is not to say that the teams in a Fortune 500 company are any better, merely that everybody is completely inadequate.

By consolidating targets when you can not even reach the level to protect a single one you are making the situation worse, not better by consolidating. For it to make any real amount of sense they would first need to demonstrate an ability to prevent attacks at least in the correct order of magnitude and then demonstrate that they can scale up without creating correlated risk. Only then does it make any sense to actually centralize on a single solution, let alone a single provider.


I mean... they prevented this one in the cloud version.

I'm not advocating for a single provider, and I'm not necessarily advocating for cloud hosting as a solution, I'm just pointing out that in this case the cloud fared better than practically all of the self-hosted systems


The "cloud version" is an entirely different piece of software.

It's not the same OWA that one hosts on-premises. That one's still vulnerable even if it's hosted "in the cloud".

On a different note, if they could prevent this in "the cloud version", why couldn't they -- why didn't they -- prevent it in "the non-cloud version"?


You already said the answer to your own question; it's different and therefore it's not the same.


Maybe. These were vulnerabilities in every version of Exchange Server for 8 years since Exchange Server 2013 that were only detected because they were already being actively exploited. Unless Microsoft has two distinct Exchange solutions, one for customers to self-host and one for themselves to host on behalf of their customers, there is no reason to believe, despite their claims to the contrary, that they were not similarly vulnerable until the exploit was discovered and patched internally. That would mean their system fared at most slightly better than an average self-hosted system which should not really inspire any confidence in their ability.

Even if we assume that they did create two independent systems, there is no reason to assume that two products developed by the same company in tandem serving the same fundamental, lucrative use case should have material differences in quality/process. That there were multiple trivially exploitable, catastrophically effective vulnerabilities that were unknown for 8 years and that Microsoft never discovered themselves (they discovered it by realizing somebody else discovered it and was using it) should indicate that their cloud product is equally atrocious even if we assume that these were distinct products and thus would not be affected by the exact same bugs.

In conclusion, as you say virtually every computing system out there is a house of cards, so there is no reason to assume that consolidating on the cloud and letting one of those groups of people focus will result in anything other than more houses of cards, except in this case being used on an even juicier target.


That’s not how to look at this.

The point is not if the Cloud can defend against a very sophisticated attack, the point is whether they can at least do a better job than what those big companies are doing.

And the answer is really easy: Fortune 500 are at the Stone Age of security (among a lot of other computer science topics) so of course the Cloud is doing better. It’s not even the same world or the same order of magnitude.

And the abyss will become bigger and bigger because it’s becoming more complex. There is no way a Fortune 500 company can keep up with the complexity of what AWS, Google or Azure is dealing with, and the new tech world we live in. And it’s also quite stupid, that’s not your job nor where you will be making money. Just concentrate on the app/code that is indeed your core job, on top of solid and proven Cloud services.

Also, you talk about centralisation and the issue of a single provider, well, here’s the actual joke: the level of centralization and concentration is way, way bigger internally than if it was on the Cloud. Most of those Fortune 500 companies have only a few datacenters. Although they are international, some even have datacenters only in their local region of origin, with zero region/local hub of some sort, as crazy as it may sound.

And most of those Fortune 500 companies have only one provider for each of their key components.

If they were on the Cloud (and they will be, eventually), reversibility and transferability are almost « built-in », because it is an actual feature, or because everything is far more standardized, or just because in moving to the Cloud you think from the start about how to move back or to a different provider. In any case it is much, much better than the state they're in.


I think this was the conclusion from the Sony hack (2014- wow nearly 7 years already). People were scared of cloud security but Sony showed that on prem isn't any better.


Cloud providers are also more likely to have true off site backups in place. Your vanilla SMB running an exchange server on a pc in the closet doesn’t.


“Put all your eggs in one basket and then watch that basket.” - Andrew Carnegie


imagine if aws, gcp, or azure went down for one day.


That could never happen barring an unlikely critical event such as a nuclear world war. Any of them being fully down is nearly impossible - remember, we're talking about autonomous regions across hundreds of DCs across the globe.


Or a week ...


The proper mitigation would be actually using much simpler, better quality software. Microsoft Exchange Server is quite famous for being an attack vector on corporate networks. At my previous job, the company was advised (by a very capable and expensive security consulting company) to keep Exchange as separate as possible from the corporate network - this of course is a bit counterintuitive when you want to use e.g. Single Sign-On, contacts and more, typically with Active Directory (AD). Thankfully my job wasn't to administer or develop any solutions for AD or Exchange, so I just took a note.

Obviously, no engineer can have even a sufficient overview of the full Exchange Server implementation, let alone a full understanding. In such a situation, security, quality and user (or admin, for that matter) experience always take a big hit. It doesn't help that Exchange Server is most likely developed using programming languages and approaches that more or less demand complecting the solution with OOP-related ceremony. Supporting two decades or more of legacy features and protocols doesn't help either. Some companies even want to connect AD and Exchange to SharePoint... which is at least as complex as Exchange.

The problem companies don't understand is that you have to work on simplifying, which is very hard - much harder than adding features. If you don't, the interactions between components will overwhelm even the largest and best-skilled team on the planet. The result is that we see breaches and security issues like this every day, and realistically, nobody who can decide anything in the corporate environment gives a f** anymore, because nobody pays the more or less laughable fines with their own money and nobody really goes to jail - but the user data is lost and people's lives are shattered.


I find this comment extremely unhelpful.

There's a reason why everyone uses Microsoft Exchange, despite its myriad flaws, and the flaws of its major client, Outlook.

And it's because it offers so much functionality, precisely because it is so much more complicated.

It's like saying you can secure your house if you build a 20ft wall round it with no gate.

Sure you can, but it becomes pretty useless.


I don’t think that’s true at all. Exchange is awful. It’s slow, hard to configure and doesn’t offer anything you can’t do better with simpler tools.

Like the majority of awful “enterprise” products on the market, the primary reason that it’s popular is because it’s from a megacorp who speaks the language of the buyers, who are all aspiring megacorps. I was horrified the first time I used exchange and couldn’t wait to change providers the moment I had the chance.

So it’s more like saying you can secure your house if you use a security service who sets security targets instead of sales targets.


Exchange ... doesn’t offer anything you can’t do better with simpler tools.

I call maximum shenanigans on this. Exchange is a fully-integrated groupware suite with a single-pane-of-glass on both the management and the user side. I am aware of precisely zero feature-complete alternatives, let alone anything "better".


...with a huge army of engineers who can do basic admin jobs on it because of the AD integration...with a full suite of structured training programmes to bring up more of those engineers and keep them current.


I suppose if, as has been my experience, “feature complete” actually means “whatever features ship with Exchange” then you’re right. But isn’t that the tautology of enterprise software? Vendors build a moat of features which end up in enterprise RFPs and ultimately lock out other vendors - not because the features are necessary or even useful, but just because they exist and, perhaps, some department head thinks it might be useful one day - or, more likely in my experience, the consulting firm involved in procurement gets a cut of the action. Having been on both sides of this process, I know exactly how it works in practice.

So I’m sure that for some huge enterprises, the complete feature set from Exchange is actually necessary, or at least desirable. But for everyone else - including many companies I’ve worked in at a senior level, and almost certainly many of the victims of this vulnerability - I’d call shenanigans right back at you.


Feature completeness doesn't mean the software is better. I think many of those affected companies, or the poor people there, certainly wished for at least a moment that they didn't use Exchange.


It’s pretty bad, but it’s probably the best of the available tools at the time. Serious competitors like Groupwise and Lotus were kind of nightmares. Open source alternatives offered great individual components, but not the integrated solution you got with Exchange/Outlook/Sharepoint.

Sometimes being the least worst option is all it takes.


I think the point is that you can provide a lot of functionality by using back-end APIs to communicate to servers in different trust zones rather than having a big ball of trust - especially an internet facing big ball of trust.

And you are right, loose coupling does rule out a very small set of functionality. For example, an email sent to a user might contain an smb: link, and Outlook used to preview the email, automatically loading all the links, which would cause your credentials to be sent to the smb:// server just by previewing the email, thereby allowing a malicious attacker to steal password hashes simply by sending emails to victims (no click was needed).

So that would be an example of excessively tight integration and a design philosophy that was fast and loose with shipping both credentials and executables across the network. I think we have learned from those lessons.

In terms of why it is dominant today, it is because of fairly rational C level decisions, not users clamoring for it as opposed to some generic email/calendaring solution. Microsoft still knows how to do support, there is a large pool of cheap IT admins certified to work on it, and it allows you to run your own server instead of buying a service from gsuite. Really if Google could shed their disdain for human beings and learn to think of them as customers, they could take a lot of market share away from Exchange, because right now it is a trade off of security versus support - the functionality is basically the same.


If you don't want outbound SMB, don't allow outbound SMB. Your bad firewall policy isn't Exchange's fault. Bonus: blocking outbound SMB also blocks the myriad other vectors for this same issue.


Gsuite doesn't need to match Exchange in functionality. It needs to match AD + Exchange + Teams + OneDrive + SharePoint.

Gsuite email doesn't even have good support for things like delegated access to shared mailboxes, treating them more like a distribution group. On Outlook they appear by magic on your sidebar.

Source: I am currently migrating some acquired users from gsuite to 365


It is unhelpful to your business if you get hacked and your customers lose trust in your ability to keep confidential data safe. The daily toil of using Outlook and Exchange is also substantial.

You conflate functionality and complexity. If you think about it for a minute, complexity actually hinders functionality. There is some intrinsic minimal complexity to useful features of a software system for it to be functional. Exchange could be way more useful, if it wasn't so complicated and it could be a lot easier to keep somewhat secure.

Exchange in many circumstances feels more like a bank's vault, except instead of a steel door it has a wooden one with the cheapest padlock you can buy and a sign saying "we come here once a year to check everything is in order" - where real banks usually work a bit differently... There are many cases where an attacker gained access to the complete Active Directory through Exchange. At least so I was told by a company that did the consulting afterwards to clean up the mess.


"access to active directory" is granted to every user account in a domain (how do you think address lookups work?) and isn't nearly as scary as it sounds.


The default installation of Exchange 2013 and 2016 makes changes to security descriptors in Active Directory that can make privilege escalation attacks easier. Presumably this is what the parent is referring to, rather than just "plain old" user access to Active Directory.


I know. It really is quite scary if you have actually been through a for-real security audit. We wouldn't be creating admin workstations if the security story with AD were so great - hint: it isn't. Exchange must communicate with AD with much higher permissions than most users have. It really is scary how many barriers get crossed just so anything you would expect, like contacts, works.


I've been a part of many for-real security audits. This is largely incorrect scare-mongering. Separate admin workstations are symptomatic of a bad security posture and check-box security, they are absolutely not necessitated by nebulous concerns about the directory being readable.

Of course a server process which is designed to modify (among other things) group memberships needs different permissions than a user, why would that not be the case...?

If you don't like it being highly privileged, don't grant it the permissions. Or hire someone who can.


"It doesn't help Exchange Server is most likely developed using programming languages and approaches that more or less demand complecting the solution with OOP-related ceremony."

This statement certainly doesn't help the credibility of your comment.


Cause you could never have exploits in functional...ever.


Exchange's product architecture was absolutely to blame for this. In particular, the "/ECP" directory should never have been allowed to be Internet-accessible. (I believe the upcoming version finally rectifies that in a "supported" way.) In general, though, Microsoft hasn't focused enough on making Exchange more compartmentalized. The servers' privileges in Active Directory are too high (though this is supposedly being addressed in the upcoming version too).


Thank you for the insight.

Certainly, "in the upcoming version" is a bit late for those affected and most of those other Exchange-related hacks in the past. The thinking around Exchange is still more or less left in the 20th century and it shows.


>"It doesn't help Exchange Server is most likely developed using programming languages and approaches that more or less demand complecting the solution with OOP-related ceremony"

What on Earth does OOP have to do with the quality / security of Exchange? This reads like someone is on a crusade.


Well, to be frank, I kind of am. OOP really mostly obscures an implementation and is often taught almost religiously as "the one true way". In the end, Exchange is a stellar example of an obscured and therefore unfathomable implementation.

You should really watch "Simple Made Easy" by Rich Hickey and think really hard about it. If you don't come to the conclusion that most software development could be far more sustainable in the long run if we used simpler tools and approaches, instead of complecting everything, especially with questionable OOP ballast, then maybe we have very different experiences.


>"OOP really mostly obscures an implementation "

I see nothing wrong with OOP. It is convenient for many things. It is not a silver bullet though. Nothing is. Personally I do not adhere to any concept / programming paradigm. They're just tools. I use many. Depends on what I am doing.

Generally, one person can take a tool and put it to good use while another will fuck things up regardless.


I think OOP is useful where you need to have state and functions kept together.

It's a little easier to have foobar.update(), rather than update(foobar, state).

I started off mostly programming in R, so using mostly bare functions, but I have to admit that objects are really, really useful when you need to maintain state. Yes, you can do it with closures, but it's a little harder and a little uglier.

That being said, the mutability that makes objects useful is also problematic in that you can end up with magically updating references without defensive copying.
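To make that concrete, here is a minimal Python sketch (all names hypothetical) of the bundled-state, explicit-state, and closure styles, plus the aliasing gotcha that defensive copying avoids:

```python
from copy import deepcopy

# 1. OOP style: state and behaviour live together on the object.
class Counter:
    def __init__(self):
        self.total = 0

    def update(self, amount):
        self.total += amount   # mutates in place

# 2. Bare-function style: state is passed in, a new state is returned.
def update(state, amount):
    return {**state, "total": state["total"] + amount}   # no mutation

# 3. Closure style: works, but a little harder and a little uglier.
def make_counter():
    state = {"total": 0}
    def bump(amount):
        state["total"] += amount
        return state["total"]
    return bump

c = Counter()
c.update(5)
assert c.total == 5

assert update({"total": 0}, 5) == {"total": 5}

bump = make_counter()
assert bump(5) == 5

# The mutability gotcha: two names, one object.
a = Counter()
b = a                   # alias, not a copy
snapshot = deepcopy(a)  # defensive copy
b.update(3)
assert a.total == 3         # "magically" updated through b
assert snapshot.total == 0  # the copy is unaffected
```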


The point is exactly that: it is somewhat easier in the one case where you have a single state _foobar_ somewhere and work with it all the time. In almost all other cases the second form is actually simpler.

I don't know R and I don't really want to know it. For me, it doesn't seem to bring anything extra to the table that I couldn't do in Clojure or ClojureScript much more consistently and simply. If in my project, I have a number of transformation functions for my state, passing it around isn't a huge deal as it is just a nested map usually. It forces you to be very consistent and helps you as the project grows. Also, most of the functions are easily transferable between projects even when the state would have a very different structure.

Of course the whole thing is a complicated topic and in some cases you want mutability and local state e.g. because the performance is a bit better. Usually, that involves a few simple transformations.
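For what it's worth, a small Python analogue of that workflow (all names hypothetical): pure transformation functions take the nested-dict state and return a new one, so they compose freely into a pipeline and transfer between projects even when the state shape differs.

```python
from functools import reduce

# Pure transformation functions over a nested-dict state: each takes
# the state and returns a new one, never mutating the input.
def set_user(state, name):
    return {**state, "user": {**state.get("user", {}), "name": name}}

def add_item(state, item):
    return {**state, "items": state.get("items", []) + [item]}

def pipeline(state, steps):
    """Thread the state through a list of (fn, arg) steps."""
    return reduce(lambda s, step: step[0](s, step[1]), steps, state)

final = pipeline({}, [(set_user, "alice"),
                      (add_item, "book"),
                      (add_item, "pen")])
assert final == {"user": {"name": "alice"}, "items": ["book", "pen"]}
```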


See, this is because you don't deal with data and statistics on a regular basis, which I do.

And if you figure out how to fit a generalised additive model in clojure with one line of code, please let me know :)

So, in DS/stats you end up needing mutability because the datasets are really large, and the models take a long time to run, so copying is generally bad.


I feel that many of the theories concerning software development practice are concocted mostly out of the author's desire to show that their way is the only one and to capitalize on that. That is their business, not mine. I have bigger fish to fry and do not dwell on my screwdrivers.


What's even more complicated than convincing a company that simplifying is necessary?

Convincing a developer to add a feature rather than remove a requirement, when the feature has no simple implementation in view :D


I know it was meant as fun. I had a bitter laugh. :-)

Actually, you want to work in a setting where you understand the need for and value of a feature and how it fits into the overall design and feel of the (software/hardware) solution to a problem. In such a case, you understand that there is no requirement, but a need or pain point that must be addressed if you want to deliver more value to the user, some of which may turn into financial or other benefit for you.


The vulnerabilities being exploited were all zero-day. Up-to-date installations were still vulnerable.


It was a 0-day exploit. The patch wasn't released until March 2nd but the vulnerability was being exploited at least since January.


If I had to guess it's a huge laundry-list of organizations that for some legacy reason (Going back 10, 15, 20 years) are running on-premises Exchange, and don't have a full time person one of whose roles is to keep up on patches, security advisories and such.


> to keep up on patches, security advisories and such

Until you've personally experienced the full horror of attempting to keep on-premises Exchange patched, especially in the SME space where you may have few servers, it's hard to imagine how awful this is.

Cumulative Updates are essentially "completely uninstall Exchange" and then "reinstall Exchange again". This is not what one might call a "patch". Then you get into dependencies on .Net and suddenly you need to upgrade the OS as well while you're in the middle of completely-uninstalling-and-reinstalling-Exchange.

Last time I got sucked into this, I told my client it was nuts to run on-premises Exchange, to bin it completely and move to a cloud-hosted [Linux] IMAP mailbox system.


It's hardly a "full horror". I manage on-prem Exchange in the SME space, with single-server installations and multi-server installations (with and without high availability). The patching process is, arguably, inefficient (doing full installs over top of the existing installation) but, in terms of success rate, I've had good luck.

I wouldn't put out any new on-prem Exchange today, but the ones I support have reasons to be on-prem or planned migration off-prem.

Aside: I've been administering Exchange since version 4.0. I've never experienced "horrors" like so many people talk about. Failing to follow best practices, using dodgy hardware, and cutting corners are the reasons for problems that I've been privy to by way of friends, emergency engagements with non-Customers, etc.


> Failing to follow best practices, using dodgy hardware, and cutting corners are the reasons for problems

I'm sure there are some SMEs who are happy to throw serious budget at doing on-prem Exchange "right".

For everyone else, I'm not sure what they're supposed to do.


Everyone else pays for monthly Office 365 subscriptions and ends up spending more money. (Which is what I recommend now, but it galls me to no end.)

I don't buy the "Exchange is expensive to support" argument. It's cheaper on-prem than paying for the subscription. We always saw break-even at around 16 - 20 months.

I have billing records for a small business Customer w/ a single Exchange 2016 server for last year that amount to 6.5 hours for the entire year, including installing CU's 16 thru 18 (CU 19 fell in this year). Yes-- a piece of their overall Windows Update application budget applies to Exchange, as does the amortized cost of backup software, and server computer and support hardware. Even w/ the OS license, Exchange license, and CALs at 120x an Office 365 E3 monthly subscription they're still money ahead over the 4+ years they've been running Exchange.


However from the point of view of a medium sized business paying for office365, in terms of dollar per month per employee, they're getting much more than just exchange, they're getting onedrive, sharepoint, teams, and the office suite software itself as well.


For sure. And then there's the CapEx/OpEx tax games to take advantage of, too. It's not a bad deal on the whole, but I think it's overhyped as being better than it really is.

Moving to subscriptions results in a net increase in spend for organizations that were executing on-prem IT well and frugally. That's the only game now. I just think it's disingenuous to say that it's a cost savings. I reject the massive availability increase argument too, at least in the US, because of the lack of competition in the ISP space and the tier of service that is available to SMEs in their budget.

You spend more for the same stuff, are forced to "upgrade" (read: lose features, see changes in UI) at the whim of a third party, and may experience decreased availability if you're unwilling to spend more on Internet connectivity. There are "upsides" for sure, but too many people peddling hosted solutions fail to recognize the downsides.


I don’t buy that for 365 unless you’re a small Microsoft consultancy and admin is “free”.

365 is a really good value, even comparing it to running a large-scale standalone environment. Ditto for Google Workplace. For almost any other product, subscriptions always drive more cost than value.



Do you manage any Internet-facing Exchange? If so, what has been your remediation strategy for this attack?


All of them are Internet-facing. I have done a lot of patching and some restoring from backup (followed by patching) this week.

Some people disabled /ECP facing the Internet. It was "unsupported" by MSFT so I never did that. In retrospect it would have been worth the gamble. If I had it to do over again I would have taken that bet.

None of the compromised boxes I saw this week showed signs of post-exploit activity. They dropped their payload and left. Every compromised box was restored from backup, temporarily isolated from the Internet, and patched.


I used to run a large on-prem exchange system with about 75k users. It’s literally the only product I’ve ever seen where the admins were the biggest, loudest advocates for outsourcing it to the predecessor to O365.

It was more beastly to run back then, though. We did reduce our risk profile at the time by putting OWA behind an SSL VPN and only allowing BlackBerry.


Thankfully for my mental well being it has been 15+ years since I touched Exchange.


It'd be nice if CUs were easier to install, but on-prem Exchange management isn't that much work once it's running smoothly. It'd be nice if they made it easier to firewall off more from your AD environment too.

But most Exchange management I do is mailbox management, and you have to do that if it's in the cloud too.


This jibes with my experience. My Customers who have migrated to Office 365 have been using roughly the same labor as when they had on-prem Exchange. (If anything, they're using a little more.)


> I told my client it was nuts to run on-premises Exchange, to bin it completely and move to a cloud-hosted [Linux] IMAP mailbox system

What did they reply?


The patches went out Tuesday... after many organizations were already compromised.


> (or using cloud mail providers)

Why? I don't see moving to a cloud solution being much better. The cloud service itself would be the single point of failure and would be just as vulnerable to a zero day. The organization would have even fewer risk mitigation options like NAT, firewalls, etc.


As someone who surveys different organizations' networks day in and day out, the number of unpatched and out-of-date Exchange servers (and other Internet-facing services) I see is ridiculous. Most sysadmins don't have a tangible idea of the risk they take when they set this stuff up. At least Office 365 is patched and monitored on a regular basis and has actual security teams tasked with looking for potential exploits.


I patched my Exchange servers the morning this was announced, a few days ago. The patch takes about ten minutes per server, and does not require a reboot. If your server was a client facing one (CAS) users would have seen a brief outage in Outlook connectivity.

The patches were single file downloads, one for each version of Exchange, yes you needed to be on the latest Cumulative Update for Exchange, so if you weren't you really have no right running a production mail system...


The last few security patches have been available independent of the Cumulative Updates, so it was reasonable to be a few behind. But this one required the latest CU to install.

Bear in mind after updating you still need to check if you were already hacked.


It's almost like all of our institutions shouldn't use the exact same software vendors


Really? Wouldn't multiple software products be equally vulnerable overall, with the hacks just more distributed in time as vulnerabilities are discovered at different times? Is that the problem you'd hope to solve - that it all happened within a few days instead of at different institutions at different times?


Yes, distributing the same number of hacks over a period of time would on its own make things a little bit less fragile. In general, having a single point of failure is bad for the stability of any large system. But more likely: imagine all these orgs were distributed across three or four providers. A bad actor comes up with a zero-day for one of them. They can now a) go ahead and use that, far fewer systems are compromised and awareness of the threat is raised, or b) wait a much longer time until they come up with vulns for all the other systems. Either of those is less bad than the current situation.

These days it's starting to feel like China might get to a point where they could shut down an entire country, all at once, with the flip of a switch.
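A back-of-envelope sketch of that blast-radius argument, assuming orgs split evenly across vendors and each zero-day is vendor-specific (numbers hypothetical):

```python
# Orgs exposed by a single zero-day: in a monoculture every org runs
# the vulnerable product, while diversification caps the damage at
# one vendor's share of the orgs.
def blast_radius(n_orgs, n_vendors):
    return n_orgs // n_vendors

monoculture = blast_radius(30_000, 1)  # every org hit at once
diversified = blast_radius(30_000, 4)  # only one vendor's orgs hit

assert monoculture == 30_000
assert diversified == 7_500
```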


One hack happening doesn't raise awareness for the risk of different unknown vulnerabilities in different software. So the total number of institutions getting hacked would be the same.

It's not really one system. It just looks like that because it's one news story. If instead, all school districts were hacked this year and all police departments next year, how is that any better than both together? If it was one system like one network, your idea is even worse because having more different software increases the attack surface so hacking any one of those compromises the whole system.

Would you personally use uncommon software to avoid being part of a big hack like this? I don't think that's a valid way of protecting yourself.

Your idea would make sense if many of these institutions were just providing services that were redundant with each other. Then if some of them are disabled, the others can take their place. But a police department's email server can't do the job of a school district's one. And if confidential information is taken in a hack, redundancy doesn't help at all.


Right. Software diversity (as long as it's real, all-the-way-down diversity and not just different branding of the same tech) is beneficial to overall security for more or less the same reasons that gene pool diversity is beneficial to the survival of a species. This is one instance where our choice of metaphorical vocabulary, like "virus", is very apt.

Only one vendor for all corporate email is bad for the same reason that only one popular variety of banana is bad.


The opening volleys of major future conflicts might be coincident with each country shutting down the other's computers.


> Really? Wouldn't multiple softwares be equally vulnerable overall but the hacks would be more distributed in time as they're discovered at different times?

Yes, but you're describing a more resilient system. A monoculture can get totally knocked out by one vulnerability.


I'd rather just hate on Microsoft specifically :-p


It's almost like we shouldn't indiscriminately connect everything to the internet.


I mean in this case it was email, so I don't know how you usefully disconnect that from the internet


Just drop the 'e' from email.

/s


And here I am fighting a one-man battle to bring back the dash from e-mail



The attacks were on port 443, i.e. the webmail interface. That could be behind a VPN.


zero trust my pal. vpns are over.


Sure VPNs aren't perfect. Nothing is. It's layers. Defense in depth. Of course the users don't like having to connect to VPN to read their email. Pick your poison.


The consumer VPN market looks like a snake pit of spyware and shadiness. The tin says “Hide yourself and your data” and there have been reports that some companies are doing the exact opposite: funneling it to shady actors across the world. And this is not even a freemium product in many cases!

I’d assume the enterprise segment is not as bad, but I’d also assume GP is talking about something along these lines - that you can’t trust vendors for anything these days.


VPNs like Tailscale look like the future to me.


Except now the authentication servers are hacked (easier if you run it yourself).

Doesn’t seem more secure than traditional VPNs.


That ship hasn't just sailed, it's been around the world a few times.


or Identity Providers.


I really wish the reports on hacks could treat attribution more seriously. Every time a hack like this occurs, it gets blamed on 'the Chinese', 'the Russians', or 'the Iranians', without ever showing any evidence to prove it. Attribution on the Internet is hard, like really hard. I want proof.

And if you don't have proof, or can't show me the proof, then don't just blame America's enemies. It's sloppy and dangerous.


I agree that they should show proof.

But if they won't show proof yet it's nevertheless true and they have strong privately held evidence concluding it (perhaps from the NSA), that doesn't suddenly make it dangerous to blame the actual perpetrator.

It's only dangerous if they're doing it incorrectly or presumptively or deceitfully (which you don't know to be the case).


I would guess that many times the evidence is from NSA signals intelligence and they can’t show their work because it’s classified. We end up just having to take their word for it.


How convenient that the perpetrators always turn out to be political enemies.


> Adair said he’s fielded dozens of calls today from state and local government agencies that have identified the backdoors in their Exchange servers and are pleading for help.

I can imagine they are sending an email to support@microsoft.com pleading for help. A future attacker would be well served to deny email to be sent to any mailbox @microsoft.com

EDIT: I'm now realizing that this follows the Microsoft-angle of the Solarwinds' attack. These customers are not going to be happy with $MS


> EDIT: I'm now realizing that this follows the Microsoft-angle of the Solarwinds' attack. These customers are not going to be happy with $MS

Won't hurt MS in the long run. There is no viable alternative to switch to, for any of their products:

* OS: macOS runs only on expensive Apple hardware, Linux can't run business software, plus both have retraining costs for employees

* Office software: Libreoffice just... doesn't cut it, let's be honest. Apple's stuff only runs on Macs.

* Exchange: Lotus Notes is dead, and while there are open source solutions, there is no comprehensive single solution.


The viable alternative is Google Workspace.

At most companies, a small percentage of employees will still need Excel for really complex/large spreadsheets, or Word for complex formatting destined for publication. But for 95% of people Google's good enough or better.

Year after year, Google keeps stealing more of Microsoft's customers, and it's extremely common for new companies to adopt Google rather than Microsoft.


It seems MS is winning in the education space - the existing mindshare for MS Office means it's hard to accept free Google Workspace and the learning curve that might come with it, vs. M365 with the free desktop Office licenses they give out to every student and teacher.


> It seems MS is winning in the Education space

Which country, and what level of education? In the US, cheap Chromebooks with GSuite have taken over K-12


Germany. Since Corona, Teams is suddenly everywhere.


Google is absolutely dominating the K-12 education market.


I would think tertiary matters a lot more than k-12


I’d agree, as it is one step closer to the labor market, and Enterprise is the goose with the golden eggs. But is Microsoft really dominant in tertiary?


That's a great point, and when I think about it I can only remember using google docs. Even if I could have afforded excel (I'm sure they give it to students for free), google docs was way easier for working on a team.

And nowadays, I use excel (in part because I don't really work in a cloud-friendly industry).

So I guess my point falls apart pretty hard.


You'll see that collapse when Google faces a few more privacy lawsuits and schools realize forcing their students into Google's system likely isn't legal. There are already a few cases in process about it.


I’ve been using Linux for work and personal for 4 years. Almost everything used in the enterprise today has a web app/electron version or runs natively. Including MS products themselves. I laugh maniacally every time I’m working in Word online... muahahaha


When Microsoft starts writing apps for Linux...


There's Teams... >snicker<


Teams ... is terrible. Doesn't matter the platform.


I've been using Teams for a couple of months in Linux Mint, works fine kinda. It froze only once and it was ok after a restart.


@Shared404 has written applications not garbage. /s


Lol. TBF, Linus never specified that they had to be good applications :P .

They (did? / are doing?) Edge for Linux though, for some reason.


How many employees do you have?


The macOS hardware is a little more expensive but over the long run it's significantly cheaper: https://www.vox.com/2016/10/20/13337652/mac-ibm-business-che...


Apple hardware has some downsides though. One big advantage of HP, Dell, etc. is their support. Apple repairs take weeks (especially here in New Zealand) as they expect devices to be sent long distances to wherever they repair them. HP, Dell, etc. can do on-site repairs in <24 hours in many cases. If it's just your personal device then a few weeks may be an inconvenience. But for businesses it can cost them enough that getting a support contract from HP, Dell, et al. can be worth it.


I’m sorry, what are you talking about? You boot your new Mac laptop from your time machine backup and are back working within hours, not weeks.


That's bollocks. Time Machine performance over network is atrocious.

With an HP/Dell enterprise-line model all you need is a decent set of screwdrivers (and, if you're touching anything that requires taking off heat pipes, some thermal paste) and you can literally replace any part in an hour or two from a spare laptop - or you just swap the disk into a spare.

With Apple's newest shit you can't even do that since everything is soldered.

I'm a die-hard Apple fan, but for large shops professional machines are lower in maintenance cost.


It's been a while since I was in a big org, but when I was, no one was replacing laptop parts. The deals we had with Dell/HP were basically overnight replacements (this was different from servers, where we had 4-hour on-site support). Then we would send them broken machines that would eventually come back fixed.

So do big orgs actually have people internally swapping random parts in a laptop to see if they can fix it?

Doesn't change the point that Apple was more expensive, but mainly because Dell/HP prices go way down at volume.


That really pisses me off.

Go look at the source, IBM. They started a pilot program, with power users, and converted them to Macs, and then a year in said that Macs need less support and cost less over their 3 year lifecycle.

See the problems? They couldn't know, one year in, the costs over a 3-year lifecycle, and taking power users who demand Macs and saying they don't need support is obvious. That's a bullshit and obviously wrong "statistic" and source to use.


Typing this on a late 2012 iMac which remains my main workhorse. I'm considering a new M1 iMac when they come out, but there's really no need at the moment, now I've fitted a 2TB SSD.


Hm.. I find it funny that organizations use MS products and stay in business. The amount of downtime and ridiculous failures I saw regularly as a consultant were astounding.

My coworkers used Macs which really don't cost anything given hardware lasts 8+ years now. Most companies using Windows have a large budget for laptop IT that costs more than replacing expensive machines often if that were necessary.


I've found Macs don't really last that much longer. With the previous Mac I had, I actually begged for it to be replaced as it had a spinning HDD and recent versions of macOS run very poorly on those. Luckily it turned out it was close to being sent back to the leasing company. $EMPLOYER policy (as is often the case at larger employers) didn't allow me to replace the HDD with an SSD. The newer one I now have, which has an SSD, is already performing poorly, so I am looking forward to a replacement even though it's still within the 3-year window. My colleagues with PCs (we are issued PCs or Macs depending on the location we work at at the time) seem to be happy with even the older PCs. I had an oldish temp PC for a while when my Mac needed repairs and it ran Windows fairly well. I used to be a big advocate for Macs, but not any more.


The HDD macs work very well with an SSD swap (hard in a corporate setting,) or just an external SSD (easy in a corporate setting.) But should have maximum RAM (hard to change in a corporate setting.)

I'm not at all fond of the newer more disposable Macs. Still, they should perform pretty well. One of my coworkers installed browser themes that seemed to be crypto mining or something equally ridiculous once. You may want to create a fresh user without any personalization and see if problems go away. I find Mac users and PC users tend never to do a wipe/install and almost everyone tends to port their problems with them by bringing their home directory even to new machines of the same OS.


If a place's main problem with their laptops is what type of hardware they selected, they are doing extraordinarily well in my book. It's usually the 10,000 lbs of bullshit apps in the standard image and grossly inefficient means of dealing with users' issues that create the performance issues or drive up department costs in the long run. Few workers really have enough of a local performance need that they'd notice if a Raspberry Pi came in, as long as it was well maintained and behaved the same functionality-wise.

At my previous employer they started to allow Macs and people were clamoring for them because they ran GREAT, but after the first few thousand went out, the amount of BS loaded on them approached what was on the Windows machines, and suddenly the satisfaction levels started to even out with the standard build. Chromebooks actually became very popular because they were even harder to load with crap than Macs.


> Office software

GSuite seems like the answer here. Mail, Docs, Sheets, Slides will work for a vast majority of businesses.


If you wish to compare MS offerings to GSuite then you should not compare it to on prem but to Office365.

I do not believe, without data, that switching from one closed-source proprietary software provider to another will protect you from hacks.

Switching to open source most certainly does not guarantee it either, and neither does switching your proprietary provider from Microsoft to Google.


It's not a guarantee but I'll trust Google for application security over Microsoft every time.

That said, it's not the question. The question is if a company wanted to switch away from Microsoft here, what is their option? It's not an inherent statement that one is actually better, but that there is an option if one feels burned by Microsoft here.


I’d trust them marginally better, but certainly not orders of magnitude more. We don’t know if Google would do significantly better if they operated at the same scale as MS in Enterprise. Google operates some consumer properties at an even larger scale, but I get the feeling that Enterprise is particularly attractive to hackers for the potential rewards.

Last week Firebase sent me a notification that several of my properties (some of them enterprise apps) had lost domain name verification. The panel in the console was clearly glitched when I inspected it. Two days later they sent a correction saying that this had been a mistake. No big deal, but it goes on to show that Google is not perfect.


I'm not sure what scale would affect this that Google hasn't already hit? I've already had GSuite for my high school, my college, and my work. What additional scale would open up massive security holes? This seems like it's very much just horizontally scaling the same product.

Not to mention that GSuite already has 6M different customers/tenants. They're already at a comparable scale, and that doesn't mention that the free versions have the same application security model, with zero incidents (knock on wood). "but scale" feels like it's ignoring the already existing track record and scale and just making excuses for Microsoft.


That's 6 million, vs. the 260 million that Office 365 has.

And that's individual licenses. I can't easily fetch the number of medium to large companies on Microsoft Office vs GSuite but I wouldn't be surprised if it was significantly larger than 50x.

My original contention is that hackers may be particularly interested in that dimension, rather than in the number of individual licenses (which MS also dominates by an order of magnitude).


At this point, I'd trust a guy selling software out of a van over Microsoft.


There just isn't much you really need e.g. Word or Excel for. If your corporate application doesn't run with a useful web interface, you probably have other issues too.

Word is an application that puts looks, thousands of mostly useless features and pixel-pushing up front. Excel at least really enables normal people to do some advanced calculations on data but the former still applies. Both are very complex tools mostly hindering any kind of value-added thinking and creativity but give you enough foot-guns and are really "fun" to support if you count Outlook in as well. I mean, how do you program an application that regularly crashes and corrupts the email database? LibreOffice is the same kind of thinking, because it mostly is a copy of the ideas in Word, Excel etc. Actually, when we are at it, Google Docs is more or less as problematic as the other tools.

Actually, just opening any of these applications seems a bit overwhelming. Why should you care that the readable font is 11 or 12 px big (it actually isn't that comfortable to read, but ok)? Why should you care that the default font is called Calibri or whatever? This is information and complexity that is shown by default that usually adds exactly nothing to your business. The same is with colours. Why should you want to have the option to select custom colours with two clicks or so when most people choose colours badly? The default colours offered are really not that great either.


You’re arguing against decades of experience to the contrary.

I certainly would prefer plain text for most content a business generates, but the market has overwhelmingly voted in the other direction.

I believe for some interactions with the U.S. government Word is even mandatory. And it’s effectively mandatory for collaboration with everyone else.


Many industries are dependent on Excel too. Entire industries: accounting, finance, etc.


You mean the industries that are liable for most of the initial suffering during the 1930s, 2000s and so on? A case could be made that some of it led more or less directly to wars.

Accounting and finance should know much, much better to use something actually auditable. Pretty much all software in any way associated with those industries that I have seen is at best average by enterprise software quality standards but most is barely useable. In that sense, Excel is probably the better choice. :-)


Spreadsheets are insanely versatile and useful. I think if you were to redesign a lot of the things they do as custom apps, you’d end up with poorer version of a spreadsheet, like you’re saying.

I’ve experienced this first-hand when building custom business apps. You’re building your UI in React or whatever only to conclude: “Fuck, this is a spreadsheet.”


Actually, at OrgPad.com my colleague Pavel (~Paul) is writing a collaborative editor in ClojureScript + re-frame/ Reagent/ React. There is even a very rough video about it (in Czech though) https://www.youtube.com/watch?v=SkFJ1zcRjQY where you can see the current state of work, including the debugger Pavel has written. We will have some basic tables/ spreadsheets in the final version and we plan on having some very cool table calculation abilities later. ;-) So yeah, we thought about it.


Here is a more technical video in English that was just uploaded: https://www.youtube.com/watch?v=VeVcNmNFzmc


That people use something and maybe even extract some business value from doing so doesn't necessarily mean the product or the ideas around it are great and cannot be improved upon in various substantial ways. People used to ride horses and cows, people used slaves for manual work instead of inventing and using the steam engine at scale much earlier e.g. in ancient Greece or Rome.

I don't mind rich text, and I know enough typography to avoid some common mistakes, but I don't think most users really appreciate a full scale of font sizes in pixels or other information not relevant to the content they are producing. Most would be much better off with normal, small, large, very large for a presentation or poster or something like that. The absolute values could be set in settings or overridden somewhere, maybe, but Word isn't actually meant for designing websites or posters. It does all of those things to some degree, but it very much isn't the right tool for the job in those areas and shouldn't be treated like one.

Btw. nobody can tell if businesses wouldn't be better off using something more robust than Excel, even when that would mean actually training people to use a different tool. Most companies probably never train Excel, so even using that is almost certainly inefficient. You know, there isn't much business value in Excel macros with viruses in them, or macros nobody understands - so maybe what they calculate isn't even correct in some or all cases.

Excel is great for some things, but for many of the things it is used for in practice it is actually quite bad. E.g. some people track working hours in Excel. There are much better apps just for that. You could use Google or Microsoft Forms, which are much easier and more robust. The data can then just as well be used in a spreadsheet or imported into a DB. Unfortunately, Word and Excel (and Outlook) have developed their own gravity field in many industries, and so the (very low) local maxima cannot be escaped (at least not easily).

Having a government use anything as a stamp of approval does it a bit of a disservice. If we relied on current governments for innovation, we could just as well return to the caves directly. More seriously, if by collaboration you mean sending people Word documents by email named final-assessment-v2-final.doc (because docx hasn't really arrived in many places and people suck at usable version control), then I am with you. Everyone else (including you, probably) just writes the text into the email directly or uses something actually collaborative (for example Google Docs). The real final version is produced after a consensus has been reached using more efficient communication channels.

The state of affairs is that the market for pretty much everything is currently in a bubble. The US government's debt is more than twice the total amount of gold mined during the whole of human history (https://www.gold.org/about-gold/gold-supply/gold-mining/how-...) if a tonne of gold is roughly worth 60 million USD. We haven't improved working efficiency much since the end of the 90s, to be frank. I wouldn't be so sure the market is a good measure of a product's absolute quality, actually.


This is similar to the plain text argument, but the main thing about standard ms office apps is that they are accessible and easy to use. I can use excel to open a spreadsheet made in the last 20 years.

I know everyone who has worked in an office setting can at least open and read a spreadsheet. I don’t know about an ms form or an access DB. The default (and sometimes only) ways most people can process text files on their machine is notepad or word. Word is way better than notepad for text processing.

If I send out a docx file, I know the formatting will be consistent when they open it. We can track changes easily without having non-technical people figuring out git or some other repo, and it will be compatible and easily viewable if we acquire any companies or are acquired.

The MS apps have basically become the standard applications to process plain text.

Lastly, I understand the value of some applications for data processing over excel. But when you’ve got to train up a new marketing or sales person every 6 months in R, that will get old very quickly. You can at least expect they know Excel and should be able to understand a spreadsheet.


Actually, Microsoft Office apps (others are not much better, if at all) are not objectively easy to use for everyone. Just sit a kid in 2nd or 3rd grade down and let them write about what they like with some structure, include pictures, and print it out. You can go further: can they share a Word or Excel document on social media, and will it generate a preview, or do people have to download the whole document first and have some app installed that understands Office documents? Ok, now sit them down in front of a brand new computer with Windows. How long will it take until they can edit a document in Microsoft Word, when they have to buy and install Office first?

Not very hard tasks to me - because I have done all of them hundreds of times. Other, even more advanced tools by Microsoft would of course fare much worse, even with people like you and me, otherwise quite proficient with digital tools, if we hadn't learned to use the one tool beforehand.

Yeah, Word is better than Notepad if what you want is to write rich text, but is it actually much better than WordPad from the usability perspective?

You have other problems, when your environment is so unstable that you have to hire new people every 6 months. Nowadays, you cannot expect any knowledge really unless the people can show a certification. Even a diploma in CS from a university doesn't mean the people know how to program useful stuff.


I’m not saying they are easy to use, but they are the standard. I’m also not saying I like that they are the standards, but everyone with 3+ years experience at a large company out of college can use Word to edit text in a document. Or should be expected to.

A new trainee every six months for a sales or marketing department isn’t crazy - it could be growing or a team of 6 people rotating out every ~3 years. I’ve bounced between WYSIWG and plain text, but there is a hard and steep learning curve when you ask people to use plain text.

Word also has spell-check and other features we take for granted.


As far as I can tell, the legal world still runs completely on Word redlining/track changes. Also, Excel almost literally runs many businesses.

As long as those are true, I'm not sure you can say "there just isn't much you really need eg. Word for", unless you're never on the business side. If you deal with the people who use them, you probably also want to use them to avoid headaches. Network effects are a bitch.

If you're only ever slinging code, sure, congrats, you may never need to use either.

Note, I'm not arguing that either is good.


> There just isn't much you really need e.g. Word or Excel for.

You can't be serious. There's not much businesses need excel for?


Do they need table calculation or do they need Excel specifically?

I am not saying, Excel isn't useful in any case. I am saying, it is very far from a good solution in many, many cases and state concrete examples.


Everyone only uses 10 features from Excel, the problem is that those 10 features are different for every user. Anytime someone says users don't need Excel or it can be easily replicated by some other tool, they likely haven't spent much time with Excel or users.


Well, I have used Excel extensively, so I know its warts very well. You can use Excel right, but in my experience, any time I have seen even quite capable people working with Excel, nobody in that work setting would exactly describe it as fun. I know at least one person who uses Excel very proficiently and maybe even has something approaching fun while doing it. But he literally teaches Excel to other people. I have done 30 hours in his course and learned quite a bit.


Being unfun to use != being useless to the business


You'd be shocked how dependent office staff can be on obscure MS Office features that I, a systems engineer, have literally never heard of and couldn't imagine someone needed.

Heck, this year I watched someone struggle to find and license a third party add-on just to do a mail merge on Google Workspace.


I know, I have seen it first hand. It was lots of wasted time for very little value pretty much every time I have seen it. I hope you have a better experience.

Most of the time, actually stepping back a bit and thinking about the problem at hand for a minute can save many hours of tedious work. E.g. keeping track of hours worked - probably just use Toggl and export a CSV at the end of the month or something - much better UX overall than a form in Excel that you have to print out. Doing project planning in anything from Excel to SharePoint, OneNote, Outlook Calendar etc. was always extra hassle in my experience. Everything kind of works, but not really; you avoid making changes because it is very tedious.

I have seen all the enterprise "Export to Excel" web interfaces that are usually so bad you cannot get anything done without the Export/Import feature. I mean, Export/Import is great, but maybe you should just have an API and/or a usable web interface. There, of course, Excel/spreadsheets are a temporary saviour, but you should think about why you have to use such a bad software system at all!


Article focuses on US, but this is global.

> “It’s massive. Absolutely massive,” one former national security official with knowledge of the investigation told WIRED. “We’re talking thousands of servers compromised per hour, globally.”


It is really an epic hack, historic.


Just one more in the long line of 'historic' recent hacks.


Slightly related, on BBC iPlayer is currently an interesting documentary series available called "China: A New World Order", which touches hacks like these a couple of times.


Can attest to this being very good


I wish the title was a bit more clear from the original post. This feels a little bit vague on purpose.

Microsoft Exchange server software, not to be confused with MS Outlook email software or the lesser Windows Mail software.


Exchange is often externally open in some way for OWA.

Once that server is hacked, you may be wide open internally.

I’d be at least as concerned about an Exchange vulnerability as I would be about Outlook, but probably more.


Maybe it's because I haven't dealt with MS products in awhile, but my first thought was who puts OWA on the open internet without requiring a VPN? That's just asking for trouble.


I'm curious to know why this did not affect Office 365 / Exchange Online.

I used to work for a law firm which ran on-premises Exchange, but had OWA running behind a VPN. I remember finding it extremely inconvenient at the time. But they're the ones laughing now.


Apparently because they are different code bases. So the answer could be: random chance.


Yet another superb reason not to run your internal company comms on a publicly accessible email server.

Or to replace email for internal use altogether. TMTP is a new protocol with that goal:

https://mnmnotmail.org/

https://twitter.com/mnmnotmail


Who needs a whole new protocol? Just type at each other with netcat.

https://www.digitalocean.com/community/tutorials/how-to-use-...


You can publicly expose an SMTP relay while keeping Exchange itself private, right?


Absolutely. The parts of Exchange that got exploited are exposed because people want ActiveSync on their phones and web-based email. If you did without those, or ran them over a VPN, just having SMTP exposed didn't make you vulnerable to this.

Hardly anybody does that, though.
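For what it's worth, you can get a rough read on what a server publishes to the outside with a quick probe of the usual Exchange web endpoints. A minimal sketch (the path list is illustrative, not exhaustive, and "responds at all" is the only signal here - even a 401/403 means the endpoint is reachable from the internet):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Typical virtual directories published when OWA/ECP/ActiveSync face the
# internet (illustrative list only).
PROBE_PATHS = ("/owa/", "/ecp/", "/Microsoft-Server-ActiveSync")

def exposed_paths(base_url, timeout=5):
    """Return the probe paths that answer with any HTTP status at all.

    Anything that responds (even 401/403/501) is reachable from here;
    only a connection failure counts as 'not exposed'.
    """
    reachable = []
    for path in PROBE_PATHS:
        url = base_url.rstrip("/") + path
        try:
            urlopen(url, timeout=timeout)
        except HTTPError:
            reachable.append(path)   # server answered, just not with 200
        except URLError:
            continue                 # refused / no route: not reachable
        else:
            reachable.append(path)   # answered 200
    return reachable
```

Running this from outside your network against your own mail host tells you whether those surfaces are published at all; it says nothing about whether they are patched.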


Yes totally. Most of the people running exchange now with this issue are cheapskates or attorneys.

If defense contractors keeping exchange on prem for security/compliance reasons are offering OWA on the internet, obviously there’s a deeper problem.


My Exchange server avoided this mess by living in this state.


> As more sites adopt TMTP for their own reasons

Isn't this the problem in replacing almost any technology that we know is "broken", it is often too ingrained to be replaced easily.


Consider the huge variety of messaging and discussion apps; it's relatively easy to embrace new communication tools.

EDIT: you might never silence SMTP altogether, but a suitable protocol could supplant it for the great majority of its use cases.


You can embrace new tools and replace the old ones. What you generally aren't able to do is replace email, no matter how hard you try.

The appearance of email being replaced exists in some places, but you find pretty quickly that you can't survive without it because it's still getting used for some critical communication or process.


> What you generally aren't able to do is replace email, no matter how hard you try.

The "new" email has been launched quite a few times now. It doesn't seem possible at this point unless all the major players agree on some new protocol which is seamlessly implemented in their mail services, while still allowing SMTP to function as a fallback to the improved protocol


References?

I'm not aware of any alternative email protocol that's implemented, except TMTP. I don't believe closed-source, walled-garden services, which don't allow third-party clients or servers, really count as legitimate alternatives.

There's Matrix, but that's a synchronization protocol for chatrooms, not a store-and-forward messaging scheme.


DMTP/DMAP

https://en.wikipedia.org/wiki/Dark_Mail_Alliance

The stuff Ladar Levison created after the Lavabit takedown (famed for being Snowden's email provider). Although I'm not just talking about email protocols, but also additions or other improvements that always run into the same problems (PGP-encrypted email, or whatever Facebook tried to do when they reinvented email...).


I reckon that a new mail protocol with use cases of excluding unwanted communications may find it harder to gain adoption. It's like an anti-viral quality.

I could see this being rolled out within an org where the one org can deploy clients & server to all internal users at once.


the old wisdom used to be "don't expose microsoft stuff directly to the internet" apparently that's still true?


This take was always problematic in my view. It's the reason I spent years with inherited servers that had a proxy sitting in front of Exchange, which consistently broke things everywhere it was done. In practice, this exploit is just another in the line of Exchange issues that would get forwarded straight through a proxy to the backend unchallenged. Meanwhile the only place I've heard pushback against applying this patch is the "but we have a proxy, we're secure" crowd.


that's still a form of exposing microsoft stuff directly to the internet, in my view.


Are any other vendors better? I don’t think this is a MS issue.


much of the software and design for MS stuff is from a period in personal computer history when people weren't worrying as much about public internet style security problems, so it has always seemed to have been at a disadvantage in the internet era which they have fought hard to try and overcome, but nonetheless a lot of code and culture remains.

this affects not only the operating system and platform itself, but also major applications, development philosophies, major utilities and even the approaches used to operate it in production.

it's actually an interesting question, while internet security problems largely outmoded old pc inspired designs and product-market fit (the diy part time sysadmin), will they outmode the personal operation of any software... that is, will computer security problems grow to the point to where everything must be actively managed and defended?


I've heard there are a few Red Hat servers on the internet.


Which suffer just as many security issues if you try to do something complex with them.


One can only ask what's the point of the forced automatic updates when this stuff is still happening at this scale.


There's no forced automatic updates of server software. In fact, Exchange CUs can only be installed manually by mounting an ISO file on the server.


How does Microsoft bear no financial liability for the many major security flaws in their for profit software? I’m sure they have clauses in their legal agreements, but come on...


There's a powershell script to check your server here: https://github.com/cert-lv/exchange_webshell_detection
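The linked script checks for known indicators of compromise; the general idea - sweep the web-facing Exchange directories for dropped .aspx files containing suspicious patterns - can be sketched roughly like this (the marker strings below are illustrative placeholders, not the real IOC list; use the linked script for actual triage):

```python
import os

# Illustrative marker strings only -- real detection scripts match on a
# much richer, curated set of webshell indicators.
SUSPICIOUS_MARKERS = ("eval(request", "processstartinfo", "cmd.exe /c")

def find_suspect_files(root):
    """Walk `root` and flag .aspx files containing any suspicious marker."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(".aspx"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    content = f.read().lower()
            except OSError:
                continue  # unreadable file: skip rather than crash
            if any(marker in content for marker in SUSPICIOUS_MARKERS):
                hits.append(path)
    return hits
```

A hit list like this is only a starting point for incident response; absence of hits proves nothing, since attackers can and do vary their payloads.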


This is the kind of thing that keeps me up at night.


For those of us working incident response, it is exactly what is keeping us up at night these days


Wonder what has changed. It was standard practice 15 years ago not to expose Microsoft Exchange (nor any other Microsoft product) directly to internet.


It was standard practice 15 years ago to never ever use Microsoft Exchange in the first place. Wonder what has changed.


MSFT still outperformed SP500 index this week.


Security vulns are a profit center for Microsoft.

I have a client who was hit with ransomware that exploited holes in RDP. They paid Microsoft about 5% of their annual IT budget to upgrade.

How much more license revenue and 365 subscriptions will this latest fuckup generate?

And if vulns are this profitable, where's the incentive to prevent them in the first place?


> And if vulns are this profitable, where's the incentive to prevent them in the first place?

Prior to upgrading their software, where was the incentive for your client to keep everything up to date and put in the infrastructure needed to patch all of their systems within minutes/hours/days of a new zero day?

I can't speak for your customer (obviously), but do you think they would have invested 5% of their budget in upgrades for this particular hack? A ransomware attack shuts you down. This is blackmail/corporate espionage stuff. Very easy to ignore depending on what your company is saying in their email.


>about 5% of their annual IT budget

so basically for free / at low cost?


People are still buying into the ‘nothing is secure, they can’t help it’ storyline.


So if you discover one of these hacked servers, how should you let them know - send them an email?


1. Find approximate geographic location (whatismyip.com, traceroute, a couple pings to nearby datacenters)

2. Do a speedtest

3. Add location and speed to remote desktop access marketplaces on darknet

4. Collect passive income from renters looking for clearnet computers in certain areas to use.

Oftentimes all the known VPN IP addresses are polluted - even their "dedicated residential IPs" - and this can mean worse treatment on the internet, such as more captchas, outright bans, inability to use streaming services, and for actual criminals it means their stolen credit cards don't work.

But with remote desktop marketplaces, you can find a computer near the postal code of the credit card you have, and this ensures your online transactions go through. Obviously not "you", as you don't have to care what the people do on the other side of your tollbooth.

Since you weren't the one compromising anything (computer, credit card, any actual spending) you'll be fine, but you're also going to do all this over Tor anyway. And because you'll be fine, you don't have to worry about being detected due to some flaw in Tor, because you won't have triggered a criminal investigation - the actual hackers and skimmers and thieves will have, and incurred all the liability for themselves: people who will have paid an address on a darknet marketplace in Monero and gotten temporary access to a server.


This is the answer to a very different question.


haha you're right, I read "what would you do" not "how to contact"

but I also figured that the spike in traffic or someone messing up their botnet's activity windows would alert the computer owner to something


Scary! My university uses Microsoft for email - I think they use the cloud-hosted version, but I wonder how much code is shared between the versions. When I added it to the Mail app on my iPhone, it mentioned it could wipe my device. I guess that's a default with the implementation, but it's a turn-off, so I ended up just installing the Outlook app instead since I couldn't find IMAP support. On desktop, just using the web version, or even adding it to my home screen, would be another option, but I was partly hoping to keep all my accounts together.


If you're on Android there's an app to get around that device management requirement when adding an email account.


Can you elaborate?




MS, Solarwinds, ...

I suspect that the number of compromised software companies are much larger than these 2 companies. I'm almost certain that we will hear about others in the future. If you manage a software product I hope you are auditing the code regularly. You should also harden the security for it and who has access to the source code and its build no matter how unlikely you think you are a target.


> I suspect that the number of compromised software companies are much larger than these 2 companies.

Given one of the CVEs is CVE-2021-26857, there have already been more than 26,000 vulnerabilities submitted for CVE IDs this year, so there are indeed countless other compromised systems - the two recent big hacks are only in the news thanks to their large blast radius.


Today is the day for all of the other victims to disclose under the umbrella of Microsoft attention.


They attribute the attack to a particular actor without providing any evidence to the public. A bug could exist that enables such an attack, but it's not proven any emails were ever even taken.

They did find a tool left behind it seems.

I am just increasingly skeptical of these hacking stories that have a nat sec angle on them after the previous ones have been shown to be mostly or entirely fraudulent years later.


> the previous ones have been shown to be mostly or entirely fraudulent years later

...they said, while providing no evidence to the public.


I was referring to Russiagate and the allegations made over Wikileaks. Even if I was some kind of international troll (hint: I'm not), it wouldn't matter because it remains true that the attribution is an evidence free assertion about the current target for dirty tricks by the USG. It moved a few millimeters across the world map.

"Bombshell: Crowdstrike admits ‘no evidence’ Russia stole emails from DNC server"

https://thegrayzone.com/2020/05/11/bombshell-crowdstrike-adm...


Small map you have there


You only need to move across an invisibly thin border line.


Technically, but I highly doubt Siberia is where a bunch of hackers live.

Tangentially, it could still be Russia going through China...


Last comment from me in this thread, but I meant the target of US government ire, not a literal movement of people.


Those examples are nothing like hack disclosures.


On the contrary, they literally were allegations of hacks in several cases.

Remember "Russia is hacking our democracy!"?


Please be specific with your 'they's when throwing around suggestions that someone is being deceptive.

In this case, Microsoft is identifying the actor quite clearly:

https://blogs.microsoft.com/on-the-issues/2021/03/02/new-nat...


Microsoft is also providing evidence free assertions.


You're failing to state a claim, other than perhaps "some guy on the internet is distrustful".

If you have a reason other than your feelings about domestic politics for skepticism about this case, please share it.


Based on the previous hacks of the DNC regarding Russiagate and Wikileaks being fraudulent allegations, the current leadership being Democratic (1), and the current target being China, I am not convinced by simple assertions alleging an actual exfiltration of emails by China without evidence.

(1) Not that Republicans wouldn't be above this, just that the Democratic party has a history of this particular tactic.

EDIT: My guess is that if they actually showed what they were basing this allegation on, a lot of people would conclude it was extremely weak stuff, maybe impossible to decide who did it if anyone did. Hiding information is extremely useful for spinning authoritative narratives. Of course let's not forget that NSA implants are probably present in strategic locations around China, but that's par for the course.


> Hiding information is extremely useful for spinning authoritative narratives.

Telling the criminals, publicly and in detail, exactly how you know it's them is also extremely useful -- to them! -- in that they know what not to do the next time around. It's in your own best interest to keep that information secret, especially if you have any reason whatsoever to expect that it may be useful again in the future.

Yes, this requires that we simply trust Microsoft when they say it was $attacker. You can choose not to believe them, if you like, and demand to see all the evidence. I don't think that will hurt their feelings all that much -- and I also don't think you should hold your breath while waiting for them to give you that evidence.

Ultimately, it doesn't matter all that much -- as far as I'm concerned -- whether it was China or North Korea or Canada or New Zealand. I'm less worried about who did it than I am cleaning it up and doing whatever I can to prevent it from happening again.


>Telling the criminals, publicly and in detail, exactly how you know it's them is also extremely useful -- to them! -- in that they know what not to do the next time around.

Great point, although I wonder, isn't the goal for them to not do it again next time anyway? Seems appropriate to weigh the costs and benefits of continued detection versus sunlight as a disinfectant.


Great video for anyone has the time:

Conducting a Successful False Flag Cyber Operation (Blame it on China) - Blackhat Europe - Jake Williams

https://www.youtube.com/watch?v=W2vBu_Jui9A


> They attribute the attack to a particular actor without providing any evidence to the public.

This statement still holds whether you replace "they" with "Brian Krebs" or "Microsoft".

Attributing blame for cyberattacks is a very difficult problem, as it's easy to cover and obfuscate your tracks - even your tactics, by using strategies and tools from other state-sponsored groups, for example.



I am not quibbling with the existence of an exploit but the assertion that it was definitely exploited and by China. My message to all security firms: put up or shut up re evidence.


attack.mitre.org is your friend. The truth may not be, by the smell of things.


like the Sony hack blamed on NK because of a Seth Rogen/James Franco movie? From what I remember there was absolutely no proof there either.



I can't tell from the article, but was this vulnerability already being exploited but to a lesser extent or did the hackers apparently discover it as a result of the patch being released? If the latter, then maybe we need processes for patching faster than people can reverse engineer the patches.


Yes, it was being used to target specific organizations prior to Microsoft's patches this week. Since then, attackers have basically used tools like Shodan to find unpatched servers, and mass-backdoored them -- regardless of who the victim organization is.


Do you have any details you can share with us (support@shodan.io) about how attackers are using Shodan? We have a lot of mechanisms to prevent abuse (blocking anonymous access, limiting number of results/ searches, restricting certain search filters) and if there's more we can do please let me know.

Btw Microsoft, CERTs and a bunch of other orgs are also using Shodan to find out who is exposed. We already had all the data to determine vulnerability before the announcement was made so enterprise customers could search their local Shodan database for affected systems. And we've been sending out notifications as well.


I don't think that's an accusation against you, but I have to imagine there's a Shodan inspired darkweb site somewhere that takes crypto in exchange for bypassing all those noble restrictions.


Keep it real, Shodan-bro. Thanks for the additional context.

Lovin' my membership.


Bigger companies or at least ones with significant relationships with Microsoft often get NDA-covered security bulletins before they are publicly released to help mitigate this.


Interesting! This seems futile at times, especially with the SolarWinds espionage that went undetected for so long.

The question that comes to mind is: to what extent did Threat Actors have unfettered access to security bulletins?

There is no easy solution to the issue. Thank you for bringing this up.


Really? I thought the article was quite clear.

> On March 2, Microsoft released emergency security updates to plug four security holes in Exchange Server ...

> ... [Volexity] first saw attackers quietly exploiting the Exchange bugs on Jan. 6, 2021, ...

If it still wasn't apparent by then, though, I would have thought that this line should've cleared things up:

> We’ve worked on dozens of cases so far where web shells were put on the victim system back on Feb. 28 [before Microsoft announced its patches], ...


What are the chances this was independently discovered and weaponized in the two months after the original report to MS? Can't help but wonder if the security researcher or MSRC were compromised or have a leak.


CISA are indicating that the attacks go back to at least September:

https://us-cert.cisa.gov/ncas/alerts/aa21-062a


Exchange has been a security problem since 1998. Surely there are open source solutions available that have better security? Seems obvious, have I missed something?


Does anyone know how to check for malicious activity on Exchange 2010? All the logs/tools explained in the articles do not exist before Exchange 2013.


With the caveat of "You know you shouldn't be running that anymore": Look for unauthenticated POST requests to "/ECP" URLs as a starting place. I haven't dug into 2010 (because I tried really hard to get rid of it in the run-up to end-of-support), and since there's no PoC available I can't try exploit code against one to see.
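To make that starting place concrete, here's a rough triage sketch that scans IIS W3C logs for unauthenticated POSTs to "/ecp" URLs. It assumes the logs include the standard `cs-method`, `cs-uri-stem`, and `cs-username` fields and reads the field order from each file's `#Fields:` header; the log directory path is just the default-install guess, so adjust for your environment. An unauthenticated POST to /ecp isn't proof of compromise on its own, only a lead worth investigating.

```python
# Rough triage sketch: flag POST requests to /ecp/* in IIS W3C logs
# where cs-username is "-" (i.e. no authenticated user). Field positions
# are taken from each log's "#Fields:" header rather than hardcoded.
import glob

def suspicious_ecp_hits(log_lines):
    """Yield log lines that look like unauthenticated POSTs to /ecp URLs."""
    fields = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # map field names to positions
            continue
        if line.startswith("#") or not line.strip():
            continue                           # skip other comments/blanks
        parts = line.split()
        if len(parts) != len(fields):
            continue                           # malformed or truncated row
        row = dict(zip(fields, parts))
        if (row.get("cs-method") == "POST"
                and row.get("cs-uri-stem", "").lower().startswith("/ecp")
                and row.get("cs-username", "-") == "-"):
            yield line

if __name__ == "__main__":
    # Default IIS log location on many installs; adjust site ID/path as needed.
    for path in glob.glob(r"C:\inetpub\logs\LogFiles\W3SVC1\*.log"):
        with open(path, errors="replace") as f:
            for hit in suspicious_ecp_hits(f):
                print(path, hit.rstrip())
```

From there you'd pivot on the source IPs and timestamps of any hits, and look for unexpected .aspx files under the Exchange web directories.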



If you have a Microsoft rep, hit them up for a patch, once you’ve switched off OWA. Exchange 2010 is out of support, but not super super out of support.


This will be the nail in the coffin for on-premises email servers. Putting all of your eggs in one basket might be an even worse idea over time.


Eh, this could just as well have happened to everyone with 365 as well. On-premise servers allow you to manage the risk at your own level. For instance, you can decide not to expose a service to the Internet that 365 does.


lol - don't run services you can't competently manage.

edit: this tweet restates this in a much nicer way:

https://twitter.com/SwiftOnSecurity/status/13668672289148108...

> If you're not an F50 running your own Exchange Server is organizational clownery at this point.


Couldn’t you put these servers behind something like CloudFlare? Assuming they were knowledgeable of the attack and could block it.


There will always be stolen emails. The problem is that the emails are in plain text on the server...


This needs to be considered an issue of national security, and the US armed forces need a 'Digital Force' more than they need a 'Space Force'.



As another replier mentioned, the US already has that within the DOD, and has for longer.

The NSA, moreover, has been around a long time. Don't forget about them.

By "applying digital force", do you mean attacking other countries?

If so, does the same apply when the US destroys computer systems in other countries? If your country is attacked by the US, what force is it reasonably allowed to use as a counterattack?

You ruin one or more nuclear weapons facilities; they get to destroy a few nuclear installations of several types in the US?

The US is not sitting around being the innocent victim. The US is engaged in offensive attacks on regular basis.

(At the same time the US is engaged in massive real-world war as well, unlike most of its counterparts. Oh, "military conflicts", not war. Unless you are the country on the receiving end of the "military conflict", in which case you will have to spend a lot of time trying to figure out why it is not war.


Your lack of close parenthesis might have been a typo, but I think it does well to symbolize that the point you're making is less an ancillary explanation and more a real world issue that we can't just ignore.


I see several straw men and an anti-war argument.

None has the mission of broad national defense of civilian assets from cyber warfare by foreign nation states.


well it is Patch Friday after all



[flagged]


Bill Gates hasn’t been CEO for a long time now.


The cynic in me thinks it’s not a coincidence that the cloud-hosted Office 365 was not affected.

Almost like a certain company would like to get its customers to migrate AD to Azure and Exchange to full office 365.


One uses Outlook Web Access (where the first exploit exists), the other does not.

If the hosted version lacks the component that has a security issue, it won't have that issue, it is technically misinformed to conclude anything nefarious.



