Microsoft Chose Profit over Security, Whistleblower Says (propublica.org)
676 points by tyleroconnell 6 months ago | 305 comments



The solution is complete zero trust: distrust the network inside the organization. You should treat the internal network as external -- hostile. Google does this. They were the first to widely adopt zero trust, with BeyondCorp, and there has not been a Google internal organizational breach since Aurora (the attack that made them adopt BeyondCorp, which is what they call zero trust).

You have completely managed endpoints, strong hardening of each endpoint, and a complete inventory of all the resources in the organization. You have certificates installed on each device. You have an ACL engine that determines whether a user should get access to a particular resource. You can use deterministic lists and also incorporate heuristics to detect anomalies (working hours, etc.). All Google internal apps are internet-facing. You can open them and get redirected to the SSO portal. Go ahead and try to get in. You will not.
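
To make the ACL-engine part concrete, here is a minimal sketch (my own illustration, not BeyondCorp's actual implementation; the inventory, ACL, and helper names are invented) of what a per-request access decision could look like:

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical inventory of managed devices, keyed by client-certificate fingerprint.
    DEVICE_INVENTORY = {"f3a9c1d2": {"owner": "alice", "hardened": True}}

    # Hypothetical deterministic allow-list: which principals may reach which internal app.
    ACL = {"payroll-app": {"alice", "hr-team"}}

    @dataclass
    class Request:
        user: str              # identity asserted by the SSO portal
        cert_fingerprint: str  # client certificate presented by the managed device
        resource: str
        timestamp: datetime

    def within_working_hours(ts: datetime) -> bool:
        # Toy anomaly heuristic; a real engine would score many signals, not hard-fail on one.
        return 7 <= ts.hour <= 20

    def allow(req: Request) -> bool:
        device = DEVICE_INVENTORY.get(req.cert_fingerprint)
        if device is None or not device["hardened"]:
            return False  # unknown or unmanaged endpoint
        if req.user not in ACL.get(req.resource, set()):
            return False  # not on the deterministic list for this resource
        if not within_working_hours(req.timestamp):
            return False  # heuristic anomaly (working hours, etc.)
        return True

The point is that every internet-facing app sits behind a proxy that evaluates something like allow() on every single request, which is why exposing the apps publicly is not the same as leaving them open.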

Many of these security problems are solved. You just need to implement the solutions.


Google makes zero-trust work by having a highly "centralized" or "uniform" tech stack all the way from tooling, to hosting, to infra. So everything defaults to zero-trust and it's not something you would need to think about setting up.

Most big organizations have built up their internal/external tech over decades, with large parts of it being essentially "mothballed", and a high degree of heterogeneity stemming from tech changes over time, acquisitions, and departments having flexibility in what tools and designs they can use. Shifting to zero-trust requires a lot of migration work across all of this, "training" (i.e. figuring out how to get stubborn IT people to buy in to the new way of doing things), and most likely a shift to the "centralized" kind of model that Google uses.

Even if the first two are funded, that third "centralized"/"uniform" model can be very expensive. One of the reasons Google has to deprecate things so much is that the centralized model requires constant migrations and breaking upgrades to keep things running, which makes it so "mothballing" isn't a thing: you either have enough people assigned to handle the migrations, or turn it down.

I agree that zero-trust is the best security model and it solves many problems. But I guess I'm also saying it's much easier said than done. With my startup I want to solve a lot of these kinds of problems (i.e. introducing "uniformity" that follows best practices) for my customers, but it's inevitable that at some point a customer will ask for the ability to "turn off zero-trust and allow for IP whitelisting" - is it worth it to close a potentially big deal? It's also probable that any reasonably successful company will at some point perform an acquisition involving a company without a zero-trust model - is that reason to cancel the acquisition?


> Shifting to zero-trust requires a lot of migration work across all of this, "training" ie figuring out how to get stubborn IT people to buy in to the new way of doing things[...]

You're right; most times organizations need a fire lit underneath them to change. For Google, it probably was the NSA annotation "SSL added and removed here :^)" on a slide showing Google's architecture from the Snowden leaks.


> You're right; most times organizations need a fire lit underneath them to change. For Google, it probably was the NSA annotation "SSL added and removed here :^)" on a slide showing Google's architecture from the Snowden leaks.

As an insider, it was not. The move to zero-trust started with "A new approach to China": https://googleblog.blogspot.com/2010/01/new-approach-to-chin...


> You have completely managed endpoints, strong hardening of the endpoint and complete inventorization of all the resources in the organization. You have certificates installed onto each device. You have an ACL engine that determines whether a user should get access to a particular resource.

None of those are “solved” for any mid or large-sized enterprise where tech isn’t their core competency. In fact, I’d say most of these are insurmountably hard.

This is a “draw the whole owl” kind of response. It’s all very well to say that you can do things differently, but imagine Shaw Industries (22k employees, largest carpet/flooring manufacturer in the USA) doing any of that.


The first step to not doing any of that is saying you're not doing it. I work at an older company and it has successfully moved a number of apps to this model. Some will take another decade, but many are there.


I think I agree with this conclusion. But I work in a Shaw-like enterprise (only the products are more mundane than flooring). What are the hurdles we’d see if we tried it? What processes and practices are we likely using, that would break under the zero trust model?


For a start, you'll have a bunch of internal applications that are not hardened to be exposed on the public internet, and that you have neither the time nor the money to replace. A "zero trust" product vendor will therefore offer you something exactly like a VPN, but for some reason they'll say it's not a VPN.

You will have "heuristics to detect anomalies" and users won't be allowed to directly see what 'anomalies' are being detected, for security reasons. Instead, if someone plugs their phone into their laptop to charge it, they'll start getting network timeouts when they try to use the ERP system. After waiting 30 minutes for it to come back online, then calling the helpdesk, they'll be told that the charging phone counts as an unencrypted disk and they need to unplug it.

Other heuristics will create a huge backlog of 'maybe' alerts they'll invite you to manually review. Warning, a user who hasn't logged into the holiday booking system in 9 months just logged into the holiday booking system. Finding the real problems will be like looking for a needle in a haystack.

In-house infrastructure - which your team provides - will start appearing flaky, with mysterious outages. Public-internet SaaS products like Github will start looking better and better.

It will turn out the "zero trust" system doesn't work with your office's networked printers, access control system, CCTV cameras, meeting room conferencing system, server BMCs, networked UPSes, networked oscilloscopes, networked 3D printers, networked telephones, and so on.

It will also turn out, once a vendor is giving you "completely managed endpoints, strong hardening of the endpoint" you can't update without going through them first. And they aren't in any hurry to support the latest OS versions. Maybe they'll support Ubuntu 24.04 some time in 2025? Of course you'll pay them the same whether they hit that target or not.


Ah, I see you've played knifey-spooney before.


> if someone plugs their phone into their laptop to charge it, they'll start getting network timeouts when they try to use the ERP system. After waiting 30 minutes for it to come back online, then calling the helpdesk, they'll be told that the charging phone counts as an unencrypted disk and they need to unplug it.

Good thing they killed the nic or blocked the LAN traffic on the laptop while it's connected to that high speed cellular network modem! And as we all know, once a potentially malicious payload delivering unencrypted drive is unplugged, the threat is gone and you can have your network back. If that weren't true, you'd see folks sprinkling usb thumb drives in the parking lots of their target's offices. What's next, usb cables with microcontrollers, keyloggers, and wifi?

/s

If helpdesk is making calls like that, add all the zero trust you want, you're still screwed


I think the hard part of trying it isn't using it, it's implementing it.


More of a "pay someone else to do it" situation then. And the question is how much do they value security, and can they afford it without killing their business.


> Many of these security problems are solved.

I have found that thinking a security problem is "solved" is a big warning that you're at risk. There's no such thing as perfect security in anything. If you adopt the mindset that you're "safe" in some sort of absolute way, you stop looking very hard for security breaches and won't catch the one that will, sooner or later, happen.


"The solution"

Lost me there after two words! There is never a THE solution ... ever. As any engineer will tell you: "best efforts and here is why ..."

Zero trust is a philosophy and quite a good one in my opinion but it isn't a solution.

I suggest you stop thinking in terms of (absolute) solutions and perhaps think in terms of philosophies and good practices.


Indeed, zero trust is a powerful mindset but only an incompetent organization willingly opens all internal resources to the Internet for no good reason.

Another important philosophy is defense in depth: Just because you use zero trust principles internally doesn't mean you shouldn't still put a big freaking moat around your environment.


That is a bit hypocritical of Google, though, since they do scan their users' mail, for example. In that sense they certainly did implement zero trust, but maybe here it has another meaning.

And I don't think such an architecture fits every company. Most (non-software) tech companies suffer from simple social engineering, scam mails, and employees giving third parties their credentials. Economic espionage in all its forms is also a threat.

Google certainly has other security concerns as well. Internal whistleblowers and maybe activist circles that run counter to the vision of management. For these problems their architecture might make sense, but it doesn't mean every company has the same threat vectors.

Of course security problems can be solved, but the infrastructure needed isn't trivial and many software stacks for engineering just don't allow for third party auth anyway.

Many developers (software or not) also shudder at the idea of "managed endpoints". It works for Google obviously, but they are a special case here.

Much more effective here is sensible network segmentation. You don't need fancy auth services for that, just classic IT with a little sense for real threats. "Everything facing the internet" certainly is a very specific strategy that cannot be generalized.


Are there any other companies besides Google that have implemented this solution? If not then I don't think you can really call it a solved problem.


For most companies, zero trust means strong device management, a cert in the TPM, buying an Okta-style service, and then buying an appliance that you put between users and services before cutting off direct user access. You can still use a VPN or expose the appliance to the internet.
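
As a rough sketch of the appliance part (my own illustration; the file names and the corporate device CA are hypothetical), the proxy typically refuses any TLS connection that does not present a client certificate signed by the device CA, before any SSO check even happens:

    import ssl

    def make_appliance_tls_context(device_ca: str, server_cert: str, server_key: str) -> ssl.SSLContext:
        """Server-side TLS context that demands a managed-device client certificate."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
        ctx.load_verify_locations(cafile=device_ca)  # corporate CA that signed the TPM-resident device certs
        ctx.verify_mode = ssl.CERT_REQUIRED          # no device certificate, no connection at all
        return ctx

Checking the user's SSO session (the Okta part) and forwarding to the internal service only happen after that gate, and direct network paths to the services get firewalled off.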


I'm half with you, but riddle me this: in a ZT environment, every request needs to be accompanied by some verifiable assertion of identity and authorization. In this case, and others we've seen recently, the identity provider itself has been compromised, for example because an attacker has obtained signing keys that allow them to effectively forge approval from the identity provider. So even in a ZT environment, isn't it game over at that point?

It seems that we have a situation where all our trust is in the identity provider now, and we suffer when that provider is compromised.
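
To make the worry concrete: the relying service's whole check is "does this assertion verify against the key I trust". A toy sketch (symmetric HMAC standing in for the RSA/ECDSA signatures real IdPs use; every name and claim here is invented):

    import base64
    import hashlib
    import hmac
    import json

    IDP_SIGNING_KEY = b"secret-held-by-the-identity-provider"  # hypothetical

    def sign(claims: dict) -> str:
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(IDP_SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        return body + "." + sig

    def verify(token: str):
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(IDP_SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected):
            return json.loads(base64.urlsafe_b64decode(body))
        return None  # signature mismatch: reject

    # An attacker who has stolen IDP_SIGNING_KEY can mint an assertion for any identity,
    # and every downstream zero-trust check will accept it as genuine:
    forged = sign({"sub": "admin@example.com", "groups": ["global-admins"]})
    assert verify(forged) is not None

So the IdP and its key handling become the crown jewels; zero trust concentrates the trust there rather than eliminating it.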


The misalignment of incentives between security and profit, especially in public companies, is not really a fixable problem without a massive cultural shift. I'm not sure at this point what could even trigger one.

I've always dabbled in cybersecurity, taking on the hat in various roles over the years but have refused to go full time into it due to what I have personally seen in the industry - an overwhelming focus on compliance rather than actual good security practices, and the compliance standards are either very lacking or poorly enforced.


This is exactly it. There is no incentive to prioritise security. It is not visible to customers, except in terms of compliance, most likely a check-list approach.

I think it needs a massive cultural shift, but from customers. If customers were willing to evaluate security properly (consumers cannot, but enterprises can), demand binding assurances, and make buying choices accordingly, the industry would respond.

Of course MS is too strongly entrenched in the desktop market for this to be completely effective.


When I first left offensive security consulting and joined an internal defensive team, a wise ex-agency person said to me "In product development, the first things to get axed are often security and performance. They are invisible to the user, until they aren't, and rarely do failures in those areas end a company."

Granted this was prior to ransomware really blowing up, but even that itself is a different threat model that doesn't mean your product has to be good at security.


> If customers were willing to evaluate security (consumers cannot, but enterprise can)

Where I work, IT is outsourced and the decision to buy most of the software is made by managers who have no idea about computers.


The purpose of using Microsoft products in an office environment is so that your office can be run with as much of the personal-computer enhancement as you originally realized when you first effectively replaced the traditional office machines and more-labor-intensive tasks with software-powered substitutes.

Which all occurred way before any of the things like "single-sign-on" got popular among those who didn't seem to know any better. The second this appeared it was easily recognized as one of the many consumer/entertainment features that must be disabled across every bit of any serious corporate network.

Also best disabled on any home computer before it is allowed to touch the internet.

There was no forthcoming mitigation; all Microsoft leadership could do was throw up their hands. After all, there were insurmountable reasons why such a threat could not be overcome.

>it required customers to turn off one of Microsoft’s most convenient and popular features:

Like any other office no-brainer:

>the ability to access nearly every program used at work with a single logon.

Duh.


Proactive methodology: seatbelts, reactive methodology: hospitalized with traumatic brain injury.

The problem is more reactive environments take a Russian Roulette gamble on potentially unrecoverable catastrophes before taking action.

(Proactivity is more expensive than clicking a seatbelt.)


It’s a market. There’s no demand for security. How often has the average Joe had one of their online accounts hacked or their credit card details stolen in 2024?

Obviously the most effective way of incentivizing companies to focus on security is NSA assembling a team to hack important companies, create real harm and accompanying press releases. Ah sorry, I meant Russian hacker news.


> If customers were willing to evaluate security

Many big, famous firms (especially Microsoft) would not exist


I may be off, but to me as an affected outsider (user), the continuing insistence on using passwords after decades (yes, several decades) of problems and proven vulnerability, and then 'mitigating' with a second line of 'defense' built on the very fragile and non-transparent smartphone infrastructure instead of doing real reforms, is a sign of not giving a faint fack.


> an overwhelming focus on compliance rather than actual good security practices

Yes, this is sad and mostly a waste of time.

However (and perhaps it is what you meant) this is a direct reaction to the lack of that cultural shift towards caring about security.

So security teams are mostly left with two choices. One, argue for building secure products because security matters (and be laughed out of the room). Or two, argue for compliance with what the auditors require, which at least moves the needle a tiny bit toward security (sometimes).


I read a quote once that a CISO's job was to do enough public talks that when their company inevitably got popped because nobody values security, they've got their next job lined up already.


Did you buy the more expensive lock for your house? Are your doors fortified, if they are why isn't the steel an inch thicker?

Do you also choose having money over security? Sounds like the government also chose having a more productive work force, etc, over higher costs and lower productivity.


> why isn't the steel an inch thicker?

In our analysis we determined that if the steel doors were thicker it would hinder our team of ex-special-forces security guards from operating their bazookas effectively in the event a suspicious person is spotted. Unfortunately it’s all too common that potentially dangerous fugitives on the run are trying to blend in as “mail carriers” and “neighbors on a walk”. Anyway, the auditors relented on the steel door issue, but then hammered us on why we didn’t have any tanks moving in formation in the front yard as a deterrent. In fairness, the FedEx guy made it all the way to our front door in two separate incidents last week. So the auditors have a point.


Not sure what point you are trying to make - for one, I don’t keep anything valuable in my house. Two, I have adequate security measures for the threats I am likely to deal with - I have cameras, locks on all windows and doors, and I have alarms.

The rough security/compliance world equivalent is a checklist that says “Do you lock your doors every night?” and you say “yea I do” regardless of whether or not you even have a lock or what kind it is, and they say “ok cool.”

It’s a false dichotomy that you need to choose between security and productivity.


It isn’t. If it was a false dichotomy everyone would just work from airgapped systems because there’s no trade off.


It's common knowledge that clipboard audits of the perfunctory type skew towards security theater and are the most likely type to be performed, because they're cheap and mostly automatable.

OTOH, a few SCAP baselines I've seen contain good shit.

Standardization and change control with deep, vigilant internal and external review help, because infosec is a cross-cutting concern requiring holistic, defense-in-depth controls, checks, and application. Also, avoid a Tragedy of the Commons scenario originating from an attitude of "it's everyone's responsibility" by having a dedicated security team with the resources, authority, and accountability to push back against unsafe practices and to monitor and remediate problems.


There is nothing wrong with processes per se. From civil engineering to automotive to aviation, there is a tangible outcome to all the laborious audits and paperwork.

These systems are a lot safer after regulations were put in place, however onerous and ineffective those regulations may seem.


What would this even look like in software?

I always wonder if software is different than physical construction, or if software is just less mature of a discipline.

In software, we can’t estimate projects accurately and consistently. We have to build a few to throw away just to get a better (yet still incomplete) picture of the problem we’re trying to solve.

Imagine if the people building your house had to build half of it and then start over. Maybe twice.

That never happens in physical construction. Maybe something has to be redone because someone made a mistake, but almost never due to not understanding the problem. So what’s different about software?


Correctness checking in software already exists for mission-critical applications and is seen in sectors like spacecraft and avionics, and to a degree in core finance, etc.

Development in those fields is a lot slower and very conservative, and it is by no means perfect; it is a matter of culture, regulation, and what you are ready to spend.

Project management challenges should not be conflated with product quality. Take JWST: a notoriously hard project to manage for costs or timelines, but the product quality was perfect. If we can launch a telescope like that accurately the first time, we can build software well, if we can afford it.

> Imagine if the people building your house had to build half of it and then start over. Maybe twice.

No need to imagine; I have seen people do that all the time. If you have the money you can afford endless remodeling, and some people actually do that.

It all comes down to the appetite to spend on good quality and the culture to do so. In the era of low-code/no-code/AI code, or off-shoring before that, there is constant downward pressure on costs; quality and security are the trade-offs.


In the profit-center view, everything is either a cost center or a profit center. And it is nearly impossible to get anyone to truly care about a "cost center".


In my experience, the conflict in many bigger orgs isn't even on the cost vs profit axis, it's on the tangible vs non-tangible axis. It's a lot easier for middle managers to show they did well if they deliver customer-impacting features than a nebulous "improved security". This is true even when higher-up management actually wants to invest in security.


What if the company is providing only cybersecurity-related services? Could it be that, in this case, everything is on the profit side?


Sure, but to the client hiring them, it's a cost. We'll take the basic compliance package please, no need for any of the gold tier high security features.


"...because our executives won't get thrown into prison as long as they check all the compliance boxes. In fact, they won't get thrown into prison even if they don't check the compliance boxes, but that would be a minor nuisance, so we'll take basic compliance."


Precisely this


> The misaligned incentives between security and profit

Cynically, there is no incentive for security; there is ONLY profit. Security comes into play only where it can increase profit, it's a second order effect. (Of course, there are legal and regulatory pressures here for it as well.)


it was like that in the 90s too.

until people like the Cult of the Dead Cow started to both sell the solutions and give out the tools to exploit everyone not implementing the solutions.

today things like the DMCA actually protect the maliciously incompetent, and businesses which don't take advantage of that are fools.


> an overwhelming focus on compliance rather than actual good security practices

I'm an application security engineer. I find that it depends widely on the company. You're right that compliance is purely just a checklist and doesn't actually do much for security. At best, it slows down a determined internal attacker; e.g., a developer can't install a back door, since code reviews are enforced by the SCM before merging is allowed. But all the ISO 27001 and SOC 2 audits in the world won't prevent trivial attacks like SQL injection.

So the actual security depends on how much buy-in the AppSec team can get from project management. I've had companies where I point out an obviously exploitable flaw that can easily cause DoS, and with some determination could get RCE, and I get radio silence. Others, I point out a flaw where I say "It's incredibly unlikely to be exploitable, and attempts to exploit would require millions of requests that would raise alarms, but if someone is determined enough..." and project management immediately assigned the ticket and it was fixed within a week.

I can tell you one thing that's not doing any favors is overly zealous penetration testers that feel like they need to report SOMETHING, so they invent something that's not an issue. For example, in one app I worked on, after logging in, the browser would make an API call to get information about the current user, including its role. The pentester used Burp Suite to alter the response to that call to change the role to "admin", and sure enough, the web page would show the user role as "admin", so the pentester reported this as a privilege escalation. They clearly didn't go on to the next step of trying to do something as admin, though, because if they had, they'd see the backend still enforces proper RBAC. Changing that role to "admin" essentially just made all the disabled buttons/functionality in the web app light up, but trying to do anything would throw 403 Forbidden.
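
For anyone wondering why that finding was a non-issue: the role field the pentester edited only drives what the UI renders, while the backend looks the role up from its own session state on every call. A toy sketch of that kind of server-side check (the endpoint paths, roles, and session store are invented for illustration):

    from http import HTTPStatus

    SESSIONS = {"session-123": {"user": "bob", "role": "viewer"}}  # server-side source of truth
    REQUIRED_ROLE = {"/api/users/delete": "admin", "/api/reports": "viewer"}

    def handle(session_id: str, path: str, claimed_role: str) -> int:
        session = SESSIONS.get(session_id)
        if session is None:
            return HTTPStatus.UNAUTHORIZED  # no valid session at all
        # claimed_role -- whatever Burp rewrote in the earlier response -- is never consulted.
        if session["role"] != REQUIRED_ROLE.get(path, "admin"):
            return HTTPStatus.FORBIDDEN     # the UI may say "admin", the server still says 403
        return HTTPStatus.OK

    assert handle("session-123", "/api/users/delete", claimed_role="admin") == 403
    assert handle("session-123", "/api/reports", claimed_role="admin") == 200

If tampering with that response had also unlocked real state changes, it would have been a legitimate privilege escalation; it didn't, so it wasn't.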

But I digress...

> The misaligned incentives between security and profit, especially in public companies, is not really a fixable problem without a massive cultural shift.

The EU seems to have figured it out, but the USA is a hypercapitalist hell-hole. It's such a shame that the population is mostly convinced that any regulation is bad and an attack on freedom. I roll my eyes at the Libertarians that claim that the Free Market(tm) will punish bad actors while the worst actors are rising to the top. Bad acting is profitable.


> The pentester used Burp Suite...

We run a SaaS and we get a ton of these. Most of them are absolutely inane and a complete waste of our time, having to look through their poorly written emails begging for 50/100 USD payouts.

We pejoratively refer to them as "Burp Babies", the equivalent of "script kiddies".


I think that when companies sell to the government, there is so much money to be made, and such a huge PR boost, that they are incentivized to cover up the naughty bits (a certain airframe manufacturer comes to mind).

It can mean anything from concealing slightly embarrassing stuff, to massive, systemic, deliberate, fraud; sometimes, the whole spectrum, over time.

It often seems to encourage a basic corrosion of Integrity and Ethics, at a fundamental cultural level.

When leaders say "Make Security|Quality a priority," but don't actually incentivize it, they set the stage.

For example, routinely (as in what is done every day) rewarding or punishing, based on monetary targets, vs. punishing one or two low-level people, every now and then (when caught), says it all. They are serious about money, and not serious at all, about Security|Quality.

If you want to meet a goal, you need to incentivize it. Carrots work better than sticks. Sales people get a lot of stress, and can get fired easily, but they can also make a great deal of money, if they succeed. Security people don't get fired, if they succeed, and get fired, if they don't. Often, the result of good work is ... nothing ... No breaches, no disasters, no drama. Hard to measure, as well. How to quantify an absence?

Sales: Lots of carrot, and the same stick as everyone else gets. Easy to measure, too.

Security: No carrot. All stick. The stick can be a really big stick, too; with nails driven through it.

I'm really not sure what the answer is, but it's cultural, and cultural change is always the most difficult thing to change.


I think this is sort of it but I don't think it's the carrot that's the problem here. I believe it's the process and yeah ultimately the culture.

I don't think you want sales concerned about security; their focus should be, and only be, on growth. The problem is that if you don't give jurisdiction and power to the other side to actually say "no, this priority (a security fix) goes in before work is done on this new feature", then you have an imbalanced system.

If the project manager who is incentivized toward growth is the decision-maker for deciding what is prioritized, well of course naturally you'll have the PM choosing growth over security.

Process needs fixing, give more agency and jurisdiction to the other side to effect change. It's not like security doesn't see what the issues are, it's just the fixes are not prioritized and the culture and process isn't balanced between both.


You're not going to like hearing about regulatory capture...

There are pretty significant incentives on the government's side (or at least for the individual decisionmaker's career) to also see the deal go through.

Both sides want the deal to go through, both sides have motive to hide flaws unless end users will find out before they retire.




> “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,” the company’s CEO, Satya Nadella, told employees.

Satya's model of making security a priority at Microsoft:

- Cram ads in every nook and corner of Windows. Left, right, centre, back, front, everywhere. What else is an operating system for?

- Install a recorder which records everything you do. For the benefit of users of course - you know, what if a user missed an ad and wants to go back and see what they missed.

- Send a mail to your employees and tell them "Do security". Mission accomplished - Microsoft is now the most secure platform.


The Microsoft bribes scandal broke not too long after I had to take the "hey don't do bribes" training at Microsoft.

That event really drove home for me the fact that all of the trainings, emails, processes, etc. are mostly plausible deniability. There are people who care about security at MS. I know, I've met them, but for the most part all of this exists so that Satya can plausibly say in court or in front of congress, "well we told them to do security better. This is clearly the fault of product teams or individual contributors, not Microsoft policy and incentives."


I dunno, that’s a pretty cynical take. Isn’t it just as plausible that they became aware of the bribes internally and were trying to curtail them when the scandal broke out? Or maybe the “don’t do bribes” training actually worked enough for someone to whistleblow even if official internal channels failed? Those who are doing wrong often try to stymie others from making positive changes out of fear, greed, etc.

Edit: I just want to add that there are things to be cynical about - I’m not completely naive. If it’s your legal department heading up the training then you can be pretty sure that there was a cause for it.


Yes, massive companies are a nest of conflicting priorities. The sales team wants to do whatever it takes to win the deal, and the legal team wants everyone to behave ethically at all times. The board wants to be shocked(!) when it turns out those goals are in conflict, with the ethical side sometimes losing out, to remove any personal risk to themselves.


> legal team wants everyone to behave ethically at all times

do you really believe that? compliance under scrutiny, more like it


Having worked with many lawyers... for the most part, yeah. Legal wants you to behave ethically at all times, not because they necessarily have some ideological commitment to ethics (though some do), but because it keeps the company out of lawsuits.

The overwhelming goal of a company's legal department is "don't get sued", followed by "if sued, lose as little money/leverage as possible".

In general the lawyer in the room is going to be far more risk-averse than the engineers, product people, sales people, or marketers.

The trick is that, outside of some limited circumstances, the legal department at a company is not the final say. Many lawyers who "go in house" (i.e., quit a private outside firm and go work directly for a company) find this frustrating. They come into a room, say "don't do that", and then a few weeks/months/years later someone did it and now they have to prepare for a lawsuit.


The best job is sitting around and doing nothing. So ideally yes.

But sure, ethically speaking when things get heated they will exploit every loophole they can find to avoid liability. So, lawful evil?


Most corporate law guidance is about risk mitigation, not about ethics. Less activity generally translates to less risk.

You can see a similar phenomenon with security professionals. True, the only secure computer is one disconnected from the Internet, turned off, put in a Faraday cage, on the moon, under armed guard - but that's not useful.


> under armed guard

Get rid of the guard. They might turn the computer on.


> The best job is sitting around and doing nothing.

That sounds like a terrible job.


Well you can take it as literally or figuratively as you wish. Depends on the person.


Even if everyone in the company magically complied with the wishes of the legal department, they would still have work to do. Defending the company against frivolous lawsuits and incoming regulations, suing competitors and other bad actors outside of the company, writing and evaluating contracts, and any internal legal consultation needed.


That doesn't seem plausible, because you can't stop bribery by telling people that bribery is against the rules. Everybody already knows that.

If they became aware of bribery and genuinely wanted to stop it, the way is to publicly punish the culprits as harshly as they can, to demonstrate to others that enforcement of the rules can happen.


Yes and no. You might not even realize that what you did constitutes giving or receiving a bribe. What cracks me up though is that all large US megacorps give tens of millions of dollars in thinly veiled bribes to officials each year, as they browbeat their employees into not accepting a god damn fruit basket from a thankful client.


Maybe. However such training is essentially considered mandatory compliance at any publicly traded company once you reach a certain size, especially if you sell to the government, and IMO probably not related to any specific event they became aware of.

I've had to do the same mandatory anti-bribing-public-officials training annually at US companies a fraction the size of Microsoft. The anti-bribe training is so common at large companies in the US that there are companies selling ready-made, one-size-fits-all training videos specifically on this topic, which are then usually the thing the employee has to sit through annually.

In my experience, different cultures have different feelings on the moral failings of bribes. Some of my colleagues grew up in countries where it is a common business practice, it probably makes sense for large orgs with global employee base to have to establish some kind of baseline for acceptable business practices. Similarly, I know several people who came to study computer science in the US and tried to bribe police officers upon being pulled over for speeding, simply because it's how you handle the matter where they grew up.


Probably neither; "don't do bribes" training is standard onboarding procedure at any Fortune 500 company. Just ironic timing from OP's POV.


But this is exactly why it's standard procedure. I worked for a huge Credit Reference Agency and it was very obvious that this is ass covering.

Sarah and Bob in the New York Office of Huge Corp must take the training so that the CEO can swear all his employees know not to bribe people. In the event that Manuel, who is given $100 000 per week of company money to bribe the locals in Melonistan so that they don't interfere with Huge Corp's operations is actually brought before the government and forced to spill the beans the CEO will insist they had no idea and some Huge Corp minion gets sacrificed. Manuel will be replaced, Melonistan will be assured quietly that his replacement will provide make up money ASAP.

In Arms this is even worse, because there it's secretly government policy to bribe people, even though it's also illegal. So then sometimes even if you can prove there was a crime, the government will say "We'll take that evidence thank you very much" and poof, the crime disappears, if you make too much fuss you'll be made to disappear too.


Not just onboarding. Most, if not all, large companies waste at least an hour of their employees time on this per year, while themselves bribing politicians in DC.


It was, in fact, a story arc in an at the time recent-ish season of SBC[0].

[0] Microsoft's yearly training that is done in the form of a TV drama about MS employees facing ethical dilemmas


An hour? My annual training is typically about 6 hours of drudgery, and often about 2/3rds repeat courses from years previous. Great fun.


That's just the ethics training, depending on your role there's much more than that.


> dunno, that’s a pretty cynical take

Just days ago a major US corporation was found guilty of hiring death squads in Colombia. Literally to murder people.

Why do we have this common illusion that corporations will not stoop to the dirtiest crimes they can get away with?

https://www.bbc.com/news/articles/c6pprpd3x96o


The article you linked says that Chiquita was extorted into illegally paying money to a Colombian death squad, who also murdered people, and were ordered to pay restitution to the victims' families. It doesn't say that they paid the death squad to murder people on Chiquita's behalf.


Microsoft has for over two decades been one of the largest and most sophisticated employers of security talent in the industry, and for a run of about 8 years probably singlehandedly created the market for vulnerability research by contracting out to vulnerability research vendors.

Leadership at Microsoft is different today than when the process of Microsoft's security maturation took place, but I'll note that through that whole time nerd message boards relentlessly accused them of being performative and naive about security.


It would help if there weren't all these employees and ex-employees stepping forward to talk about how Microsoft is performative and naive about security. I won't go as far as to say that, but I will say I don't think my incentives as an IC lined up with the security-focused mindset that company execs tout publicly.


I don't think anything is going to help here; it's just a message board fixity that companies like Microsoft are unserious about security.


Same Microsoft got their master authentication secret stolen and they still don't know how that happened.

It's also turned out that it's impossible to revoke or cycle that secret. The whole issue is so hushed now, I don't know what happened at the end.

Same Microsoft left one of their license golden keys on some installation media, too.

Even if they're serious about security, these events don't look good.


I don't know what "looks good" means. Every major tech company has had multiple bad things happen that would look very bad to people on a message board.


None of them got their two different, non-revocable master keys stolen, I may say.


It's been a while now but at one point, just about every giant tech company simply make install'ed a key-material-leaking TLS bug on just about every endpoint they ran. The bug was introduced by, effectively, some guy on the internet. It implemented a feature statistically nobody was going to use.

It's trivial to re-frame all sorts of mishaps as evidence of unseriousness about security, especially if done selectively and in hindsight. It doesn't really tell you much of anything meaningful.


I remember that incident.

I think there's a difference between compiling and installing buggy software and developing the whole infrastructure yourself on top of the operating system that you solely develop and build.

But that's me.


Microsoft isn't a single entity! Like any large corporation, there are many teams and people doing great work, and there are many teams and people incentivized to downplay that work.


Yes, hence why I take all those company values trainings as Bull******.


To be fair, it's not really possible to come up with good policy to handle this at scale. It would be too intrusive to require employees to divulge their private financial accounts (and near impossible to audit that the employee has truly divulged all their financial accounts), and the more internal controls you put in place, the slower the deal-making gets, with no guarantee of good behavior.


At higher levels compensation is now tied to security outcomes. This is as committed as it gets. Definitely not theater.


It will still be theater. Security outcomes will be gamed.


Eh. For the most part, the trainings can be taken at face value. Even if the management's dealings with governments and partners are questionable, no company wants random employees accepting personal kickbacks from vendors.

There's a liability avoidance component to trainings, but mostly for non-business misconduct. For example, for sexual harassment, the company will say they tried everything they could to explain to employees that this is not OK, and the perpetrator alone should be financially liable for what happened. That defense is a lot less useful in business dealings where the company benefits, though.


To be fair to Satya, every leader should be judged on what they do, not what they say. This isn't a Microsoft or Satya problem; pick a large corporation and you'll find examples of this behavior everywhere.

Words in an email hold absolutely no weight, when leaders choose to trade security for something else that's all employees need to know.


In particular when you need to answer to shareholders and can be voted out of your position/company. I don't pretend that Microsoft's past hasn't been an issue, but if we compare the past to the present, Satya has had somewhat of a positive impact (although I know there's a lot behind the scenes that I'll never learn about, as is true for most of us). It's good to be critical of every company, otherwise the end users get rolled over.


I have no broad evidence of this, but I suspect that the more beginner-friendly Linuxes are guilty of a lot of the sins that you laid out here. I seem to remember some controversy with Canonical recording your searches when hitting the super key, and Ubuntu having Amazon ads built in by default.

People who love to geek out about computers can of course install Arch or Gentoo or NixOS Minimal and then audit the packages that they're installing to see that there's no obvious security violations, but it's unrealistic to think that most non-software-engineer people are going to do that.

I really don't know how to fix this problem; there will always be an incentive for Microsoft (and every other company) to plaster as many ads as they think that can get away with, as well as collecting as much data as possible. I don't know that I would support regulation on this, but I don't know what else could be done.


> I seem to remember some controversy with Canonical recording your searches when hitting the super key, and Ubuntu having Amazon ads built in by default.

It also went the other way around with Microsoft. If you deployed an Ubuntu VM in Azure, they contacted you on LinkedIn to offer commercial support.

Not joking: https://www.theregister.com/2021/02/11/microsoft_azure_ubunt...


Debian is a perfectly reasonable choice for casual Linux users. Ubuntu's supposed usability improvements over Debian are greatly exaggerated. It's mostly just marketing.


Fair enough. I haven't used Debian in quite awhile (I think since 2009 or so?), so I can't speak to current stuff, but I do remember it being pretty hard to install then. I'm sure they have refined it considerably since then, and of course I am fifteen years more experienced now than I was.

Personally it's hard for me to go back after I accepted the dogma of NixOS, but maybe if I manage to talk my parents into using Linux I'll install Debian for them.


> do remember it being pretty hard to install then.

It has always been easier than windows, which has never stopped the millions of people who used to format their drive and reinstall every few years after suffering from slowdowns.


install arch. not even kidding.

make a "shutdown" button on the desktop that locks everything and does a full upgrade.

any issue is solved with "try tomorrow after a reboot". you'd be surprised how fast fixes arrive on rolling distros


I do NixOS-minimal. As far as I'm aware it doesn't really add any runtime overhead in comparison to Arch, the package manager is generally quite good at figuring out which changes are going to break your system, and everything is snapshotted on every rebuild so for the most part I can be fearless. Doing a full upgrade is generally as straightforward as pointing to the latest version's repo and doing something like `sudo nixos-rebuild switch --upgrade`.

That works great for a geeky dude like me, but I don't think I'll ever be able to convince my parents on the beauty of NixOS, so having a straightforward mypackage.deb thing that they can download and click on to install stuff probably would be an easier sell.

I ran Arch for about a year, and I liked it, but I had to abuse the `snapper` tool because I was constantly breaking things with the video driver and the like. It worked but I personally think that NixOS's model is just more elegant.


you still need an os. and i fail to see how nix would make the video driver problem any better.

the problem with running debian is that fixes are often not backported, specifically for things end users will care about, like libre office


> you still need an os. and i fail to see how nix would make the video driver problem any better.

That's actually easy to answer; video drivers can be really finicky to get working. If you screw it up, it's very easy to get into a state where you have no GUI. Nowadays I am proficient enough to work my way around the command line and I probably could fix a bad driver, but 13 years ago that wasn't really the case, and if I broke the GUI there was a risk where I'd have to nuke the machine and start from scratch. I've also had issues where updating the kernel breaks drivers, and I wasn't able to figure out how to downgrade it.

With NixOS, since adding packages and boot parameters and the like require a rebuilding of the configuration.nix, and each rebuild takes a snapshot, if something is broken all I have to do is reboot and choose a previous generation to get it into a working state, and I can debug the configuration from there.

This actually happened somewhat recently; I had a NixOS server that I was controlling via SSH, and I broke its networking support. It's kind of annoying to control a server if you can't connect to it, but all I had to do was plug in a keyboard and a portable monitor, reboot, select a previous generation, fix the broken change, and rebuild. The entire process took like fifteen minutes.

> the problem with running debian is that fixes are often not backported, specifically for things end users will care about, like libre office

Are there not more evergreen releases of Debian?


I'm willing to bet that you knowing NixOS is going to make Debian installation a completely easy and smooth experience. If you can use Debian stable, you are going to setup a rock solid system for your parents. If you can start the installation by using an Ethernet cable instead of wireless, I think you will have an easier time, but once you get all the updates complete you should be able to switch over to wireless fairly easily. With Debian stable, it really doesn't take very much time to figure out if you're going to have hardware issues within an hour or so of beginning installation. This is coming from someone that seems to have less knowledge than you do about Linux, and has also installed systems for people that were not very forgiving when things go wrong. I suggest starting with Debian stable because security is backported, and if you can get it running within approximately an hour, you should be good for quite a while. That's not to discount what everyone else has said here, just my experience as someone that is closer to a "consumer" level of Debian usage than a sysop. I did get into Linux with Red Hat in the 90's, and have dealt with the pain of manual configuration, but haven't had to deal with it in over 10 years now. I mostly deal with Windows and .NET development now, but am looking to get back into Linux now that I can make use of .NET and drop server costs and resource usage.


Oh I have no doubt that I could easily set up Debian now if I wanted to. Since that last time I tried it I've installed Arch and Gentoo and Ubuntu Server (converted into a desktop OS) and ran through the Linux From Scratch book once. I'm pretty sure that I could get the 2009 version I had trouble with as a 19 year old working pretty easily now.

Sadly, I don't think I'll be able to convince my parents to switch to Linux in the super near future; I need to work on them for awhile and maybe I can convince my dad (though he's pretty entrenched in Windows).


> Are there not more evergreen releases of Debian?

Debian sid or "unstable" is a perfectly fine rolling release distro.


> ... the problem with running debian is that fixes are often not backported, specifically for things end users will care about, like libre office

I don't disagree completely with your general sentiment, but the latest version of Libreoffice is available today in Bookworm backports.


I agree, I discovered Ubuntu around 2003/2004 when they were giving out free CDs to anyone that requested them. Once I discovered that Ubuntu was based on Debian, I started using Debian and wouldn't look back. Even if you need something that only Ubuntu provides, you can get the .deb package for it and install it yourself. I prefer relying on Debian stable if I need to maintain anything for more than a year or two (and am realistic that software usually fails fast, or hangs around for a long time). It's possible that my knowledge is dated at this point, but I always preferred working with the Debian filesystem and tools more than Red Hat/Fedora's filesystem and tools (rpm and yum). Apt and apt-get somehow "clicked" with me more than Red Hat's tools, and I even took multiple classes on Red Hat administration and general usage (although do far less administration in comparison to software development than I used to do in the early 2000's to mid-2010's).


> can of course install Arch or Gentoo or NixOS Minimal and then audit the packages that they're installing to see that there's no obvious security violations, but it's unrealistic to think that most non-software-engineer people are going to do that.

It's a fantasy to think that random devs can audit kernel/security code. No single person can. Too many lines of code to audit (that you didn't write yourself). Even if you hired a team, by the time the team does the audit, the goalposts have moved with new source code.


Sorry, I guess I didn't really mean to imply I was going to dissect everything line by line, but I can at least look to see if every package in there is directly open-source and if there are any packages that are being pulled in that are frequent security concerns.

ETA: I know I can technically do that with Ubuntu or Fedora or OpenSUSE as well, it's not like it's a secret which packages they include, but what I like about NixOS Minimal or Arch is that I have to explicitly add every package I want. There are transitive dependencies obviously, so there of course can still be stuff on my machine I'm not happy with, but I still think it's better.


> if there are any packages that are being pulled in that are frequent security concerns.

As an individual, do you think you can do that? I know a lot of packages with security concerns where CVEs are never issued. You just need to go to their PRs and luck into finding descriptions of a security fix. I doubt this would scale for a given individual.


It's not surprising that when a Linux distribution was taken over by a capitalistic firm, it decided to forgo good values and instead prioritized profits over everything else.

> I really don't know how to fix this problem

Stop using software made by companies that do bad things. Improve the software that doesn't.


I don't really think that's realistic. I can of course use software from non-profits or something at home, but we all work for a living, and every single job I've had has relied on software from a for-profit company in one way or another.

I guess I don't have to be an engineer, but even if I were to go be a cashier at Taco Bell or something, I would still be stuck using a proprietary POS system.

Unless I want to go live in a unabomber shed off the grid, I'm probably going to be stuck using software made by companies that do bad things. The software world is overwhelmingly run by Microsoft, Apple, Google, and Oracle (and probably a few others I'm missing), all of which do bad stuff all the time.


> Stop using software made by companies that do bad things. Improve the software that doesn't

Or stop buying their stock... but that is a difficult thing to embrace. As we know, these companies are very profitable.


I'm not even sure how that's really realistic in the US at least.

I think a lot of people with full-time desk jobs have a 401k or a Roth IRA, and most of those are stock-based (which is really the only way to make sure your money doesn't decay in value due to inflation), and those are generally going to be stuff like total-indexes or S&P500-based index funds.

There's probably technically something else you could peg it to, so that's probably not strictly true, but I think an awful lot of people are sort of buying Apple stock without fully realizing it.


I mean, if you have no evidence of this, why even post such an (incorrect) conspiracy theory comment?


Well the Amazon ads in Ubuntu absolutely did happen, as well as the searches with the super key. [1]

I'll admit it's maybe a bit of an extrapolation to assume that they're as bad as Microsoft, which is why I disclosed that I didn't have a ton of evidence for this.

[1] https://www.gnu.org/philosophy/ubuntu-spyware.en.html I realize that GNU is sort of conspiratorial in its own right, but at least one entity seemed to agree that there's problems with it.


Well, here are the facts (I was an insider at the time, and this is my testimony).

Searches were anonymized and sent through Canonical servers to provide extended search result sets. This was configurable and could be disabled. Canonical of course had your IP address so they could reply, just like any and every HTTP server does. Your search query was not stored anywhere or aggregated, and it was not associated back to the originating IP address except to reply. Your privacy was respected and protected at all times.

The Amazon search did appear as a plugin in an early prerelease. It was never shipped in a released Ubuntu.

The goal was to make things as easy as possible, even for the technically averse (who were still commonplace a decade ago), while still respecting and protecting your privacy.

Of course, no matter what you do, someone is going to scream for everyone to come witness the oppression inherent in the system. We did it anyway with the expectation of baseless knee-jerk outcry and we were not disappointed.


Yeah, fair enough, I'll admit what I said was probably reductive, and if you worked on it you certainly know a lot more than I do; obviously the engineers at Canonical aren't idiots and they're not mustache-twirling supervillains. Just to be clear, I did run Ubuntu on my laptop for quite awhile (for about two years starting immediately after ZFS got integrated support), and I did like it, so I don't mean to suggest it was a terrible product.

I guess I'm just always worried about for-profit companies, because their goal isn't necessarily always aligned with the customer's best interest.


I agree Microsoft is a problem. I just wish you tech guys took an equally critical stance towards Google, a genuine ad company.


And Apple, the upstart ("stealth mode") ad company.


The upstart ad company that spent years and tens of billions of dollars to develop a privacy focused AI in the cloud platform? The same upstart that offers encrypted cloud storage that even it can’t decrypt? Congrats on the false equivalency argument.

Guys like you do yourself a disservice. No one takes your hyperbolic statements seriously. Keep posting this nonsense if it makes you feel better.


It's not hyperbolic, I genuinely believe (and there is plenty of evidence suggesting it as well) that Apple is building a massive ad empire.

BTW, related to all their "encrypted" cloud, if the CCP having the decryption key is not enough to convince you of the BS, they also clearly showed their hand a couple years back when they wanted to introduce local on-device scanning of customers' pictures and comparing against an opaque database of hashes produced by nameless government-connected entities, including uploading the unencrypted pictures for review by humans who'd later send them to authorities. It took massive uproar to change that direction (but not before the same Craig guy who's now talking about the "private cloud" took his time to educate us "screeching minority" about how we misunderstood the thing - e.g. "you're holding it wrong").

So yes, they have amazing PR, but they're just as bad (if not worse) than the likes of Microsoft and Google.



> you know, what if a user missed an ad and wants to go back and see what they missed.

Unrelated, and maybe this actually exists, but with the rise of LED billboards, there has been more than one occasion where a billboard was displaying something and it cycled too fast, or the print was too small.

I would actually be interested in visiting the billboard's website, clicking on the geographic billboard location, and seeing what it's been showing.


> you know, what if a user missed an ad and wants to go back and see what they missed

I have meetings with adtech guys and this gets pitched every time. Along with "a way to save ads so you can watch them again at home later!" And "alexa enable ads that you can talk to!"


Sheesh you guys are annoying.

- I do not see ads in “every nook and corner of Windows” and neither do you.

- I do not have a recorder installed on my Windows machines and neither do you.

- no one qualified to make that statement has said that Microsoft is the most secure platform.

It is so hard to listen to anyone who exaggerates at this level. If anything, it drives interest in Microsoft because these are all obviously false statements and some readers will wonder what your true motive is. You just raise suspicion in yourself.

At least you used a new account to distance yourself from any other identities you may have here. In fact I would say that was the only smart move in your entire comment.

Anyway, this is a damning revelation by the whistleblower and I hope Microsoft feels a good amount of pain because of it. NEVER make any decision with money as your sole input. It will always be a bad decision, and it’s just a matter of time until that decision bites you or someone you care about.


You're getting downvoted for your tone most likely, but I agree with this statement:

> - I do not see ads in “every nook and corner of Windows” and neither do you.

As a professional "Windows user" logging 8+ hours a day on my PC, I see no ads. Unless you count "OneDrive" ads which in that case, would mean I see iCloud ads on my iPhone too. I'm fine with classifying these as ads, but I'm certainly not seeing them "in every nook".

Are these ads only bundled with certain versions of Windows?

Disclaimer: I do not work for Microsoft or Apple.


I don't get ads in Windows 10 or 11 Pro. I don't think you need Enterprise.

When I first install Windows, I turn off every single feature that you're presented with (advertising ID, the ink workspace/writing recognition, safe search suggestions in the browser, etc.) and I haven't had any ads pop up on me (or software installed that I didn't specifically install myself). I turn off Cortana and don't use it, and I have search set to only search my local machine and not use Bing. I really don't feel like I go out of my way to turn these "features" off: I can get a Windows machine running in under an hour, and then it's hours of updates.

The original screen that asks you about the advertising ID and everything I listed above does come back with a major Windows update, but I just turn everything off again and I'm back to no ads. I work in Windows and .NET development, but I still don't deal with any of these annoying issues.


Same thing here, and I'm running Home (22631.3737). I turn most "things" off, but nothing insane and I don't see any ads.


> Are these ads only bundled with a certain versions of Windows?

Yes. Enterprise customers can get builds without them, but home users can't.


I keep hearing that, but I still see plenty of ads (and other dark patterns like data collection you can't turn off) in Enterprise builds.

I heard Win10 LTSC was somewhat better so I'm hoping there will be a Win11 LTSC coming out at some time with longer support.


You can absolutely turn off everything, but you don’t change those settings on the Win10 enterprise machines themselves. You can, technically, but it’s registry stuff. You put the machines in an Active Directory domain and you configure group policy to turn those things off, and you push those policies to the workstations in the domain.

This is 100% normal everyday stuff for enterprise customers, and an enormous pain in the ass for someone who pirates Windows Enterprise.


I'm running Windows 11 Home 23H2 22631.3737

No ads - like anywhere.


Hey Jer, please review the CELA policy about disclosing your employment connection to Microsoft.


I don’t work for Microsoft, and I never have.


From my experience in big tech they would be categorized as privacy concerns not security. Might just be different conceptual models here.


About a missed ad: am I the only one who would rewatch an ad on YouTube? There is no easy way to do it.


Say one thing, do another.


As per usual, executive platitudes around "security first" don't matter.

If you pay and promote people for features, and don't reward security culture, people are not dumb: they and the management layers will optimize for that.

I don't know how to design incentives to solve for this, but this is always going to be the way it is.


I do.

It's law, regulation and liability.

Until heads roll, until someone is punished, likely nothing will happen.


This is a non-solution, and automatic "head rolling" and punishments will only reduce the actual, meaningful accumulation of experience - the mean time between major breaches like this is long enough and variable enough that the next person would likely be equally incompetent, inexperienced and inattentive.

There's no easy solution, because it's an inherently very difficult problem - making the correct trade-off between security and everything else for society, and determining exactly where the line needs to be drawn, are extremely difficult problems, and no amount of laws and punishments will help with finding the right balance.

I do like what CISA seems to be trying to do, and I think they can do a lot more here - I think we need CSRB or some similar org to get to a place where NTSB is - I think the key value of NTSB for humanity is ensuring that some of the critical knowledge around safety incidents get accumulated and shared across. Right now, learnings from key infosec incidents are not broadly shared in any reasonable timeframe, if ever, and so we repeat the mistake over and over again.


You are completely and totally wrong and fundamentally misunderstand how the world works.

This is old stuff, man, but it always plays out.

SKIN IN THE DAMN GAME is the only thing that matters.

The parties involved don't feel any pain from sucking at security, so they may continue to suck at security. It REALLY is that simple.


I think that it could be "security as a feature"

Usually, a feature is included in a product if the marketing analysis shows that it will grow the business by more than the cost of the feature. Maybe we can try the same idea?

"We identified this vulnerability, and it will impact X % of our customer and Y % will leave (+ reputation damage) so we will loose BIGNUMBER $. However, we can correct it for SMALLNUMBER $ in Z days. Decision ?"


Security shouldn't be seen as a feature, it should be the default.

Advertising something as "secure" SHOULD be seen as silly as advertising it as "doesn't crash". But we're not ready for that, I guess.


It's absolutely hard, but you need to advertise and promote security for it to stay relevant, internally and externally. The moment it becomes the "default" I think the only way is downward.

The marketing dept should do something for that, that's their job. If Apple can tout privacy as a feature, Microsoft can find a way to have security as a shiny feature on their keynote, with internal projects rewarded for increasing security by x% etc.


With the increasing number of breaches over the years, it is 100% a feature. I see it as insurance: ideally nothing happens, but if/when something happens the company should be ready to compensate for damages.


They did that in FTA:

> In the months and years following the SolarWinds attack, Microsoft took a number of actions to mitigate the SAML risk. One of them was a way to efficiently detect fallout from such a hack. The advancement, however, was available only as part of a paid add-on product known as Sentinel.

So you sell me a submarine with screen doors, avoid fixing it for years, cripple internal processes that would fix it, and then you want to charge me for a water alarm? That's chutzpah.


I didn't mean it would be a feature the consumer gets charged for... only that it's a way to present it to top management.


And where do you get those numbers from?

Also identification is one thing, but good security should mean the vulnerability didn't occur in the first place.

Then you also need to get budget for identifying vulnerabilities.

After that you need budget to research how costly the vulnerability could be.

But before getting those budgets you need budget again to propose all of that and data to prove its value.

Unless you use your own time to do all of that or accidentally stumble upon something.

I think the only realistic way to get any sort of budget is if a deep enough incident actually happens. And this will only last for maybe a year, until most of the decision-makers have been rotated out for new ones who only want to deliver features again.


Real security cannot be a feature.

Your complete system design and other features should be based on the idea of ”security first”, if you really want to build secure systems.


> Your complete system design and other features should be based on the idea of ”security first”, if you really want to build secure systems.

One can argue that the most secure system is the one turned off and not used. And I am not talking about devices with built-in batteries.


One can always argue that, but fundamentally security is about limiting the system's use to its purpose and eliminating all unwanted scenarios.

If you need to use the system, you cannot turn it off or not to use it.


Managers are already held accountable for their teams when they underperform. The same should also apply for their security blunders.


> Managers are already held accountable for their teams when they underperform. The same should also apply for their security blunders.

...years and years later


There's a pretty big caveat in this story which I feel is being looked over:

"Disabling seamless SSO would have widespread and unique consequences for government employees, who relied on physical “smart cards” to log onto their devices. Required by federal rules, the cards generated random passwords each time employees signed on. Due to the configuration of the underlying technology, though, removing seamless SSO would mean users could not access the cloud through their smart cards. To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards."

The U.S. Government (USG) is one of MSFT's largest (if not the largest) customers. The user base is enormous, and the AD footprint equally so. I have experience working in this space; the user and roles management is a nightmare with compromised credentials, locked out accounts, and the like. Given the nature of their work, it's a constant target.

The USG has been attempting to move everyone to smart card auth to help mitigate some of these issues. Removing passwords and moving everyone to two-factor auth would greatly reduce their attack surface. They've been pursuing this for years.

So along comes this guy, and he says that, as part of this fix, just tell all of their customers to turn this off.

I don't dispute the danger of the original SAML flaw. But I think Harris is unfairly judging the rest of MSFT's reaction here. He's asking them to turn off two-factor auth across entire agencies. They might as well hand an attacker a set of credentials, given how little effort and time it would take to phish a set off someone.

To reiterate, the flaw in AD FS was bad and needed immediate attention. But the short term mitigation Harris proposes would drastically hurt their security and open tons of customers to attacks of the very sort they were trying to prevent. This story is spun as another instance of a company not caring about security, but I see a "whistleblower" who had a very narrow view of their customers' overall security posture, and threw a fit when this was pointed out to him.

"To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards.

Harris said Morowczynski rejected his idea, saying it wasn’t a viable option."

I would fully expect most government agency Info Sec Systems Managers (ISSMs) to say the same.


I mean... I guess the issue here is more that Microsoft didn't make customers aware of this flaw, and continued to sell the service.

Which... is exactly the article's point. They knew there was no secure way to administer it, and yet sold it anyway.


I'm no defender of Microsoft, but I don't know if I could point to any company which does not put profit over security.


I guess the issue becomes when they say security is the top priority (and have been for two decades), yet all actions point towards it not being so.

> Bill Gates in 2002: "So now, when we face a choice between adding features and resolving security issues, we need to choose security."

https://www.wired.com/2002/01/bill-gates-trustworthy-computi...

> Satya Nadella in 2024: "If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security."

https://www.theverge.com/24148033/satya-nadella-microsoft-se...


Turns out businesses have a stated preference for "nice things for the customer/society" but a revealed preference for money.


Would that be securities fraud, because they're lying to investors?

(Going by Matt Levine's "everything is securities fraud" logic here to see if that might actually change behavior…)


Investors are very happy with profit-over-security choices. Moreover, decisions that maximize profitability with only the short term in mind are also not bad for them, as long as they think they can sell their shares before the consequences hit. A company that does not place profit above other things is not a good company in which to invest money and watch it grow. A company will invest in security only as long as it increases profitability; doing otherwise means not maximizing profits and losing investors. If you are a "security company", that just means you need the security to sell the product and stay profitable. Other companies will have other tradeoffs when choosing how much to invest in security to maximize profitability.


I think securities law usually only applies to things you tell investors? I could be wrong here though, I am not a lawyer.


then the laws need to change so bad security costs companies money.


Obviously, nobody is going to outright admit they put profits above security; indeed, they will often state the opposite. But their closely-held beliefs will shine through when it comes time to make decisions and the outcomes of those decisions are exposed to their customers and to the public.


Does Bill or Satya write code anymore? It could very well be that they consider security the top priority but it's a moot point because they're so removed from operations.

Although I would suspect that you're effectively right in that they either don't have it as a top priority, or think they do but have a revealed preference that they don't. For example, an engineer that does rigorous security testing and finds nothing as well as launches one project gets promoted less often than an engineer that launches two projects and doesn't do rigorous security testing.


Profit is an implicitly assumed first priority for basically every business, otherwise the business wouldn't be around.

I don't know of any company that has profit in their slogan, or in the core values statement, etc.


I don’t put “breathe” at the top of my TODO list, either.


Related to the GPs point, do you know of any company that publicly admits that they chose profit above all else?


Unless you care about your review and promotion, in which case do features.


I genuinely think Proton as a company would prefer to cease to exist rather than offer insecure products. In fact there are a lot of offerings I would use (and pay more for) that they could make but choose not to (like a calendar that is not behind an airtight protocol and could integrate with my regular calendar clients).


Counterpoint: NordVPN.

From day one everyone knew they were FSB puppets, and people are still giving them money.


They are rare, but Mullvad comes to mind immediately. They have made several decisions that directly impacted their bottom line (no recurring subscriptions where they need to keep the customer's credit card on file) to the benefit of their customer's security.


I'm sure there are some companies that realise security (or rather the critical lack of some important aspect of it) can impact profits, but that depends a lot on who their customers are too. Ultimately, if the customers who pay for a vendor's products and services don't value it, then the vendors won't value it either, short of any regulatory or legal requirements that might compel them otherwise. However, given that many large organizations (including governments) are Microsoft customers, it's strange to see in this case. Maybe there's a kind of "it can't happen to us" or "nobody will find out about it" arrogance going on, but they must now be seeing that the reputational damage is likely to have negative impacts, including hurting future profits, down the road.


Microsoft possesses, to put it lightly, a number of government contracts. I think this puts them in a bit of a pickle.


If no company can make security the priority then maybe no company can be trusted with OS development.


Isn’t there a point when a company becomes so big and so impactful to multiple layers of our life, that it should be impossible for them to continue focusing on profit alone?

I’m not talking about regulation per se, but holding humans in charge of such corps more accountable.


I don't think it's going to happen unless we decide to nationalize private services that are vital to people.

Why don't we have a public maps system, or a content sharing platform? Services like google maps/search or youtube by now are part of the infrastructure of our society.

The same way as roads/railways or energy production are publicly owned in many countries the same should happen for digital services. In good parts of Europe railways are publicly built and maintained while the trains are privately owned.


today that means "too big to fail". in wall st it's called "jackpot"


https://en.wikipedia.org/wiki/Lavabit famously shut down rather than compromise its security.


Let's Encrypt

Google Trust Services

Disclaimer: I've worked in both of these :)


Any company of sufficient size will fail to incentivise the things they claim to value at the top. Unfortunately the impacts of decisions (especially during austerity) are poorly understood, so even the best-intentioned will fail once they reach a certain size.


What products do those two companies sell?


Let's Encrypt and Google Trust Services are both CAs.

LE is of course, a non-profit, so maybe this doesn't apply there.

Google Trust Services operates under Google, and is technically "for profit". But no, we did not put profits over security.


they sell market protection. to google.

it makes crawlers much more expensive. makes everyone depend on their CDNs etc.


Are you referring to google trust services? I don't see how that applies to let's encrypt otherwise.


go make a cost analysis of crawling the entire internet once or twice a day on http vs https and report back


lol are you claiming that https is done to make web crawling more expensive?

Wild.


The most pressing issue for Google at the time was telecoms abusing their monopoly over mistyped URLs to dictate the search engine of their choice. But that too.


I'm deeply confused by this?


Agreed it’s deliver value for shareholders >>>>>>>>> everything else


This isn't about Microsoft, per se. This is about the fact that there's no risk for companies that do this, even if they're bidding for government work. Hopefully whistleblowers making these things public will lead to the public putting pressure on their elected officials to actually make some regulations with teeth in this area. I'm not holding my breath, but it is something I consider in the voting booth.


In other words, when faced with an existential threat...

* go bankrupt because we can't be secure

* be less secure and stay in business

...guess which one will almost always win.

Microsoft of course, as a multi-trillion-dollar company has no such threat and there's no reasonable excuse for this.


Imagine a major bridge that was built by a contractor. An internal safety inspector repeatedly warned his supervisors of structural deficiencies that could lead to the collapse of the bridge. Furthermore, as time passed, two external sources publicly warned about the issue, but the company downplayed its importance. Finally, the bridge collapses. It becomes evident that the company did nothing about the issue because it didn't want to lose contracts selling more flawed bridges. The public would justifiably go nuts, and there would be legal consequences for everyone involved.

What is different in our industry that companies (and managers) get away with such malice?


Here in Norway a bridge built with known structural deficiencies did in fact collapse[1], and basically nothing has happened except tax payers get to pay even more for a new bridge.

Unless enough lives are lost, people generally don't care that much it seems.

[1]: https://www.nrk.no/innlandet/statens-vegvesen-legg-fram-rapp...


I'm not sure if this would line up with the Dunbar number or something similar, but it sure seems reasonable that societies and centralized power should never grow beyond the scale where people stop caring.

If the public is expected to keep government and corporations in check but the public doesn't care, it can only end poorly.


> basically nothing has happened

Maybe they proudly stated knowing the risks, and while unfortunate, risks became reality. And then everything is fine.


Boeing in a nutshell.

>What is different in our industry that companies (and managers) get away with such malice?

Software isn't immediately life threatening. That's why it's all the wild west outside of medical and aerospace. While it sucks to have PI leaked to the internet, you do have time to at least take action compared to a door in an airplane coming off.


> Software isn't immediately life threatening

being a boeing whistleblower is though


I don't understand how this doesn't destroy a company. They willfully ignored a serious risk and it had major national security implications.


Have you tried to use Google customer support?


>What is different in our industry that companies (and managers) get away with such malice?

Lack of professional licensure that binds you to state regulation with jail time as one of the stated punishments besides financial liability.

Heh, the government could start effecting change by mandating licensure and sign-offs by licensed individuals when contracting for software products sold to the government.


Wasn't there something a bit like that with the Morandi bridge that collapsed in Italy?

(There was definitely something like that with the Mottarone cable car that had been running for years with the safety catch disabled. When the tow-rope snapped, with no catch, the cabin rushed down and killed everyone on board.)


So software developers should be criminally liable for introducing security bugs?


Management that knowingly chooses to ignore a major issue should be charged with criminal negligence. The creation of the bug is a common and difficult to avoid mistake. But once it has been found, choosing not to change it despite being warned if the consequences makes you responsible for those consequences.


So if I send an email saying "Fix all your bugs or else bad stuff will happen", and they don't fix all their bugs, now I can put their devs in jail?


Don't be obtuse. That is obviously not a genuine bug/vuln disclosure.


And you decide what is genuine?

Sorry, this whole thread is a fantasy of nerds thinking they can create a punitive policy for behavior they don't like. But there is no actual substantive framework under which any of these fantasies can come true.


knowingly? yes.


What standard do you suggest to prove intent?


How about the same as for fraud, manslaughter, conspiracy... But that's the judiciary's problem anyway. People who campaign for this higher accountability argue that it's such a drastic change from fines that it will change company cultures overnight.


A policy proposal needs a legal framework under which it can actually work. You can't just push that off as "that's the judiciary's problem".


So...Golden SAML isn't a vulnerability, as the CyberArk article quoted in the post reiterates, it's a type of attack that requires completely compromising the box before use. Unless I am misunderstanding something, I don't see any particular flaw, per se. As Microsoft (mocked in the article) would say, it's not crossing a security boundary. SSO will ALWAYS have this particular tradeoff. If your SSO infrastructure is compromised, everything that uses it is at risk of being compromised.


Yes, it requires getting admin to the AD FS server https://www.netwrix.com/golden_saml_attack.html which is kind of glossed over but surely is the real "hack"?


Exactly! AD FS is part of Tier 0 in the same way as Active Directory itself and needs to be treated and secured as such. Of course, security goes a long way when it's part of a holistic approach like zero trust.

Mitigation is also not really possible when using SSO. One way would be to have the target service require a second factor in addition to a valid SAML token, but then each user needs to keep their second factor current, whatever it might be, in each target service. This gets unmanageable quite quickly, not to mention that there are basically no SaaS or self-hosted applications out there that support SSO and a second factor at the same time.


Yes, that's what I understood too. The article seems to exaggerate some points, and this is one of them.

It's like creating an attack called "GOLDEN ADMIN". If you have admin credentials, you can log in as the admin and do anything you want! Wow!

(I know that letting attackers authenticate to anywhere without generating logs is bad, but still... i agree with the parent reply)


Sounds like the vulnerability was one within AD FS and that exposed the private key, making golden SAML possible.


It was the SolarWinds hack that gave internal access and potential admin rights. It's no different than if a domain controller gets compromised. The attacker has gained control of the keys to kingdom; it's an inherent risk to SSO.


> If your SSO infrastructure is compromised, everything that uses it is at risk of being compromised.

Really? Can you not think of any approach that gives SSO and accountability?

I think there are


Not unique to MSFT. I’m a security engineer.

If you want sanity paired with outcomes in the career, work at places that are technical and have a strong regulatory incentive and related funding, or a strong threat model closely tied to profits to care about security culturally.

Main examples for me that hit that are:

- pre-IPO startups that want to pass SOC2 etc to go public: have the reg and profit incentive and pay to buy a security team from scratch

- crypto: has the threat model and profit incentive due to key theft and so on. Pays well too and great risk space to test out sec skills

- public tech cos providing a lot of critical infra: to an extent, some can veer into Too Big to Fail like MSFT, some have stronger internal sec teams like Google/Project Zero, Verizon/Paranoids, Cloudflare seems good.

- Banking, maybe: they have funds, a more risk-averse culture, and heavy regulation. But healthcare is also heavily regulated and I'd never work in it due to the volume of exploits and lack of care.

So ya, don’t work at MSFT as a sec eng IMO unless you’re on the DART team and want to see a lot of diverse incident response with legit threat actors, or want to do really low level OS sec.

No idea about Apple sec eng work, on this note.

This is also why the avg tenure in security careers is +/- 10 years. Your sanity runs out, and often pay is good enough that you can save up and do something else with your life by 30/40.


The hearing will be streamed on YouTube in ten minutes from this comment:

https://www.youtube.com/watch?v=kB2GCmasH4c


Gosh, I'm watching this now... the amount of bullshit from Smith is ...wow!


This whole article seems a bit odd to me. What is "the product" ?

Presumably this is not related to earlier problems with SolarWinds.

Did MS screw up. Yes.

However, all things have bugs.

It takes one person finding one bug and exploiting it. And there are enormous resources going into finding one, and I am certain that this is not the only one.

I am sure the NSA is sitting on a pile of them.

Whereas the developers have to think about everything that can happen and protect against it.

Does this make Microsoft different from its competitors?

I think Microsoft's strategy is somewhat similar to Linus':

Where security patches are often not part of new releases due to the burden of establishing what the consequences of bigger changes would be, and the fact that security people don't do sane things.

(But you can of course pull them and make them part of an in-house distro.)

https://lkml.iu.edu/hypermail/linux/kernel/1711.2/01357.html


> Harris said he pleaded with the company for several years to address the flaw in the product, a ProPublica investigation has found. But at every turn, Microsoft dismissed his warnings, telling him they would work on a long-term alternative — leaving cloud services around the globe vulnerable to attack in the meantime.

That is not a screw-up, that is a deliberate decision.


You might want to read the actual article.

My understanding is that it was a two-part exploit:

1) The Solarwinds product was hacked to allow backdoor access to organizations' on-prem networks.

2) The hackers then took advantage of the "Golden SAML" vulnerability in Microsoft's Active Directory Federation Service (AD FS) to leapfrog via "seamless SSO" from the on-prem network into the organization's cloud resources hosted by Microsoft.

The article is all about how various Microsoft leaders and staff did not fix #2, because many said it would never be an actual issue exposed to the world.

This is extra damning because Microsoft is selling components at the core of both governments' on-prem and cloud systems, so if they don't take security extra seriously, their systems can present passive vulnerabilities.


> You might want to read the actual article.

ProPublica articles in general are structured in a way that makes them a pita to extract actual useful information from.


It's in the article's headline.

And at the risk of annoying everyone, a GPT summary:

This article investigates how Microsoft, in pursuit of profit and market dominance, overlooked significant security vulnerabilities that left the U.S. government and other entities exposed to cyberattacks by Russian hackers. The whistleblower, Andrew Harris, a former Microsoft cybersecurity specialist, discovered a serious flaw in a Microsoft application used for cloud-based program access. Despite Harris's persistent warnings over several years, Microsoft delayed addressing the flaw, prioritizing business interests, particularly securing a lucrative deal with the federal government for cloud computing services.

The security loophole was within Active Directory Federation Services (AD FS), which if exploited, would allow attackers to impersonate legitimate users and access sensitive data without detection. Microsoft's decision to deprioritize this issue, despite internal and external warnings, eventually led to the significant SolarWinds cyberattack, affecting numerous federal agencies and demonstrating the consequences of the security oversight.

Microsoft's response to these accusations has been to emphasize its commitment to security, stating that they take all security issues seriously and review them thoroughly. However, ProPublica’s investigation reveals a culture within Microsoft that sometimes places business growth and competitiveness over immediate security concerns, reflecting broader issues within the tech industry related to balancing profit-making with customer security.

The article sheds light on internal conflicts, the company's handling of security vulnerabilities, and the broader implications of such practices for national security and customer trust. It also highlights the challenges faced by whistleblowers and cybersecurity professionals in advocating for swift action on security issues within large corporations driven by profit motives and competitive pressures.


Microsoft had a known, high-consequence security flaw that they did not acknowledge or fix, they had evidence indicating it had already been exploited, and they knew they had limited to no ability to monitor for exploitation. This choice led directly to the SolarWinds hack, which happened in 2019, was discovered in late 2020, and was acknowledged by the USG in early 2021.

Many companies make bad choices around security for profit, however the factors I listed above make this extremely egregious.

I would seriously question any use of Microsoft products in any security conscious organization after this reveal. I also hope that anyone negatively affected by the SolarWinds hack sues Microsoft for knowing about the vulnerability for years without fixing or disclosing it.


> However, all things have bugs.

There are bugs and there are critical flaws you’ve been warned about. This is the latter.

The fact that this was known by Microsoft but not fixed is the story.


Because as far as I can tell, there was no "vulnerability" here, it's just how the product works. Stealing an OAuth key is just as bad. Stealing a domain's krbtgt key is just as bad.

Businesses want that when they login to a computer, they are SSO'ed in to all their apps. That's how ADFS works, you authenticate to it using kerberos and it issues you a SAML token. Here they stole apparently the key used to sign the SAML token so they could generate their own.

Unless there was some vulnerability that exposed the key publicly, I fail to see how in this particular incident it's Microsoft's fault.
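For what it's worth, here's a minimal sketch of the trust model being described. This is not real SAML or AD FS serialization and the names are made up; it only illustrates the idea that whoever holds the token-signing key is indistinguishable from the IdP:

    # Minimal sketch, NOT real SAML/AD FS. Whoever holds the token-signing key
    # can mint an assertion for any user, and the relying party can't tell.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The IdP's token-signing key -- in the Golden SAML scenario, this is what
    # gets exfiltrated from the AD FS box.
    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    trusted_public_key = signing_key.public_key()   # what every relying app trusts

    def issue_assertion(key, subject):
        # Whoever holds the key can "issue" for any subject, with any expiry.
        assertion = f"subject={subject};expires=never".encode()
        signature = key.sign(assertion, padding.PKCS1v15(), hashes.SHA256())
        return assertion, signature

    def relying_party_accepts(assertion, signature):
        # The app only checks the signature against the trusted key.
        try:
            trusted_public_key.verify(signature, assertion, padding.PKCS1v15(), hashes.SHA256())
            return True
        except Exception:
            return False

    forged = issue_assertion(signing_key, "global-admin@victim.example")
    print(relying_party_accepts(*forged))   # True -- looks exactly like the real IdP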


>Stealing an OAuth key is just as bad

What is an "OAuth key"? Do you mean an OAuth token? No, Golden SAML is worse than stealing an OAuth token, because an OAuth token is valid for 1 user, but Golden SAML can be used to impersonate any user. Also, OAuth tokens expire, but Golden SAML doesn't expire (although if you steal an OAuth refresh token, that won't expire).

>I fail to see how in this particular incident its Microsoft's fault.

Andrew Harris wanted to warn customers about the weakness, and tell them they can prevent the weakness by disabling seamless SSO. Other Microsoft people said no, that would alert hackers to the attack, we want to keep the attack secret, and it also would jeopardize our contracts by making the default setting sound insecure. Then Golden SAML was published publicly, so that first reason was no longer valid, but Microsoft still wouldn't tell customers they could prevent the attack by disabling seamless SSO. Then Solarwinds happened, and Microsoft finally advised customers to disable seamless SSO.


I think there is too much confusion in the details of the actual attack.

You have to steal the private key for the SAML signing certificate for an app. The correct answer would be to scope any token to have access only to what the app has access to; the second layer, which is documented in their 2020 article, is to require MFA on admin actions; and the third layer is to disconnect Azure admin accounts from on-prem admin accounts, preventing this type of attack.

But disabling SSO altogether is a non-starter for most businesses. What are we going to do tomorrow, spend months recreating 100,000 accounts in various applications? No.

We decrypt SSL traffic in our company; if someone steals the private key, they can now read the entire stream, including your bank account details. Should we stop decrypting SSL traffic because someone might leak the key? The answer from the infosec community has been that it's worth the risk.


If all those other solutions are better, why does the article say this:

>In the immediate aftermath of the attack, Microsoft advised customers of Microsoft 365 to disable seamless SSO in AD FS and similar products — the solution that Harris proposed three years earlier.

And did Microsoft advise those other solutions prior to Solarwinds happening?


This is the mentioned article: https://techcommunity.microsoft.com/t5/microsoft-entra-blog/...

What it says is "be careful when using federated trust relationships, because if one of your trusted environments is pwned, it will be trusted by the others". That's very obvious.

And about "disable seamless SSO", I only found this: "On-premises SSO systems: Deprecate any on-premises federation and Web Access Management infrastructure and configure applications to use Azure AD." (Seems pretty basic too, especially considering how vulnerable on-prem ADs are).

The original article seems to paint this MS page as a security advisory or vulnerability notification, while it just seems to me to be a very very basic security guideline.


I think those things the article is advising are the same things Andrew Harris wanted to advise customers to do 3 years prior, but Microsoft didn't want to, because it would make the default configuration sound insecure (it kind of was), jeopardizing government contracts, especially since various government systems would break if those config changes were made.


I get what you're saying, but from my point of view, this seems like something that doesn't need to be advised, because it is so trivial. Yes, if someone pwns my AD, then they can also pwn my cloud if i'm using some sort of federated trust. Even if i'm not, and both systems are completely separate, they just need to steal passwords from the cloud admin, which should be easy given they're already domain admins.

Maybe Andrew, being overly cautious, was assuming most government users didn't know these basic facts and should be warned anyway? Was MS pushing back on his report because communicating something like this to users would probably sow too much confusion?

That would still be a failure on MS's part, but would make for a much more boring story. The article makes it seem like Andrew discovered an atomic bomb and MS pushed it under the rug. The reality seems much more bland.

But still, could you elaborate on the default configuration being insecure? I know next to nothing about Azure/Entra, maybe I'm missing something important.


>this seems like something that doesn't need to be advised, because it is so trivial

According to the article, that's not the reason Microsoft gave for not advising it. The reasons they gave were (1) it would make governments scared and jeopardize contracts and (2) it would let hackers know about the attack.

Also according to the article, the NYPD weren't aware of the problem until Harris warned them of it, then they quickly disabled seamless SSO:

>On a visit to the NYPD, Harris told a top IT official, Matthew Fraser, about the AD FS weakness and recommended disabling seamless SSO. Fraser was in disbelief at the severity of the issue, Harris recalled, and he agreed to disable seamless SSO.

>In an interview, Fraser confirmed the meeting.

>“This was identified as one of those areas that was prime, ripe,” Fraser said of the SAML weakness. “From there, we figured out what’s the best path to insulate and secure.”

>But still, could you elaborate on the default configuration being insecure? I know next to nothing about Azure/Entra, maybe I'm missing something important.

I'm not very familiar with Azure either. I'm getting most of this from the article. It sounds like the weakness is that by default trust federation to Microsoft 365 is enabled. Microsoft's post-Solarwinds article recommends disabling it.


It is pretty boring. Where I would blame Microsoft: there needs to be an easier way to set up AD, AAD and AD FS without having a bunch of people be domain and global admins, like out-of-the-box roles and a better GUI. Every AD deployment I've ever worked in is insecure due to the complexity of secure deployment. So the people running it end up logging in as domain admin / global admin to do basic crap like adding a new hire.


> What is an "OAuth key"? Do you mean an OAuth token? No, Golden SAML is worse than stealing an OAuth token, because an OAuth token is valid for 1 user, but Golden SAML can be used to impersonate any user. Also, OAuth tokens expire, but Golden SAML doesn't expire (although if you steal an OAuth refresh token, that won't expire).

Stealing the OAuth token signing key, since then any fake OAuth tokens signed by it would be considered authentic.


There isn't necessarily an OAuth signing key. The OAuth tokens might not be signed. They might be random values, which act like a password, with a hash of them stored in a database so they can't even be stolen from the database.

Even if they are signed, it doesn't need to be as bad as Golden SAML, because OAuth tokens have a short expiration, so the signing key can have frequent automatic rotation, so any stolen signing key will quickly be useless. For the refresh tokens, they don't have fast expiration, so frequent rotation won't work, but you could have a hybrid system where the OAuth tokens use a frequently rotated signing key, but the refresh tokens are random values with hashes stored in a database.
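A tiny sketch of the opaque-token approach described above (assuming an in-memory dict stands in for the real token database; a real system would also track expiry and rotate refresh tokens):

    # The server stores only a hash of each random token, so a dumped
    # database contains nothing an attacker can replay.
    import hashlib
    import secrets

    token_store = {}   # sha256(token) -> session info

    def issue_token(user):
        token = secrets.token_urlsafe(32)                    # random, unguessable
        digest = hashlib.sha256(token.encode()).hexdigest()
        token_store[digest] = {"user": user}
        return token                                         # only the client ever sees this

    def check_token(token):
        digest = hashlib.sha256(token.encode()).hexdigest()
        return token_store.get(digest)                       # None for unknown/forged values

    t = issue_token("alice")
    print(check_token(t))            # {'user': 'alice'}
    print(check_token("forged"))     # None -- there is no signing key to steal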


> "disabling seamless SSO"

It is never going to happen in the corporate. Never.


The article says Andrew Harris worked with the NYPD to disable it for their setup.

And Microsoft themselves advised customers to disable it after Solarwinds.


This is ignoring defense in depth, weaknesses, and security architecture. If you ignore those, you cannot pretend (and MS did pretend) that you have a good enough stance on security. Fixing discovered vulns is mandated and gets you maybe half a point, but the other 9.5 points (or at least 5, before you can claim you care about security) require more than fixing known vulns or waiting for a world-scale incident to "respond" to. You have to prevent issues.


It is true that nothing is 100% secure. Sitting on a major security vulnerability internally with a motivated employee pushing to fix it and doing nothing for business reasons is not negligence, but malice. People in the chain of command need to be held accountable for this.


>What is "the product" ?

Human attention sink where you can throw ads and other propaganda, what else?


I don't see a future here that doesn't involve significant legislation over network security and include jail time for major offenses. Every time something like this happens, there's always that organizational Cassandra (usually the CISO) that saw it all coming but was ignored. Sooner or later someone will get burned badly enough that the consensus will be that tech cannot regulate itself on security. We've already got this a little bit for the most egregious cases, but it's still about as secure as banks in the 1920s.

A less actionable gripe I have is that we have so few players that even if the US government loses trust in Microsoft's cloud...where else will they go? There aren't a lot of players here that could handle that scale. It's like if there were only 3 banks in the world.


I can think of one solution to the "too few players" problem. Break them up.


I, too, wouldn't mind living in that timeline, but that's just not going to happen for a number of reasons that are so obvious as to not be worth enumerating. Dwelling on unrealistic solutions prevents you from perceiving the possible.


I worked at Microsoft the entire time period covered in this article. I've interacted with MSRC on multiple issues with the product team I'm on. My experience inside Microsoft is not at all consistent with what the whisteblower is saying, for what it's worth. It's completely unrecognizable, in fact - and I'm having a hard time reconciling what I'm seeing with what I'm reading here.


Sounds like the same Microsoft culture as has always been. Like a cult. It can do no wrong. The conversation with Microsoft businesspeople at conferences was always the same: Microsoft has no deficiencies, there is nothing it isn't working on and it has a solution for every possible problem. Other sources of software do not exist. There is only Microsoft. Total illusion put forth by delusional employees. The outside world can be ignored because life in the cult is good.


This comment sounds hyperbolic, but it really isn't. It's really bad. This has been my experience with Microsoft employees also.

In my experience, what makes for bad software is PM and engineering hubris. You definitely need some vision and confidence as just following user feedback is a recipe for terrible software as well. The key is to find the right balance and straddle that line.

If it's been long enough for insiders to tell the story of Windows Phone and the eventual cancellation, I'd be fascinated to hear the story of that (from inception to death) and how that went internally given the culture.


Can confirm, it is 100% hubris based on my limited time of working at Microsoft.

There is pervasive NIH syndrome, re-inventing the wheel, and massive amounts of over engineering and unnecessary abstraction caused by chasing the endless "But what if...?" dragon.

This behavior is justified, and critics are silenced, by the "But we're an enterprise company!" cop-out.


Just wanted to say that I thought the Windows Phone (the last version of such) was relatively nice. It had a decent developer experience, but it was pretty much an also-ran and didn't have enough market share to win mindshare for first-party apps. When so many first-party apps were iOS-first and Android-later, throwing a third option into the mix just missed the mark more often than not.

I was already in the Android ecosystem and far less cynical at that point about Google.


I didn't get a Windows phone because I don't trust Microsoft. A friend had one and it was really OK, but no way for me.


In retrospect, I don't trust Apple or Google either...


So you need a GNU/Linux phone?


If memory over the last two decades serves, this is a relatively recent degradation.

Microsoft's security reputation prior to the recent (5ish years?) failures was largely built up on top of the work stemming from the Trustworthy Computing memo.

https://www.wired.com/2002/01/bill-gates-trustworthy-computi...


Bill Gates in 2002: "So now, when we face a choice between adding features and resolving security issues, we need to choose security."

Satya Nadella in 2024: "If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security."

Microsoft in 2024: Run this software on your computer so we can take a screenshot of everything you do, index it and we promise Security is still, and have always been, the priority. And yes, we do store data unencrypted on your disk, why are you asking?


> And yes, we do store data unencrypted on your disk, why are you asking?

But don't worry, you need to be an administrator to open the file. What? your average person daily drives an administrator account? How should we have known that???


Don't worry, Microsoft has big ambitions and huge plans about how to truly present themselves as more safety oriented in the future.


AI Safety. Code is run through AI to check for vulnerabilities. Files are analyzed by AI to ensure they aren't malware. Every instruction is run through AI to ensure nothing maliciously is happening (mostly enforcing DRM :). Every pixel is output by AI to ensure you see nothing not intended for your precious eyes.


You joke, but (the DRM part at least) is the future I fear is coming. It could hit us from so many angles (not forgetting Chrome's Web Environment Integrity and Apple's Private Access Tokens), and with all the money and power behind it (big tech plus big copyright), and the complete apathy of the average user towards this, it seems inevitable.


The DRM part wasn't really part of the joke, just a sad truth that is being worked on more and more that sadly fit into the joke.


What Microsoft can provide are lots of nice stickers saying they conform to this or that security standard, making security folks in IT departments all warm and fuzzy.

At least that's how it appears from our POV, selling B2B applications. They don't seem to care that much about actualities as long as the security checklist passes.


To be honest, that sounds like every company that suffers from delusions of grandeur and wants to conquer the planet, one way or another.

What you're saying is equally true for Apple, Google, Amazon and most other public companies today. You're never gonna get "Use this Microsoft product" as an answer from Apple support/engineer even if that product would solve your particular problem better.


The conferences always included representatives from other large, public companies in the same or similar industries, e.g., Apple or Amazon, as well as other, different industries. But there was always something cult-like about the folks from Microsoft, their level of BS and (deliberate?) ignorance, that I never experienced with the others.

To be clear, I could not make the same comment about Apple or Amazon businesspeople. While they may exhibit their own stigmatic qualities, they are, IME, different. Nothing like Microsoft.

Microsoft does not suffer from "delusions of grandeur". It achieved grandeur a long time ago, and then became delusional. Currently, it is either #1 or #2 on the list of the world's wealthiest companies. Comments suggesting that the company has "changed", and such comments have been popular on HN in recent years, are quite amusing.


I'll give you a 4-letter word... Zune /s


I work in infosec, and this sounds like a communication failure on the whistleblower's part.

Contrary to what many people believe, profits should be prioritized over security for most companies; that's only natural (after all, security measures don't typically generate any profits themselves). The key is finding the right balance for this tradeoff.

Business leaders are the ones responsible for figuring out the acceptable risk level. They already deal with that every day, so it's nonsensical to claim they aren't capable of understanding risk. InfoSec's role for the most part is being a good translator: identifying the technical issues (vulnerabilities, threats, missing best practices) that go beyond the acceptable risk profile and presenting these findings to business stakeholders in language they understand.

Either the guy wasn't convincing enough, or he failed to figure out the things business cares about & present the identified risk in these terms.


This is framing the story as a simple interaction (or interactions) between Harris and business leaders at Microsoft. It wasn't. Microsoft has a team responsible for translating between security researchers like Harris and its product teams/leadership. That team dismissed Harris because its priority was to ignore or downplay issues that were brought to it. Harris went around them and was still ignored. It seems like he tried everything short of calling the press directly to get someone to pay attention. Even after the issue was made public by other security researchers, MS did nothing.

What happened here was a systematic failure on MS' part to address a fundamental flaw in one of the most critical pieces of security infrastructure at the entire company.

Companies like MS (and everyone else it seems) need to get out of this Jack Welch mindset that the only thing that matters is the shareholders. MS acts as the gatekeeper of the most valuable organizations and governments on the planet. Their profits have to take a backseat to this type of thing or they shouldn't be allowed to sell their products to critical organizations and governments.


I might be misunderstanding, but from Andrew's Linkedin it looks like he wasn't a security researcher at MS, he was actually the person responsible for translating between security researchers and the upper management:

> Evangelize security services, practices, products, both internally and externally.

> Leading technical conversations around strategy, policy and processes with FINSEC and DoD/IC executive staff.


>he was actually the person responsible for translating between security researchers and the upper management:

According to the article, the group in charge of taking input from security researchers and deciding which vulnerabilities need to be addressed was Microsoft Security Response Center (MSRC), and Andrew Harris wasn't a member of it.


Why not go even further? Why not say that the whistleblower was wrong and Microsoft business leadership was right? Maybe their profits from ignoring this issue have been fantastic, and the externalities from e.g. mass theft of national security secrets are not Microsoft's problem.


Well, because as a security person I can only evaluate his actions from the point of security. Evaluating actions of MS business leadership is beyond my expertise.

I highly doubt that the senior leadership would willingly accept this kind of liability. But you need to put it into the right terms for them to understand. Politics play an important role at that level as well. There are ways of putting additional pressure on the c-suite, such as making sure certain keywords are used in writing, triggering input from legal, or forcing stakeholders to formally sign off on a presented risk.

Without inside knowledge, it's impossible to figure out what went wrong here, so I'm not assigning blame to the whistleblower, just commenting that way too often techies fail to communicate risks effectively.


During my Master's, security was one of the subjects I took. It started with an equation that related risk (how much you'd lose if something bad happened), the probability of that risk, and the cost of mitigating that risk. The instruction being, one tries to find a mitigation that costs less than the exploitation of the risk. And note here that "cost" does not refer to just money, but could be computational cost, energy consumed, etc.
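With made-up numbers, that textbook relation looks something like the sketch below: mitigate when the mitigation costs less than the expected loss it removes.

    # Hypothetical figures only -- the relation from the course described above.
    loss_if_it_happens = 2_000_000     # "cost" in dollars, downtime, energy, etc.
    probability_per_year = 0.05
    mitigation_cost = 60_000
    residual_probability = 0.01        # risk that remains after mitigating

    expected_loss = probability_per_year * loss_if_it_happens                         # 100,000
    loss_removed = (probability_per_year - residual_probability) * loss_if_it_happens  # 80,000

    print("Mitigate" if mitigation_cost < loss_removed else "Accept the risk")         # Mitigate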


For the MS size entities, the risk calculation is way more complicated. The 1:1 between cost of mitigation vs cost of exploitation only applies to opportunistic attacks, really. At the level where APTs get involved, the data / access might be so valuable that they'd gladly outspend blue team's budget by a factor of 10-100.


But wouldn't the value of the data be reflected in the cost of exploitation? (By cost of exploitation, I don't mean the resources needed to exploit, but what a company would stand to lose if exploited.) The values of the variables can certainly differ; I don't see why the equation has to.


Microsoft was specifically told by the US Cyber Safety Review Board earlier this year that they had crossed the line of risk vs. profit. https://edition.cnn.com/2024/06/13/tech/microsoft-president-...

I seem to recall from another article that Microsoft was told by the review board that they need to start focusing on security rather than work on new features.

A company like Microsoft shouldn't need a whistleblower to know to focus on security. It seemed like Microsoft was on the right track to becoming a better company for a good number of years, but for the past year or two everything seems to be falling apart again.


“They saw it differently, Harris said. The federal government was preparing to make a massive investment in cloud computing, and Microsoft wanted the business. Acknowledging this security flaw could jeopardize the company’s chances, Harris recalled one product leader telling him. The financial consequences were enormous. Not only could Microsoft lose a multibillion-dollar deal, but it could also lose the race to dominate the market for cloud computing.”

There is something fundamentally broken about an organization’s culture when this type of thinking is pervasive in the organization.


"I was very interested in that question. And one of the places that I focused on was the MSRC, which is short for Microsoft Security Response Center. This center is like a clearing house for reports of security bugs, and it was Harris' very first stop when he began warning colleagues of the flaw that he discovered. But the issue is that the center itself was understaffed and underresourced. And one employee who used to work there told me that staff is trained to think of cases in terms of how can I get to won't fix. So this center also clashed with the product teams."

I used to work for the MSRC. He's right that it was understaffed and underresourced. It's one of the reasons I quit, same for many of my ex-colleagues. But I disagree with his characterization of us trying to find any way to get cases to Won't Fix. The fact is, we got many, many reports that were genuinely not vulns, and therefore shouldn't be prioritized for fixing from a security standpoint. Yes, occasionally reports were incorrectly analyzed, but that's not because we were trying to get them to Won't Fix. It's just people making mistakes now and then.

"And, you know, another big issue there is that they're clashing with the product teams that they need to fix the actual issues. So they would bring a security vulnerability to a product group. They'd say, you need to fix this flaw. But those groups were often unmotivated to act fast, if at all, because compensation is tied to the release of new products and features."

That's true in part, but it varies wildly between product teams. Some were incredibly responsive and knowledgeable, some were clueless about security, some just didn't prioritize it.

Sometimes the fix was insufficient. When I was there, MSRC wouldn't check if the fix did what it was supposed to do, except in occasional cases where we were explicitly asked to check or if it was a particularly risky case that needed the extra scrutiny. But like he says, we were understaffed and underresourced; we simply didn't have the time to do this for every case.


I think it's the nature of Microsoft.

They've done things quickly instead of well. This has served them well, and during the time Microsoft has been around, most of the competitors that reversed that equation have gone by the wayside.

I vaguely recall they were in the same boat decades ago with Win 3.x and Win 9x - Windows was a virus- and bluescreen-laden garbage heap. I'm not sure with which OS they started really cleaning up and validating the API calls. I think Windows 2000 was a major step away from a shared memory space, which cured some of it.


Like any other private company? All choices are made to maximize profit; even when they spend resources on security, it is to maximize profit.


I've observed that they chose profit over usability and user needs as well (the list is too long; I'll spare us all from pouring it out here - let's just say I'm contemplating getting a completely different job where I don't have to run circles around the way Windows is corrupted), so this fits into the big picture after all.


Not the most earth-shattering revelation. Given how they’ve always needed to be dragged kicking and screaming to adopt more secure protocols, that should be obvious. For example, it’s been known for over a decade that NTLM is easily cracked, but they’re only getting around to phasing it out now.


Execs should absolutely be held responsible, but the human factor is always there. Many times people will take the easy route and get worn down by security practices or roadblocks.

I think it's too easy to say "alright, focus on security" and then expect it to trickle down and figure itself out.


Is it possible that insurance companies will play a role in this endless succession of insecure, critical software systems?

Can you get insurance against software system failure? If you can, surely the providers of said insurance will take a keen interest?


> Product managers had little motivation to act fast, if at all, since compensation was tied to the release of new, revenue-generating products and features

Only if I get rewarded that way lolllllll


Business decisions weigh profits against everything, not just security. Delaying shipments to make a product more secure can affect revenue targets.


Of course when Microsoft does choose "security", it's often in the name of securing things against users.


Will the whistleblower end up the same way as the Boeing whistleblowers? "See something; say something."


"[Enter Company Name] Chose Profit over Security, Whistleblower Says". Any company, you pick.


Well, that’s what a company like Microsoft will always choose, so no wrong-doing here—business as usual


This comes off like a study being published that shows tobacco is harmful to the lungs.


Unless that other priority is laying people off. Then the layoffs are more important.


Shocked! I am SHOCKED....


Oh, let's put fonts in user space rather than kernel space, what could ever go wrong? This is not new; it's a major feature of how MS works.


We needed an article to make sure this was clear.


Bear shat in woods, whistleblower says.


Security first, but security from whom?


> “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,”

Corporate morality is a Potemkin village. It's all about profit and appeasing the shareholders, baby!

Is anybody honestly surprised at this point? The abbreviation "M$" is well deserved despite small OSS contributions and attempts to PR their way out of previous history (i.e., United States v. Microsoft Corp. [2001]).



surprisedpikachu.jpg


Surely, a whistleblower is someone who reveals a truth that nobody knows?


I mean, I see the crappy level of security across the whole industry. And I work at a bank.

I'm not sure what exactly the reason is - even with profit aside - but I suspect there are not many people who are actually competent developers and security engineers.


Yes, it's called investment capitalism - as long as the consequences of the actions one demanded are never felt by oneself, due to the limited liability of financiers and shareholders, such behavior will never change.

The solutions are well known - the corporate death penalty is a good one, which dissolves the legal and financial structures of the company (the real assets such as factories are unharmed by this, and may simply be sold to a new more reliable set of financiers and shareholders, or may be nationalized and managed by the state, or may be handed over to the workers who run the place to see if they can form an employee-owned company or not, etc.).

This isn't such a radical viewpoint, even many venture capitalists agree that this is the right way to go, e.g. on the airlines:

https://www.cnbc.com/video/2020/04/13/government-should-let-...


RE title: and somebody also said -- "The sun still rises in the east." And after stroking their white beard, they wisely added: "And water is still wet."


Obviously


Boeing chose profits over safety.

All companies choose profits over <literally anything>

That's unfettered capitalism.


Capitalism ate my face



