The bombshell here, and what hasn't yet gotten much attention since sysadmins the world over are busy dealing with the fallout, is that the NSA, and therefore the US government, is directly responsible for the current global cyber-carnage. We developed the capability, we chose to keep it unpatched, we tried to keep it secret, we lost control of it.
This has similarities in type, if not in horror, to the development and subsequent spread of nuclear weapons. When we lost control of those secrets, it was a BFD [0].
I agree completely. People can blame MS for their insecure OS, or users who don't know any better for running outdated systems (or even for running Windows at all), but the stark reality is that all OSes have vulnerabilities because they are huge and complex and it is impossible to make them 100% secure.
But the NSA are - by definition - supposed to be security experts, so what are they doing letting themselves get hacked? They have effectively given away the nuclear football.
I'm shocked we're not seeing more blame in their direction on this one.
A bit rich from Microsoft to talk about hoarding when the patches they released over the weekend were all signed back in February... i.e. they are hoarding fixes to their own shit for their $$$ extended support agreements.
"The chaos surprised many security watchers because Microsoft issued an update in March that patched the underlying vulnerability in Windows 7 and most other supported versions of Windows. (Windows 10 was never vulnerable.)"
So I don't really know what you mean by 'hoarding the fix'. The patch was not initially released for some OS versions because they are NO LONGER supported.
I believe the "hoarding the fix" comment was in reference to the patches for Server 2003, XP, and Windows 8 that were released publicly for the first time over the weekend (but had been distributed previously to customers paying for custom support) [0].
The "$$$ extended support agreements" funded the development of those fixes. Why would anyone pay the agreements if Microsoft just developed and released those fixes for free? If organisations are stupid enough to lock themselves in to 16-year-old software and create more work for Microsoft I'd say they were well within their rights to charge.
I blame Microsoft, not for having a shitty OS, but for colluding directly with the NSA. Anyone who believes that Microsoft was not aware of the exploits in their system is naive.
Remember, head executives at Microsoft are essentially part of the "shadow government" as they were privy to 1984-style surveillance that even much of congress was unaware of until the Snowden leaks. People at MS knew and said nothing. Executives at MS are closer to the NSA than most of congress. Let that sink in.
> The problem lies in our defensive infrastructure and our ability to roll out patches responding to incidents.
The problem is corporate IT (or management) think they can create some sort of stable environment, driven by fear of having things break. Organizationally they need to accept that they are operating in a dynamic and hostile ecosystem and that the risk of worms is higher than the risk of some random app breaking on a Windows patch.
> Organizationally they need to accept that they are operating in a dynamic and hostile ecosystem and that the risk of worms is higher than the risk of some random app breaking on a Windows patch.
Except it's not. The account used by the hackers has supposedly earned about 4 Bitcoins so far. Meanwhile, many people from home users to professional IT personnel can recall incidents where Windows Update has broken something that worked fine before. Up to and including installing a completely new version of Windows, force-fed to unwilling customers with intentionally-deceptive practices.
I'm a CentOS desktop user at work and Ubuntu at home. I love my Linux. Objectively, the parent poster is correct. For all MS's faults, I've had no fewer problems updating Ubuntu systems than I've had or seen with MS systems.
That said, CentOS is _rock solid_. The packages are old, but maintained by Red Hat upstream and do not break on updates. The only thing I recall seeing break on a CentOS update, including point releases, are Firefox and Thunderbird extensions, as Mozilla apps jump eight version numbers from one ESR release to the next.
Mostly problems with the graphical stuff. More than once I've had to log in via a text console and mv ~/.kde somewhere else to start X, or move some ~/.Xfoobar file. Once some ~/.Xfoobar file filled up the entire /home/ partition due to some X error. I've also had problems with some network card driver on a new install, I can go through my posts on unix.SE if you want more detail.
I simply remember that Ubuntu should only be updated when I've got a spare day to fix any potential issues, whereas so far CentOS can be updated before each shutdown.
All this is from the perspective of a desktop user. I use both on various web servers and I've found both to be reliable. I'll use CentOS where I need absolute stability but on my cloud instances I'll happily use Ubuntu and get the latest PHP, etc.
This is a little misleading. The cost of the attack to businesses, governments etc is vastly greater than the laughable amount of money actually raised by the criminals.
A doctor who needs to look at an X-ray and comes up against WannaCrypt is not going to pay up on her credit card. She will call the IT department to 'fix the broken computer'. But she still won't be able to look at the damn X-ray.
This is only a single particularly large attack; the same sort of thing happens to machines every day on a smaller scale. The future potential for attacks like this also goes way beyond the current attack.
I do agree MS needs to shoulder a lot of the blame here, but would they have acted differently if IT departments didn't block updates?
Unless the NSA reported it to MS back when XP was still supported, not much would change. People can (and do) reverse-engineer exploits from windows updates, and they could still take advantage of the large number of unpatched XP machines.
The NSA likely gave MS months of lead time once they determined what the Shadow Brokers stole. A patch was pushed out before the release of the vulns.
There's no reason to suspect that people wouldn't have reverse engineered the vuln from the patch and had similar timelines of unpatched systems being exposed.
In fact, we see exactly that play out over and over with security patches.
Well, a US government agency in charge of US security decided that keeping the US and its allies vulnerable was acceptable. They definitely need to answer for whether the benefit was really worth it.
MS also has to share some blame here for updates that break things, and updates that restart at random times (such as when you're doing some really urgent work). This has trained a whole lot of users to believe that windows updates are a risk to their use of the computer, and now just click away any update prompts.
Honestly, though, updates breaking things are quite rare. Sure, I know of "a friend of a friend of a friend" who had something break, but honestly, aside from patch installations sometimes being started at inopportune times, I haven't personally encountered things breaking on any of my systems or those of friends or family. More things get broken because people hear that there are problems and try to stop the updates.
Complete BS. This is what happens when you have top class PR at your disposal to define the narrative.
Microsoft is responsible for their shit software getting exploited first and foremost. Seriously fine Microsoft, and by the day after tomorrow that 3,500 security engineer number will jump to something realistic.
Instead what will happen is more tightening of the walled garden, overcharging of support/security contracts and propping up of another billionaire or two. I can hear the whisky glasses clinking.
Corporations do not get to set the agenda and the narrative. When they are allowed to, the results are very predictable - in this case Microsoft will make more than they lose. Who here disagrees that this is going to happen? And who here believes that is right?
The answer is simple: whether it's Microsoft today or Facebook and Google tomorrow, win-win should not be an option when such things happen.
Uh, except Microsoft had already patched the vulnerability, just not for XP that was still being run. Of course you can punish them and force them to support all legacy OSes forever, until that strangles the life out of them at which point large institutions still have to run the old OS because they have too much investment in computer controlled hardware with no forward migration. Now they are locked into an insecure technology stack with no vendor to take responsibility and no source code to even take on the problem themselves.
There's plenty of blame to go around to be sure, but giving the NSA a pass for developing zero days is batshit insane. These guys are playing god instead of helping make infrastructure more secure overall, and it will not end well, even if they outcompete the Chinese or whatever other bogeyman they cook up to justify their power grab.
It wasn't about fixing, it was about upgrading/updating. It takes people and money to upgrade large infrastructures - closed source or open source, doesn't matter. Thinking that irresponsible (or budget-constrained) organizations will somehow have a completely different mindset and/or set of priorities when they switch from Windows to open source software is naive.
No it is about fixing, a design flaw allowed this to happen. SMB1 (hopefully) wasn't built thinking EternalBlue would be a fun feature.
No one expects perfect software; but this clearly happened because Microsoft's software was broken, the NSA found where, and hoarded and then lost control of that knowledge.
edited: I understand what you mean about people not patching and leaving themselves vulnerable. A lot of pain could have been prevented at that level.
So let's assume this was in Ubuntu, let's say version Ubuntu 10.04.4 LTS (5 years of support), and the NHS decided that it didn't want to upgrade beyond 10.04.4 because some of their stuff broke...
Long term support ended in May 2013 for desktop. But Ubuntu patched the bug in March 2017 for all current supported versions of Ubuntu.
Then the NHS got hit with the bug.
How does free / non-microsoft software protect against a shitty decision to not update / upgrade?
> How does free / non-microsoft software protect against a shitty decision to not update / upgrade?
By not bundling upgrades with what is essentially malware, and not making them as inconvenient as possible.
If I am running Ubuntu 10.04.4, and I hear about serious malware that relies on a security hole that is patched upstream, I have the opportunity to patch it myself, and keep running Ubuntu 10.04.4 as long as I want.
That being said, it's disingenuous to compare unpatched Windows 10 with unpatched Ubuntu 10.04. It is totally unreasonable to think you are secure using an unsupported OS, but it is a lot more reasonable to think you are secure running Windows 10 just a couple months out of date.
Anyone can seek help on the open market to support Ubuntu 10.04 forever if they like. You can't go to another company if you don't like the price Microsoft sets for support for Windows XP.
This comment makes my blood boil. Please ask yourself:
1. why would anybody want to keep 10.04 alive?
2. do you think the type of people who stubbornly continue to use 10.04 would know/care enough about security to seek an alternative source for security patches?
edit: should maybe add why this pisses me off: just logged into a production server running 12.04, default install apache and updates _turned off_. the owner looked confused (and slightly bored) when I explained the problem to him.
I don't particularly care why an organization would want to maintain a piece of software indefinitely. That's not my problem.
I do think it's important to recognize that there is a model under which an organization can. I'd even argue that it's a more "free market" model than that of single-source proprietary software, too. If there's a market in maintaining non-proprietary software, someone will pop up to fill it (even if it's just a lone-wolf consultant). With proprietary software that can't happen.
Whether or not an organization or individual chooses to maintain software is an orthogonal concern to the model under which they maintain it. Even when there is a free market for maintenance some will opt to eschew maintenance. Personally, I'd like those organizations to pay the cost by way of data loss, downtime, going out of business, etc.
I'm not overly worried about it. I think traditional regulatory and risk management will eventually catch up. Someday (hopefully sooner rather than later) businesses won't be able to get basic insurance policies unless they can prove they're doing IT maintenance, for example.
Whatever hardware that is running that 12.04 system can be upgraded, free of charge, for likely the next 20 years if the past 20 years of linux is anything to go by.
Even if you pay money for Windows 10, it is unlikely to even start on the hardware that XP ran on. Not only will the people have to go through the budget to pay for the software, but now you need a full upgrade plan.
To put this in a concrete example: if a hospital had a check-in system running 12.04 they could just take someone internal from IT and go and fix it. If it was Windows XP then they need to go through finance, get offers from competing companies, fit the upgrade into the budget, and lastly have people installing it at each of the hospital's entrances. The first case has a project length of days and the other of months, and in the worst case years.
I understand the argument, but I think "just take someone internal from IT and go and fix it" is vastly oversimplifying the skills/manpower/time required for doing something like this.
I can only speak of my own experience as a sysadmin, but the more isolated the system is and the less critical it is for operation, the easier it is to delegate the job of doing a software update to coworkers and new hires. Especially if all the issues from doing an update have already been established on several other machines, in which case the update is more or less mechanical in nature.
It reminds me of the story about a thirty-year-old Commodore Amiga running the AC system for a school district. The district finally decided to modernize the AC for $2 million, but until then it was just cheaper and easier to continue paying a person to run it every year. Replacing hardware systems is expensive and politically complicated, while continuing to pay an employee is just the status quo.
> Assuming these hospitals keep updating and do not get stuck at Ubuntu 10.04.
It's that simple.
If someone wants to continue using outdated software, they will want to keep supporting it. Free software lets them do that. Proprietary software specifically forbids it.
I think you missed the point being made by the person you replied to. The only 10.04 install you should encounter exists due to ignorance and not due to upgrade cost as with a non-open OS. XP/Vista/7/8/10 don't get upgraded due to them being proprietary and having a single point of support (concerning OS level exploits).
So, 1. because there is a community outside of a major corp who are active, it isn't a burden on Canonical.
2. yes? see 1.
Should any IT professional not have upgraded from 10.04? No. It's free to upgrade, unlike Windows which, remember, isn't a single upgrade; licensing is per user.
You too are missing the point. The same people who didn't upgrade from 10.04 would probably also not take advantage of Microsoft's offer to upgrade to Win10 free of charge, because <reasons>.
I am so happy that win10 patches are mandatory despite all the whining. In fact, I want them to take it one step further and adopt the ChromeOS update model.
Hospitals run life-critical equipment; Ubuntu is not suited for this. AFAIK hospitals running Linux choose Debian for stability, or Red Hat for support.
XP is still in use for 2 reasons: cost and backwards compatibility.
For cost, CentOS, on its own, is free. Support costs you of course, but the updates are coming down from Red Hat, for which there is enough money flowing in already, so support in this case means a sysadmin who understands CentOS, and those are not that rare, not even that expensive.
Backwards compatibility is another topic, especially with the rise of systemd.
If the corresponding software is not included in any official or semi-official repositories (EPEL, for example), but is distributed with source, you may need someone to recompile it every 11 years, when you change major versions. I think this is reasonable to expect, though there might be issues for certain software, especially if it involves Gnome3.
For those that are distributed without source code - well, that is the same problem as with XP, but usually it's possible to strace why it fails and fix/replace/do some magic with the underlying libraries it's depending on.
When this is not possible you can still create a container image with the old code to run it with.
With all the power out there even in the office workstations we could:
- install a base, damn stupid linux as hypervisor
- run Windows in VirtualBox with shared folders
- use btrfs for the shared folders and keep daily snapshots for a few weeks
If you get a virus, drop the image, get a new one, restore the snapshot, done.
If anyone is already using something like this, please tell, I'm curious.
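For the snapshot half of that setup, here's a minimal sketch of the daily-snapshot-and-prune step, assuming the shared folder lives on a btrfs subvolume; the paths (/srv/shared, /srv/shared-snapshots) and the retention period are made up for illustration, and the script needs root plus btrfs-progs installed:

    import datetime
    import os
    import subprocess

    SUBVOL = "/srv/shared"             # hypothetical btrfs subvolume exported to the VM
    SNAPDIR = "/srv/shared-snapshots"  # hypothetical directory holding the snapshots
    KEEP_DAYS = 21                     # "a few weeks" of daily snapshots

    def snapshot():
        os.makedirs(SNAPDIR, exist_ok=True)
        name = datetime.date.today().isoformat()
        target = os.path.join(SNAPDIR, name)
        if not os.path.exists(target):
            # Read-only snapshot, so ransomware inside the guest can't touch it
            subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, target], check=True)

    def prune():
        cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)
        for name in os.listdir(SNAPDIR):
            try:
                if datetime.date.fromisoformat(name) < cutoff:
                    subprocess.run(["btrfs", "subvolume", "delete",
                                    os.path.join(SNAPDIR, name)], check=True)
            except ValueError:
                pass  # skip entries that aren't date-named snapshots

    if __name__ == "__main__":
        snapshot()
        prune()

Run it from cron on the hypervisor; restoring after an infection is then just dropping the VM image and pointing the shared folder at a writable copy of the last clean snapshot.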
> Backwards compatibility is another topic, especially with the rise of systemd.
User level ABI has had no important incompatibility since the glibc released with the kernel 2.6 (don't remember the version). That was some 15 years ago. Most applications didn't even break at that time, and core libraries promise more stability now.
That's nothing similar to the compatibility break between Windows XP and Vista. That transition broke most of the older applications, at the kernel level.
> That's nothing similar to the compatibility break between Windows XP and Vista. That transition broke most of the older applications, at the kernel level.
First I hear of this, so MS did a damn good job of papering over it.
The only Windows breakage on the software level I have noticed is the jump from 32-bit to 64-bit, and that has more to do with CPU modes than Windows internals.
But I keep battling crazy dependencies and odd breakages related to desktop software on Linux. Never mind that devs keep reinventing the wheel (how many VFS implementations has Gnome gone through now? 3? 4?).
> ... are good examples how much safer open source actually is.
Sorry, open source does not equal free software (most of the time). Though what you said may be true for both.
And some day, we will surely know why free software is better than open source. It's only a matter of time. But by then, it will be too late, and out of control.
Eh, never. Not even for open source. Once the source is closed, it is no longer open source (and not free software either).
For software to be open source, the user should have a way to obtain the source code legally (that is, stolen source code won't make software open source).
For the software to be free software, the user should have the freedom to (modify and) replace the software with the user's version of the software (of course, source code availability is pre-requisite for this).
Say, for example, your router, Android phone, TV, car, or your espresso machine could be running Linux, which is open source. You get the source code of those over the Internet or from the vendor on request. But you may not be allowed to change it. So you are always at the mercy of the vendor if something happens (like the one happening now). They are open source, but they are not free software. (GNU [A]GPLv3 enforces this freedom. Some like it, some don't.)
A software can be free or non-free based on where the code is run, not just whether you get the source or not.
This is freedom 1 by free software definition:
The freedom to study how the program works, and change it so it does your computing as you wish.
Although it is often the case that free software (or libre software, or whatever you want to call it) is available at no cost, the term "free software" generally does not refer to the price but to the given freedoms.
>> large institutions still have to run the old OS because they have too much investment in computer controlled hardware with no forward migration. Now they are locked into an insecure technology stack with no vendor to take responsibility
Any company that locks themselves into a specific operating system, and then declines investing to upgrade with each new release is entirely at fault. I can imagine the executives at these companies complaining about how their one-time outsourced application made overseas cannot possibly be migrated. Even if built locally, clearly no money was budgeted to maintain the software or infrastructure. These companies get what is coming to them when their only priority is the current quarter's bottom line, with no planning for how the company will manage to keep operations up and running in the next quarter, let alone the years ahead.
You specifically mention lock-in due to "computer controlled hardware". The idea that companies build the core of their business on hardware that can be controlled with Windows XP but not Windows 7 or Windows 10 is laughable. How is that even possible? The backwards compatibility Microsoft provides means it's nearly impossible for any application to become unusable within a decade - or even longer. The application will need to be maintained with minor changes to make use of modified APIs, or to transition from 32 to 64 bit architecture, etc. - but the amount of work needed is nowhere near infeasible. It only becomes difficult if you spend many years ignoring required upgrades, and then try to perform a single massive upgrade covering half a dozen missed release cycles all at once. Even hardware ports going out of fashion (example: serial ports) is not the end of the world. Compatibility between the latest operating system and old port standards will always be possible, as those that need such things make it happen.
No sympathy for any company still running Windows XP. None whatsoever. It sucks when it's government that is affected, whereby taxpayers' dollars take the hit for the fallout. Still not a shocking, unexpected result. In fact, this is precisely the expected result.
I think MS wanted this in Win10 (hence the big push on the consumer side by giving it away for free). The problem is that plenty of people, even people who should know better, don't want to upgrade something they think is "working well" for an advantage they can't see.
Allowing XP to exist forever is not a good thing for security either. There are security architectures in place within Windows 10 for example that do significantly improve security.
At some point companies need to cough up the money and upgrade their technology.
Is there some philosophical principle under which you believe that companies must "cough up money" for services that they have already ostensibly paid for? That sounds remarkably like extortion.
If Windows XP is proven to be untenably insecure, anyone who bought it should receive a refund.
My car will break down at some point due to imperfect engineering and the realities of physics. Is Ford required to repair my car indefinitely or allow a refund on a car with 250k miles? No, when I bought the car, it came with a warranty stating if they messed up they would fix it within a certain period of time or miles.
When I buy Windows, I agree to a warranty of sorts. They agree to supply updates to the software for a set period of time. Afterward, it is on me.
Nobody can write perfect software, it will age and break down. Nobody can engineer a perfect car, it will age and break down. Demanding infinite warranties is ridiculous.
> Is Ford required to repair my car indefinitely..?
Never. But it would be wrong for Ford to stop others from fixing your car by providing no information about the car, which I believe is what Microsoft is doing with their obsolete software pieces (including the OS).
As that is the case here, they (Microsoft/Ford) are just lending you something; you won't ever own it. Would you agree with that?
The car analogy is a very poor one. Software doesn't wear out -- physical stuff does. Defects in software are present when it's created. It doesn't "age" or "break down".
(I am making no comment on the issue being discussed-- simply that this is a very poor analogy.)
Nearly all complex software will have problems and weaknesses not known at creation, much like nearly every car will have some kind of weakness that will wear out. While there are differences between the physical world and the digital one, I think my critique of the concept of demanding infinite warranties is still valid.
And yes, I do think software can "wear out", not in the same sense as belts get worn and spark plugs physically wear away, but in the sense of threat landscapes changing over time and our understanding of how these systems are used in the world. This is why we do maintenance on our software and systems, much like we perform maintenance on things in our physical world. When you fail to perform this maintenance, bad things happen. Computers get hacked, cars have brakes fail.
Software can indeed age. Go run Windows 95 on the public internet or an early version of Android.
I think the car analogy isn't that bad. New classes of security issues get discovered over time. Development processes which are considered "state of the art" at one point can become unacceptable 10 years down the road.
A decade in software engineering is a significant amount of time!
For the car analogy, what will happen when self-driving cars become the norm and the contained software becomes so important?
I'm going to be annoyed if my car becomes useless after 10 years because they don't have to patch it after that period. On the other hand though, can we realistically enforce lifetime guarantees? What if a car company goes out of business?
Pretty sure the endgame for self-driving cars is pretty much nobody owning cars anymore. It will become too convenient and cheap to rent one instead (possibly not in rural areas, not sure about that). Kind of like how very few people in big cities bother with owning cars.
As opposed to others, I would like to agree with you. One can make design decisions which allow for maintenance over a very long, or indefinite period. This would require using formal methods and a different hardware architecture. Unfortunately, in today's world, we are stuck with mantras like "move fast and break things", which entails running away from, instead of fixing, the complexity we leave behind.
Software does wear out. New languages/frameworks are developed which makes it difficult to patch older stuff. New threats are developed, and it may be impossible to patch older stuff.
I think the discovery of new types of exploits could be considered akin to wear-and-tear of physical things you buy. At the point of sale the software was safe, but over time problems were discovered.
When you buy a house you have a whole battery of inspections performed to make sure that you're buying somewhere safe, but over time the small things that got overlooked (like a small crack in a roof joint) or were considered safe at the point of sale become worn, or are discovered to be unsafe (locks susceptible to bumping, for instance).
It's a tenuous analogy to be sure, but I don't think it's reasonable to think that Microsoft should refund people who bought XP. Are there any Linux distributions that back port all fixes to version 0.1?
Microsoft's support policy says they will only provide security updates for 10 years. Any company who wants more than that can pay them extra for the privilege. That's not extortion any more than extended warranties are extortion.
Microsoft was a monopoly when they sold that contract, which makes it subject to much stricter guidelines on what is allowable in the product they sell.
Assuming we class XP as a defective product, at what point do we stop requiring recalls? If there is a safety defect in a 2001 model car, will it be required to have a recall?
Given that MS even made a patch (which is generally equivalent to a recall), I'm not sure that your suggestion will be given that much credence. I mean, if we say that XP is an unsafe product, the government could stop them from selling it and have it removed from the shelves, but MS stopped selling the product in 2008 (nearly 10 years ago) and has repeatedly urged its customers to stop using it because it is insecure. This is all that the government generally requires in this situation as far as I can tell.
Not all markets or products are the same. You're talking about software as if it were a rotten potato. It's not. It's an incredibly complex market for incredibly complex products. I agree that there needs to be a way to value the liability that software makers should face.
In the short term we need everyone to be better net citizens. That includes the businesses using this software to create the trillions of dollars of wealth on the global economy.
Microsoft still are supporting XP. Just not for the general public.
Organisations with high value software that relies on XP still receive ongoing support from Microsoft (such as the US Navy and anyone else who wants to pay big bucks for it). The difference is none of these patches usually make it to the public.
For Microsoft to patch this current issue, there would have already been a pre-existing team working on XP patches; the only difference is this one was released publicly due to its impact.
See, but you're making practical sense about the real world. That's not going to fly with people using this as an opportunity to push their favorite narrative.
I agree with the point on the NSA. There were surgeries cancelled in the UK. This materially impacted the lives of our allies. How is that supposed to work?
Luckily we've got a set of level heads running every branch of government these days...
> Instead what will happen is more tightening of the walled garden
You know what? I'm starting to get excited for the walled garden to get more walls.
Native desktop applications get far too many permissions by default - it's crazy that any desktop application, once running, can register itself at startup, see all my files (created by any application), register system-wide keyloggers, take screenshots of other applications and download my contacts list, all without my permission. We don't let web apps do that, because web app developers aren't trusted by default. We don't let mobile apps do that, because mobile app developers aren't trusted by default. Why on earth do we implicitly trust any executable file run on the desktop so much?
Telling users not to double click on executables is obviously not working. Even for experienced users, I have no idea whether some random app on the internet is trustworthy. It's a reverse lottery. I also suspect ransomware like this one would have been slowed down if it needed explicit user permission to read & modify files on disk.
We even know what the sandbox should look like, because we have two working examples in the form of the web and mobile. And we have sandboxing support & APIs in most operating systems. We're just missing the UI part.
I'm imagining something like:
- All apps get signed by the developer (Lean on SSL? Not sure the chain here.)
- The app needs to request capabilities from the user, like on iOS. "App X by Y developer wants permission to read the files in your home directory". (/ Read your contacts / Register at startup / Take screenshots / Modify these files).
- Capabilities can be viewed and revoked at a system-wide level in the control panel / system preferences.
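To make that concrete, a rough sketch of what the capability flow could look like from an app's point of view; the grants store, file path and request_capability helper are all made up, just to illustrate the declare / prompt once / remember / revoke idea:

    import json
    import os

    GRANTS_FILE = os.path.expanduser("~/.config/capability-grants.json")  # hypothetical system store

    def load_grants():
        try:
            with open(GRANTS_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def request_capability(app, developer, capability):
        """Return True if the user has granted `capability` to `app`, prompting once if needed."""
        grants = load_grants()
        key = f"{developer}/{app}"
        if capability in grants.get(key, []):
            return True
        answer = input(f'"{app}" by {developer} wants permission to {capability}. Allow? [y/N] ')
        if answer.strip().lower() == "y":
            grants.setdefault(key, []).append(capability)
            os.makedirs(os.path.dirname(GRANTS_FILE), exist_ok=True)
            with open(GRANTS_FILE, "w") as f:
                json.dump(grants, f, indent=2)
            return True
        return False

    # Example: an app asking before touching the home directory
    if request_capability("PhotoSorter", "Example Dev", "read the files in your home directory"):
        pass  # proceed with file access
    else:
        pass  # degrade gracefully instead of crashing

In a real system the enforcement would of course live in the OS or a broker process rather than in the app itself (otherwise a malicious app just skips the check); the sketch is only about the user-facing flow and the central, revocable grants list.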
That's fine and dandy - I'm all for it; in fact, I configure my systems thus with 3rd party tools as much as I can. Android is mostly like this (with a less than perfect implementation).
But when people talk of "walled gardens", they mostly refer to the guardian at the entrance. Only Apple decides what runs on iOS, only Microsoft decides what's in the App Shop. That's NOT good for anyone (except Apple and Microsoft).
Sure, make users jump through hoops to install alternate stores, and warn them up the wazoo when they do that. But do let them, or general purpose computing as we know it is gone.
Security professionals are almost completely unanimous about how effective Apple has been with its "walled garden". I'm not even an Apple fan, but what they have done is pretty amazing from a security perspective. Like it or not, it has worked to keep people safe from many many types of attacks.
Sandboxing and tighter security are orthogonal to app stores. The same security policy should apply to every app, regardless of whether it was installed through the official store or from another source.
What the grandparent is suggesting is akin to UAC, which received much hate when it first debuted in Vista but has now become a mostly accepted part of the Windows user experience. It has been done before, and it can be done again, with every Windows app, not just apps from the Microsoft Store.
They are orthogonal in theory, but so far not in practice - all three appstores in common use (iOS, macOS, Android) have mandated sandboxing and security.
grandparent was suggesting UAC, but started with:
> You know what? I'm starting to get excited for the walled garden to get more walls.
Yeah I don't want the only distribution model to be an App Store. And I don't want to lose the ability to run things with root access.
But I strongly believe that right now apps get too much access by default (read, write all my files is crazy). And if they need anything beyond that they just ask for root. There needs to be much more granular permissions, with more restrictive defaults and nice informative dialogs.
It's unsexy, and inconvenient for developers. But it's the right thing for our users. It's how I want random programs downloaded from the internet to behave.
>You know what? I'm starting to get excited for the walled garden to get more walls.
Yep. What developer types don't like to admit is that for the average user, who doesn't use the features excluded by the walled garden anyway, the tradeoff is well worth the security gains.
Why do you think people would treat them any differently from the UAC screen of Windows 7? That is, just click OK to grant whatever permission it wants, or disable it entirely to avoid the annoyance.
And who do we fine for all the bugs in Open Source software then? The most serious vulnerabilities of late have all been in Open Source packages:
- ShellShock
- Heartbleed
- etc
Do we fine the person who committed the faulty logic, the reviewers, the entire community who "peer reviewed" it?
How many systems were actually compromised in an unrecoverable manner, costing thousands or millions, maybe even billions in damage, due to any of those vulnerabilities?
All of them combined do not even come close to the damage that occurred over the weekend.
Shellshock and Heartbleed were an inconvenience for some sysadmins and clickbait for the tech press.
Heartbleed and ShellShock were serious but nowhere near the seriousness of WannaCrypt. Don't let the top headlines of HN and a fancy logo be how you rate vulnerabilities.
I'd be happy enough to go with "you fine whoever wrote the invoice or cashed the cheque". You wanna sell it? Take responsibility for it. You scratch your own itch and give it away for free? Good on you.
What's your alternative? Are you suggesting we _do_ fine all the OpenSSL contributors? Or that we do not hold anyone except end users responsible for software/hardware security?
I'm not sure metaphors or comparisons between software and lemonade are entirely helpful - although they do push the discussion along, which is at least interesting... (So if I didn't _make_ the lemonade, but published my "4 lemons pulped, 1/2 a cup of sugar, and 2 teaspoons of rat poison" lemonade recipe on github - then you made it and got sick... Who's in the firing line then? What if the README says "this recipe is satire"?)
No system is perfect. Remember Heartbleed? Microsoft released a patch to correct this particular issue in March, however the IT infrastructure in companies is slow, the whole process is convoluted, yada yada.
The point is: the NSA caused this particular problem. Steps should be taken by everyone to ensure something like this doesn't happen ever again.
The NSA did not cause this particular problem. The NSA may have identified the vulnerability, however there is certainly an argument that their other responsibilities outweigh any responsibility that they might have to act as a free security investigation team and report a security vulnerability to an outside corporation.
If Russian government intelligence agency security researchers found that bug first would you say that they have a responsibility to disclose it to Microsoft (notably a United States company)? Would you be surprised if they felt and acted differently?
> however there is certainly an argument that their other responsibilities outweigh any responsibility that they might have to act as a free security investigation team and report a security vulnerability to an outside corporation.
Yeah, a shitty one. Free? No they're funded by tax payer dollars. I do think we need to argue about priority of responsibilities. Was this exploit used to spy on allies?
Don't know, and unlikely to ever find out. If so, it was likely very targeted to avoid detection on modern systems. Was it ever used to spy on Iran's nuclear enrichment program?
> I do think we need to argue about priority of responsibilities.
Ok. What responsibilities does a US government agency have to disclose vulnerabilities? Should they be required to disclose all vulnerabilities found in software and equipment from US companies? Since a lot of that technology is used around the world, are you OK with the corollary of it being harder for the US to spy on anyone using modern equipment?
How about disclosing problems found in tech products used by US companies? Should the NSA do that as well to keep those companies safe?
The US provides a fair amount of funding to organizations focused on finding and responsibly disclosing security problems, notably CERT[1] and US-CERT [2]. The NSA is a completely separate thing.
The media (including Microsoft) may have tagged those projects (maintained by small groups of core developers as free, unpaid software) with fancy names, but those problems weren't anything like the massive, global impact of just one of Microsoft's ticking timebombs due to poor software design and lack of emphasis on security in their products. OpenSSL doesn't and didn't have the PR powerhouse of Microsoft, and people didn't pay for their software, let alone fund its development.
> Microsoft's ticking timebombs due to poor software design and lack of emphasis on security in their products
I assume you know nothing about software with flippant comments like this.
Completely securing software is an incredibly difficult thing to do, and merely throwing resources at it isn't going to change that. It is just as likely to affect well-designed software as poorly designed software. Especially given that all of us rely heavily on third-party libraries and underlying infrastructure.
>Microsoft is responsible for their shit software getting exploited
This is an absurdly naive viewpoint. How are they responsible? What is their responsibility? How is it their responsibility when a state-funded group/actor targets their software and finds an exploit?
At some point you have to realize that 0days will always exist. It is an impossible task to expect software developers to ship perfect software.
Fair enough but, like others have said, who do I fine when my WordPress site gets pwned, or for Shellshock, etc.? I wish this problem was as simple as blaming/fining MS or Google.
I think without: 1) Open Source software AND especially 2) significant financial incentives for finding and reporting bugs, it will be business as usual for the foreseeable future.
Agreed. The NSA has known about the Shadow Brokers breach for 4 or 5 months. They should have warned industry sooner.
They ostensibly maintain their capability to protect us, but this is a clear example of them failing to protect us. The focus on offensive posture is all macho and typical military industrial bluster. My point is that the offensive cyber capability is more about dick length than keeping the country safer.
Nevermind that the internet is a global shared resource that works best when we work together.
Also, MS haters are doing some pretty fantastic replays of the hits in this thread. I get that you don't like them, but "kill Microsoft" isn't the answer. Maybe there needs to be a model for assigning cost to vulnerabilities like this...to Microsoft and the NSA. Make them account for this in monetary terms and you will see change.
Another point which hasn't yet gotten much attention is the valuable role Wikileaks played as an early warning system. Without Wikileaks it is likely that Microsoft wouldn't have had the chance to release a patch ahead of any attacks.
How is that a surprise? Warnings have been around left and right since before Snowden about this kind of scenario. It has happened before, and it will happen again. What's new here is that a suspicious group named Shadow Brokers went public about it and the release, while this usually happens in private.
We would be in the exact same situation if the NSA had immediately disclosed the vulnerability to Microsoft. Old software wouldn't have been updated and someone would have exploited it.
I am not exactly sure what dark powers the US government was using to force people to leave port 3389 open on public-facing IPs. I thought that the CIA mind control experiments failed.
According to CERT [0], the original infection vector is still unclear but possibly phishing, however once a computer is infected, any vulnerable machines on the local network are targets.
Yep, too many companies still use the Ring Fence approach to security, in a day where mobile devices and laptops are moving all the time; this is very very very bad.
All it takes is one infected machine to get behind the perimeter defenses and it is game over.
One of the reasons why such an attack was possible is poor security in Windows. Port 445, which was used in the attack, is opened by a kernel driver (at least that is what netstat says on WinXP) that runs in ring 0. This driver is enabled by default even if the user doesn't need an SMB server, and it cannot be easily disabled.
Most services in Windows run under two privileged user accounts (LocalService or NetworkService). Many of them are enabled by default and are listening on ports on the external interface, so the potential attack surface is large.
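If you want to see that attack surface for yourself, a quick probe of a few well-known Windows service ports (445 for SMB, 135 for RPC, 3389 for RDP) is easy to script. A sketch, run on the machine itself; note it only checks loopback, so a firewall on the external interface may still block remote access even when a port shows as listening here:

    import socket

    # Ports that Windows services commonly listen on by default
    PORTS = {445: "SMB", 139: "NetBIOS session", 135: "RPC endpoint mapper", 3389: "RDP"}

    for port, name in PORTS.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        listening = (s.connect_ex(("127.0.0.1", port)) == 0)  # 0 means something accepted the connection
        s.close()
        print(f"port {port:5} ({name}): {'LISTENING' if listening else 'closed'}")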
Microsoft uses programming languages like C++, which is very complicated, and a little mistake can lead to vulnerabilities like stack overflows, use-after-free, etc.
Microsoft (and most companies) prefers to patch vulnerabilities with updates rather than take measures that would reduce attack surface.
Oh, and by the way, Linux has similar problems. In a typical Linux distribution a program run with user privileges is able to encrypt all of the user's files, access the user's cookies and saved passwords on all websites, listen to the microphone and intercept keystrokes.
The thing is there really isn't a production ready alternative.
Rust in ring 0 isn't production ready -- a lot of language features needed to run in ring 0 are nightly only.
There are no widely used microkernels. Ironically, of the widely used operating systems in the world, Windows does the best job of running drivers in userland.
An SMB server doesn't need to run in ring 0. It doesn't need direct access to hardware or physical memory. The safest option would be to run a copy of the SMB server under the user's account (but it would still allow encrypting all of the files).
Yes, but as free software, it inherently has better solutions.
Using a proprietary operating system is like driving a car only the manufacturer is allowed to fix. You don't get to fix the flat tire, and when the manufacturer drops support, you have to buy a new car. If you don't, these situations leave you stranded.
Why do you claim C++ relates to poor security? OSX and iOS are primarily C, C++, and assembly, (objective C at the higher levels). And linux of course is C and assembly.
Are you saying all of the major operating systems have poor security because they use "vulnerable" languages?
By that definition, pretty much all software has "poor security" regardless of language. I don't think your definition of "poor security" is proportionate or useful.
> By that definition, pretty much all software has "poor security" regardless of language.
My definition of "poor" is that it must have a babysitter to maintain and patch it. Whether or not this is the case depends on the attack surface, which of course depends on the complexity of what it does. A system that has no attack surface can be very buggy without having poor security. But an internet connected machine with modern windows/posix OS that does some useful work will likely need a security patch already within the first couple of years - and that I consider pretty poor.
I think C++ is very complicated; it is difficult to write a memory-, thread- and exception-safe program in it, and it is easy to make a mistake that can be exploited.
Another lesson learned: don't bundle your security updates with your cool new features nobody wants, Microsoft. This will aggravate the problem as more people/companies will defer updates.
I don't know why I'd choose an operating system that does that.
It pushed some telemetry updates, which raises some privacy concerns (only after Microsoft's aggressive attitude about Windows 10 promotion; before that I was OK with its telemetry updates. I'm aware that sometimes telemetry tracking serves a good purpose.)
And much more.
All of these behaviors make me think that I'd rather lose my data than suffer from these "features".
Yes, Microsoft is converting an operating system to a web page where they can track your usage. Have you tried with tweaking software like http://winaero.com ?
Were people not updating to more modern OSes because they didn't want new features or because they didn't want to spend the money on new licenses and testing software compatibility?
And how sure are we that they didn't install security updates out of sheer laziness or hubris?
People who run systems that store sensitive information should take computer security more seriously than the people on Hacker News do. I would never allow my smartphone, let alone computers and servers, to run unpatched software. Why is this acceptable for people who have critical systems and data?
I also wonder how long it will take before the shiny new anti-piracy instruments will be abused by a member of the intelligence community, a low-level politician or perhaps embedded into desktop OSes. http://pimg-fpiw.uspto.gov/fdd/50/148/096/0.pdf (You are not the owner of your files)
Frankly speaking, Microsoft has gone too far into abuse, lock-ins and presumptions.
As a personal comment, I have an old Windows 7 laptop I use with some win32 software; I do not have the slightest intention of upgrading to Windows 10 (not for laziness or hubris, but because IMO the product is not worth the price). And if it was a critical system, then Microsoft Windows would not really be considered among the options.
I'm not advocating for using Windows for critical systems that store tons of user data, but I am advocating that if you do use it, you should use versions that are still supported and make sure you patch it ASAP.
But should Microsoft be expected to back port patches to old OSes in perpetuity?
The issue is not the upgrade per se, but the "imperfection" of the upgrade process (wanted euphemism) and the fact that many consider W10 a worse os if compared to W7.
I would personally use an enterprise Linux distro for something like health records and other critical data, but you can use Windows 10 similarly to how you use Windows 7, and it's a faster OS. You just need to spend some time to get your settings in place.
I was in the same camp of you as Windows 10 vs 7 until I saw how much Windows 10 sped up an old machine of mine.
Users don't want to upgrade; many I know would rather use Linux or Macs. Microsoft should acknowledge the thing and fix what's wrong. IT departments these days are trying to convince the people they work with.
OS editions
- 10: Home [wipb + cb], Pro [wipb + cb + cbb], Education [wipb + cb + cbb], Enterprise [wipb + cb + cbb], Enterprise LTSB [ltsb], S
- 8: Core, Pro, Enterprise, RT
- 7: Starter, Home Basic, Home Premium, Professional, Ultimate, Enterprise
vs
- Debian: unstable, testing, stable, old-stable
- macOS: developers beta, public beta, released
- BSDs: current, stable, release, old-release
I am unsure if the Windows mess can be considered a "naming scheme", the single thing I have very clear is that there's something terribly broken (maybe the whole marketing fuss thing).
Just to mention two alternative ways to get data: bug reports, product feedbacks. You can ask for logs, system diagnostics, backtraces... One may have less data, but probably of a superior quality.
Outsmarting sysadmins, developers and users is not the first need.
If one is not gathering enough data because many are not able to find the tools and/or the website for the reports, that's a usability issue and that is what should be solved.
I seriously think telemetry is the wrong solution for the matter.
New versions of Windows also work slower (especially on old hardware), require more disk storage, contain spyware (telemetry) and advertisement that user cannot disable. And contain no new useful features. No wonder people don't want to upgrade. I think Microsoft should have stopped developing new OS with Windows 7 and release only security and bug fixes.
I disabled updates on my Windows 7 last September when I feared that I'd wake up to a Windows 10 machine like my wife did when her laptop updated to Windows 10. Unfortunately I can't seem to resume updates and fear that I may be vulnerable to WannaCrypt. (Some recent updates succeeded but I don't know if i patched for it)
> Disabling updates is the worst possible solution
If so, more blame lies at the feet of those that make it the only solution.
> Just click no on the windows 10 upgrade dialogue
Would that it were so simple. But Microsoft chose to mean "yes" by the "close this [annoying] window" button, with Windows 10; who knows what they'll come up with for Windows 20.
> (or just upgrade, it's pretty good)
For you, sure. Some people like to make their own choices.
> Refusing to patch
For most people that disabled updates, it wasn't a "refusal to patch", so much as a (read: the only) relief from annoyance.
Disabled the SMB services yet? Win + R -> services.msc
I routinely disable services (until things stop working and I have to figure where I went too far) and luckily I'd disabled this one on my Win7 gaming box, even though the updates came through as well (I just manually vet updates, and have a bunch of them blacklisted for adding telemetry).
Disabling services is good, but beware that they may be re-enabled during a software update. Once a service is disabled, you have to monitor that it remains so.
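Along the same lines, you can script a periodic check that SMBv1 has actually stayed off. A small sketch, run on the Windows box itself: as far as I know, the server-side switch Microsoft documents is the SMB1 value under the LanmanServer parameters key, and an absent value means SMBv1 is still enabled by default:

    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
            value, _ = winreg.QueryValueEx(k, "SMB1")
            print("SMBv1 server is", "DISABLED" if value == 0 else "ENABLED", f"(SMB1 = {value})")
    except FileNotFoundError:
        # No explicit SMB1 value: older Windows versions treat this as "enabled by default"
        print("No SMB1 value set - SMBv1 server is likely still enabled")

Schedule it (Task Scheduler or similar) and you'll notice if an update quietly flips the switch back.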
Thanks I've been searching for this but with no luck. Do you have a link?
(I've disabled SMB V1 as has been suggested in this subthread. I've also run MS Defender with latest virus sigs and so far it hasn't reported anything)
b) I'm worried my fairly nicely working Win7 environment will not work so well after updating to 10, as much as I want to get current with some genuinely useful features.
I'm generally a Microsoft "fan", but this is one of the many reasons I hate on them as much as Linux fans.
I never "upgrade" from one Windows OS to another. Always done a clean install. I postponed the upgrade because it's literally a couple of days' time project for me to get my dev environment up to speed. I also planned to purchase a new SSD before doing the new install (kill two birds with a single stone.)
Unfortunately I've been so busy with project deadlines that I haven't had a weekend I could dedicate to the new install and set up.
There's a lot of blame being thrown around, and I think it's all merited, but an inordinate amount needs to be on the users. I don't know how many times I've heard things like: "I don't think I'll update to Windows 10" or "That update has been nagging me for months" or even security advocates saying "Windows 10 is a privacy nightmare, I'll stay on 7". Being on the latest secure upstream isn't a nicety, it's what you have to do if you want any semblance of a secure environment. If you don't like upstream, jump to another.
It's definitely not just end-users either. There's a grocery store that just went up nearby that I saw a Windows XP splash screen on when one of the cashiers rebooted. No joke: new store, Windows XP computers that handle money. Microsoft may have cultivated this nightmare, but it seems everyone wants to live in it.
> Being on the latest secure upstream isn't a nicety, it's what you have to do if you want any semblance of a secure environment.
Windows 7 is in extended support to 2020. So as far as I know it is still up to date security-wise.
> There's a grocery store that just went up nearby that I saw Windows XP splash screen on when one of the cashiers rebooted.
The cash register may be even running with a user interface written in VB6. Don't attach it to an external network and it will work just fine. No need to invest in new hardware/software when you can get it old, working and cheap.
> Windows XP computers that handle money.
In what way do they handle money? A computer virus isn't going to steal paper money and the device operating the card reader should have been sufficiently separated to begin with.
Do you really think that the machine does not handle credit cards as well? Provide a daily management report? Report inventory? Provide a Facebook interface between customers via the big blue E icon?
> Do you really think that the machine does not handle credit cards as well?
I don't know about the U.S., but as far as I know, where I live these card readers have to be almost completely separate systems. The connection between these two should only exist to a) set the price to pay and b) confirm that a payment was made.
> Provide a daily management report? Report inventory?
No longer managing money directly, so the possible abuse for financial gain is quite restricted. You could argue that someone manipulates the reports in order to skim some money for himself, however that would be a rather targeted attack with someone on the inside profiting and could be detected when the physical goods no longer line up with the reported values.
> Provide a Facebook interface between customers via the big blue E icon?
* Predicate the commercial viability of your software on the basis of technological illiteracy
* Blame the technologically illiterate 'luser user' when things go wrong
* Try and profit from it even as you blame said 'luser user'
The best lesson for Microsoft would be if it incurs a tremendous loss to its reputation, and more importantly its bottom line, because of some issue like this.
It is strange to see people talking about how they took an exception and released a patch for Windows XP this time. Generally, such an exception is the very definition of CYA. If not, why don't they do it for all patches? Read: if the security hole can be used as a way to convince the 'luser user' to pony up more money, don't release a patch. But if the issue is so high profile (for example linking MSFT to a three letter organization), then better issue a patch and CYA.
No one in the UK seems to be tying this attack to the Conservative Party's desire for backdoors everywhere, which is a shame because it's a nice example for the public of how the government have got this very wrong.
Reddit is all over it although it has turned into something suitably reminiscent of Alex Jones' material. Jeremy Hunt is apparently directly responsible for running XP on all NHS equipment and pulling the plug on the support contract for post-extended-support causing the deaths of thousands of people while he rolls around in the dust of the crushed skulls of all his victims.
I would rather see it used to build opinion against back doors and surveillance culture, but alas, this is merely administrative incompetence: a failure to either upgrade or air-gap systems that have had a clock ticking on them and plenty of notice from the vendor. The buck should stop with the trust IT directors, as this was entirely avoidable with a properly managed estate.
One scary thing about these security holes is that it's almost impossible to check if your system is affected.
There are at least 50 different releases of Windows 10 alone, and it's hard enough to find out which one is actually in use.
The "System" dialog shows "Windows 10 2015 LTSB".
"winver" on the command line shows "Windows 10 2015 LTSB build 10240" - but there are several releases of that, and only the latest ones (10240.17236 and up) have the patch. I can't seem to find out which one I have.
I don't doubt I have a patched version, but out of curiosity I'd just like to double check.
1. Search for "windows smb server vuln" in Google.
2. "Microsoft Security Bulletin MS17-010 - Critical"[0] is the link I'm looking for.
3. Search for your version in the list. Mine is "Windows 10 Version 1607", listed in the table with 4013429 (right next to the Windows version, not in "Updates replaced"). That's my update number.
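If winver or the System dialog doesn't show the revision, the exact build can also be read from the registry. A minimal sketch, assuming Windows 10 and Python 3 (winreg is in the standard library; "UBR" holds the revision part, i.e. the .17236 in 10240.17236):

    import winreg

    KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        build, _ = winreg.QueryValueEx(key, "CurrentBuildNumber")  # e.g. "10240"
        ubr, _ = winreg.QueryValueEx(key, "UBR")                   # revision, e.g. 17236

    print("Windows build {}.{}".format(build, ubr))

Compare the printed build against the builds listed for your servicing branch in MS17-010.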
A lot of people are kicking sand in MSFT's eyes for having such a vulnerability... but come on, the code base for Windows is enormous. The feat of engineering that is Microsoft Windows (and its many iterations) is pretty amazing when you really look at it. Yes, plenty of flaws, but show me some other software that has endured like it has.
Further, all of the major infections are based on Windows XP. Windows XP came out more than five years before the first-gen iPhone! It's seriously ancient and there are very few excuses for people to have this crap on a network in 2017. For the folks who don't run XP but got infected because they didn't patch? No excuses.
If I booted a RedHat (5.2 came out in 2009ish) or FreeBSD machine from 2009 without patches, and put it on the internet, I'm pretty sure it'd be hosed just as bad (shellshock, heartbleed, ?). the difference is, everyone would tell me I'm an idiot for putting a machine online from 2009.
> If I booted a RedHat (5.2 came out in 2009ish) or FreeBSD machine from 2009 without patches, and put it on the internet, I'm pretty sure it'd be hosed just as bad (shellshock, heartbleed, ?). the difference is, everyone would tell me I'm an idiot for putting a machine online from 2009.
As a tongue in cheek (but totally true) correction, FreeBSD from 2009 would NOT be vulnerable to the shellshock vulnerability unless you explicitly install `bash` and make it the shell used by apache-cgi.
> It's just that expecting perfection security-wise from complex systems is a fools errand.
I think that may have been the OP's point. Bash is more complex than sh has to be; because FreeBSD chose the simpler option, they avoid the inherent security implications of complex systems.
Exactly. FreeBSD uses the simplest solution for the task, in the name of security. FreeBSD isn't "secure from Shellshock because they don't use Bash" but rather "secure because by default only the most basic, necessary software is installed" - which happened to be sh instead of bash.
Should hospitals such as UK's NHS and other such organizations use dumb terminals (or chromebooks) instead of Windows? That way data is centralized on servers where it is easy to backup and harder for hackers to hold to ransom.
If instead of desktop win32 applications they had used web applications none of this would have happened.
Servers are much less vulnerable for a number of reasons:
1) People managing and configuring them are more security-conscious than the vast majority of people. Come on, nobody downloads an email attachment or connects a USB stick they found in the parking lot to a server.
2) It's much cheaper to keep a server updated than a thousand Windows clients.
3) Like whitefish pointed out, even in the worst case scenario you can restore a backup and keep on truckin'.
Let's be clear on this. No matter how secure the operating system initially, if it stays unpatched then over time it will become more and more vulnerable as uncovered exploits go unfixed.
The reason a machine might go unpatched is that it might support some critical hardware (e.g. medical equipment) for which there are only one or two vendors, and only a particular combination of hardware and software is supported (e.g. due to a specific custom hardware driver).
To lay the blame for this at a single vendor's feet is naive.
There are very few free/open-source operating systems that get security patches for as long as Windows does.
Major versions of OpenBSD are only supported for 5-6 years. Most Linux distributions only get 3-5 years. Red Hat promises 10 years of support, the same as Windows 7/8/10. None comes close to the 13 years that Windows XP was supported for.
So you're gonna have to update anyway, at roughly the same interval if not more often, as if you had used an enterprise edition of Windows.
> Major versions of OpenBSD are only supported for 5-6 years.
I thought that security updates are only made for -current, the current stable release, and the previous stable release. So, 1 year of support, not 5-6.
A cursory look at the errata seems to confirm this.
Most of the time, upgrading from one minor version to the next is painless. If you installed OpenBSD 5.0, you are expected to keep updating all the way to 5.9. (For some reason, OpenBSD always makes exactly 9 minor versions for each major version.)
Most Linux distros don't even make any fuss about minor versions, using them only as an opportunity to build fresh installation images. New minor versions are security patches for the major version and all previous minor versions.
> It'd be a good start if they just didn't use Windows.
I hear tell that, server-wise, NHS IT will also support OpenSUSE, and their record of keeping that patched is almost as good as their record of doing so with Windows.
Yes, they really should. Important facilities should not use Windows anymore, because it is too exposed to withstand an attack. Or the hospital's computers should not be connected to the internet; most of the time the computers within a hospital are just doing local tasks.
It's not actually like that. They have a heavily restricted backbone and lots of little isolated networks hanging off it. This is lots of independent cases of idiocy causing infection.
Poor policy controls, poor patching, and lacking user education are the root causes of the NHS problems.
> We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits.
This whole incident is really raising the profile of the creation of "cyber weapons".
They aren't like physical weapons with physical controls -- they are digital; the controls on them, and the cost of copying and distributing them, are more like digital music than anything a government organization is used to.
One thing that strikes me about this malware is that it hits pretty much every single country. Don't hackers try to follow the proverbial "don't shit where you eat" rule? They have nowhere to hide if they are identified now.
I don't know anything specific that leads to this conclusion, but let's pretend that Russia is not only behind the Shadow Brokers (and I don't know that) but that this ransomware was also released by the Russian government (I really don't know that...).
In that thought experiment, what could be the possible reason for attacking themselves so hard? Well, to give themselves more plausible deniability (the whole attack would be an attempt to discredit the NSA)... but also to justify an agenda of technological sovereignty. Russia is in a tug of war with American corporations over where data is stored, and they've even blocked the Microsoft-owned LinkedIn. It's impossible to find an alternative to Windows (considering Russia is such a big PC gaming country), but who knows in 10-15 years.
This malware was first released as part of a massive spam campaign, and then from there wormed its way onto other systems. It was definitely released on purpose.
For those who think that using free software would have turned out similarly (naming Ubuntu or even CentOS):
The real question is why a hospital is still running Windows XP even though it's no longer supported by its own vendor.
The answer is vendor lock-in. The upgrade is not a matter of a simple command. The upgrade cost involves more licenses and hardware upgrades (which shouldn't be needed, since the old hardware is fine, but this is how things work between Microsoft and hardware vendors). It's like having to buy a new watch just to apply daylight saving time.
Also, Microsoft and old-school desktop software vendors used to make sure the switching or upgrade cost was really high, e.g. by using non-standard formats, to keep users from switching to Mac or Linux.
If you remember ActiveX and Internet Explorer-specific VBScript...
If you use free software from an expensive but decent vendor like Red Hat, you can upgrade the software on the same hardware.
And if the software becomes too expensive, you can switch to CentOS or Scientific Linux, or pay anyone to handle it for you at a fair rate. Everything is standard, and there is no vendor lock-in.
> I see three areas where this event provides an opportunity for Microsoft and the industry to improve.
Fixed version:
> I see three areas where this event provides an opportunity for Microsoft, the industry, and government to improve.
To be fair, he does go on to point out how this is partly the fault of poorly conceived government policies, namely the NSA's foolish practice of stockpiling exploits. But Microsoft and the industry should keep the heat on the government about this at every opportunity, because the horrifically bad and analogous idea of having government master keys is still being pushed forward.
And what about the lesson that software should be mortal, and should one day die? By what metric is, e.g., Windows XP subject to evergreen updating to mitigate (prevent or reduce the impact of) this exact scenario, forever? Does Microsoft have the right, and even the obligation, to remote-detonate every Windows XP installation in existence on a certain date?
Perhaps EOL should be literal. The software kills itself and does not function.
The lesson I'm getting is that our software can become malicious, and that malice can spread like wildfire. Is a company obligated to patch any wildfire-type bug forever? Is that a cost of proprietary software? Or is setting a date for its death the cost?
I think aging proprietary software has a much greater chance of becoming a weapon than of becoming inconveniently obsolete. So forcing a company to release the code as free and open-source software upon the EOL date, I think, just enhances the chances that it gets weaponized. There's a greater incentive to find exploits than to fix them in old software.
Another lesson is that most people really shouldn't be using Windows. If you can't afford to pay Microsoft to keep your software up to date, then use something that's FOSS and is up to date. (The same rule applies to Apple: if you can't afford new hardware in order to run current iOS/macOS versions that are being maintained, then don't buy stuff from Apple anymore.)
How did MS know to patch a month before the exploits leaked? Did they get advance notice, as a courtesy from the NSA or someone else, that the exploits had leaked?
Lesson 1: don't use Windows.
Lesson 2: be it a web resource or your PC, make sure you can restore all your data/software from clean, current copies.
Lesson 3: test lesson 2 periodically (a small restore-check sketch follows below).
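To illustrate lesson 3, here is a minimal sketch of one way to test a restore: hash every file in the live tree and in a restored copy, then report anything missing or different. The paths are hypothetical placeholders, and comparing SHA-256 digests is just one reasonable choice, not anything prescribed above.

    import hashlib
    from pathlib import Path

    def digests(root):
        """Map each file path (relative to root) to its SHA-256 digest."""
        return {p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in root.rglob("*") if p.is_file()}

    def restore_ok(live, restored):
        a, b = digests(Path(live)), digests(Path(restored))
        missing = sorted(set(a) - set(b))
        changed = sorted(p for p in set(a) & set(b) if a[p] != b[p])
        for p in missing:
            print("missing from restore:", p)
        for p in changed:
            print("contents differ:", p)
        return not missing and not changed

    # hypothetical paths -- point these at your data and at a test restore
    print("restore OK" if restore_ok("/srv/data", "/mnt/restore-test/data") else "restore FAILED")

Run something like this on a schedule, not just once: a backup you have never restored is a backup you merely hope you have.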
If Windows were open-source, would the situation have been any different?
Would organisations with very conservative attitudes to upgrade paths or a requirement to run an older OS version have suddenly been patching nightly?
Would the exploits used have been identified and patched prior to their malicious deployment?
Would organisations with a vested interest in stockpiling exploits have elected to immediately notify projects' maintainers?
The answer to these swings wildly between 'maybe' and 'probably not', so the eventual endpoint is likely largely the same. It's a compound issue brought about by a chain of decisions made by disparate organisations, and using it as a stick to beat Microsoft or proprietary vendors in general with is missing a very important point -
Security is the responsibility of everybody involved, from vendors and the government, all the way down through to the people innocently opening infected attachments.
That has been the case for over a decade, and it has been getting worse over time.
The reason I recommend a free operating system is not that you are allowed to read the source (although that is a bonus); it is that you have the freedom to control your operating system.
The problem with Windows is that "updates" are done in the most inconvenient way possible, and with no control by the user. They often include changes that the user does not want bundled in with security patches. To contrast, a free operating system gives you options (liberty). If I just want an old stable version of Debian with security patches, I can get it.
The issue here stems from using proprietary software in the first place. Proprietary software is controlled by the company, not the user.
I think the lesson is to have less uniform, opaque bloatware controlled by disinterested parties, whether through proprietary technologies, walled gardens, OR paternalistic update policies. Have some diversity in the network, let people really know and choose what they want turned on and off, and have only the minimum needed for the job turned on by that informed choice, and half of these problems go away.
Preventing attacks from the internet is a really hard problem. The more open a system is, the more dangerous it may be. In this attack, macOS and Linux were safe, maybe just because those systems are not as open and a malicious program cannot get the access it needs to do something bad. And usually the update that prevents a particular kind of attack comes later than the attack itself.
It's not just a question of people not keeping their computers updated. I have bought a few second-hand computers with Windows 7 in the last few months, and they have all had problems when updating. I doubt most people even notice this; they just assume they are up to date.
The real question should be: Can Microsoft write an OS that does not have to be constantly patched, month after month?
We know they have written such things as part of research. But still they continue to release software that is unfinished.
They have trained their users to believe that failure to update is fatal. Which is no doubt true, if they are using Windows.
They also like to conflate "update" with "upgrade". They use these security problems in Windows to scare people into upgrading to Windows 10, whether they like it or not. As others have noted, the new versions are not by design any safer than the old ones.
Retroactively fixing reported issues does not make a new version more secure by design. They could just as easily fix the issues in the older version.
Can this company get anything right the first time? Will they ever design a system that is secure?
Do they have any interest in doing so?
Are they incapable?
There is nothing wrong with releasing something simple, secure and finished.
Does MS believe Windows users are not worthy of a secure OS?
I think Microsoft Research has contributed to the development of L4 systems that run on baseband processors.
Do these systems have the same vulnerabilities as Windows?
Fixing problems after they occur is admirable, but free and open-source OSes written by volunteers accomplish the same thing. The question is whether the design of the system is such that future problems are avoided.
Does Microsoft believe Windows users deserve more security? Can Microsoft deliver it?
All indications suggest the answer to both questions is no.
With no viable alternatives, no one can blame Windows users for sticking with it despite red flag after red flag, but it makes no sense to defend the Microsoft approach to security for Windows users. The company has no respect for Windows users.
Being responsive to a constant stream of reported vulnerabilities is an improvement from 1995 but as we can see it is not enough. Their software is still full of mistakes.
They need to prove they can make something that is secure by design and that they are willing to do so for users.
(Truthfully, they probably do not need to do anything.
Estimates that 80% of Windows installations are tied to hardware purchases are probably not far off the mark.
There is no selection of OS by most computer users.
A majority of users still get Windows pre-installed on the computers they purchase.
Microsoft could completely ignore users and it would not hurt their business, as long as they continue to maintain relationships with hardware manufacturers.)
Pretty sure this is a highly targeted piece of PR designed to shift the blame away from Microsoft's appallingly poor operating system design, especially when it comes to security. Is the NSA a deceptive, anti-humanist organisation that performs atrocious acts against people? Yes, I absolutely believe so, and they play a HUGE part in this. But Microsoft is the irresponsible software vendor here, and do they reimburse the people who have PAID for their software? No.
> Finally, this attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017. We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen. And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world today – nation-state action and organized criminal action
Did the Microsoft President just confirm that the NSA developed the vulnerability which led to the attacks on hospitals this weekend?!
> An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen
This is a bad analogy. The solution to people stealing your Tomahawks is to guard your goddamn bombs. A better analogy would be the U.S. military seeing Al Qaeda has a bunch of Tomahawks and doing nothing because they might be aimed at ISIS.
The NSA hoarding / leaking aspect of this vulnerability has been reported by most major news outlets. Even the mainstream ones. Albeit most haven't expanded on that point to the level that Microsoft did here.
This (as far as I know) was one of the first reports with details of the malware, and it clearly mentions it; other analysts haven't said otherwise, which by now they would have if they disagreed: http://blog.talosintelligence.com/2017/05/wannacry.html
>A month prior, on March 14, Microsoft had released a security update to patch this vulnerability and protect our customers. While this protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally.
They stopped supporting Windows XP years ago, including with security updates.
There are still around 100 million computers around the world running XP.
It seems irresponsible to just hang them out to dry when there are that many machines out there running it. A virus like this seems inevitable if they do. And shifting the blame onto the customers is not reasonable when there are still 100 million customers who are "doing it wrong" by not upgrading to a later version of Windows.
This entire article is devoted to shifting the blame directly onto their customers and onto the governments of the affected countries (!)
>The fact that so many computers remained vulnerable two months after the release of a patch illustrates this aspect
Again, XP systems are the most affected, and no patch had been released for XP before the outbreak. This is extremely irresponsible of Microsoft, and this article's shifting of the blame onto everyone but themselves is reprehensible.
I'm generally much in favor of holding vendors accountable when they abandon users, but Microsoft always had clearly communicated timelines for support, and in the case of XP even extended them later (due to Vista being crap). Windows 7 had been out for 5 years or so when XP support ended, as everyone knew beforehand. It's known you won't get support, it's known Windows XP is going to have security issues - what do you expect to happen when you don't take appropriate measures? (Options include replacing it, increased network isolation, virtualization, ..., depending on why you're still running it. Even just a really good backup strategy makes a difference right now.)
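For boxes that genuinely cannot be patched or replaced right away, one widely publicised stop-gap at the time was turning off the SMBv1 server that EternalBlue targets. A rough sketch of doing that through the registry, assuming Windows 7 / Server 2008 R2 or later, administrative rights, and a reboot afterwards (XP itself offers no such switch, and disabling SMBv1 can break old devices that still depend on it):

    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        # SMB1 = 0 tells the server service to stop speaking SMBv1,
        # the protocol the EternalBlue exploit abuses.
        winreg.SetValueEx(key, "SMB1", 0, winreg.REG_DWORD, 0)

    print("SMBv1 server disabled; reboot for the change to take effect.")

This is a mitigation, not a substitute for the MS17-010 patch itself.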
Customers like this are why we now have Windows 10, where you're force-fed updates and the OS changes under you, instead of the change being an upgrade to a new major version that you can delay for years. (Which I'm not happy about, but I can see its benefits at that scale.)
The best argument for Microsoft doing wrong here might be that they limit their (expensive) super-extended support to large organizations. Since they do the work, keeping a few boxes with special hardware patched should be an option for smaller shops as well (and is IMHO easier to defend than keeping a large network full of XP desktops running because ?)
How long should Microsoft be required to support XP? They extended the original support period TWICE. Why are customers entitled to support when they were informed prior to purchasing the product that support expired on a given date?
Maybe newer OSes do not have any useful features for those customers? Maybe they are even worse for them because they run slower, are not compatible with old drivers, and contain spyware (telemetry)?
Is a company obligated to sell a product with features that you consider useful? Intel doesn't make pre-ME CPUs anymore. Apple doesn't make Power PC iMacs anymore. And Microsoft doesn't make Windows XP anymore. In all these markets, there are consumers who would prefer to purchase the discontinued product. So what? Products get discontinued.
Consider a discontinued product from another industry, like a car or an appliance. When the product is discontinued, the manufacturer only creates replacement parts for existing machines for a limited time period. After some years, it's difficult for a consumer to maintain their copy of the discontinued product because it is difficult to find replacement parts.
The point is, mass-produced engineering products have lifecycles. Microsoft clearly defined (and extended) Windows XP's lifecycle and provided patches for the entirety of that lifecycle. It's hard for me to understand how that doesn't fully meet their obligation to be fair to their customers.
While you are right, there is a difference: you can drive a 20-30-year-old car (if repaired) on modern roads, but once you connect a PC with a 20-year-old OS to the internet, it will get infected. And a 20-year-old browser will not be able to display modern websites.
Maybe when cars become more computerized(?) and connected, they will become unusable faster.
While there are more than 100 million users of it, they should continue to supply security updates for it. Otherwise, a widespread virus like this is 100% inevitable.
They didn't hang XP users out to dry. Remember those forced free Windows 10 upgrades they pushed out?
The XP support schedule was available from day one. These companies knew exactly what they were getting into. Microsoft even extended the support period for XP on several occasions. It's galling that we as software professionals don't call this out as malfeasance by the entities still running XP. They've had close to a decade to upgrade. Software is not a durable asset; it comes with an expiration date on the box.
This isn't just about security patches: there are pieces of XP that are fundamentally insecure, which is partially Microsoft's fault. But on the other hand, the driver model, one of the weakest parts of XP, is the very thing that kept many of these companies from upgrading.
Microsoft is feature- and sales-oriented, not quality-oriented. Security is an aspect of quality. So if you like to voluntarily put yourself at risk, by all means use their products.
Their product design doesn't emphasize security. For example, remember the extremely convenient AUTORUN.INF feature? That has probably resulted in billions of dollars lost and that number continues to grow every day.
Rendering fonts in the kernel... fantastic idea! What's the next great Microsoft idea? Continue to buy their products and find out.