Another current thread is https://news.ycombinator.com/item?id=25409416. Both discussions are good, but I don't think we'll merge them, because this one is oriented around the more specific information in the fireeye.com article.
"In observed traffic these HTTP response bodies attempt to appear like benign XML related to .NET assemblies, but command data is actually spread across the many GUID and HEX strings present. Commands are extracted from HTTP response bodies by searching for HEX strings using the following regular expression: "\{[0-9a-f-]{36}\}"|"[0-9a-f]{32}"|"[0-9a-f]{16}". Command data is spread across multiple strings that are disguised as GUID and HEX strings. All matched substrings in the response are filtered for non HEX characters, joined together, and HEX-decoded. The first DWORD value shows the actual size of the message, followed immediately with the message, with optional additional junk bytes following. The extracted message is single-byte XOR decoded using the first byte of the message, and this is then DEFLATE decompressed. The first character is an ASCII integer that maps to the JobEngine enum, with optional additional command arguments delimited by space characters."
Dang, that's pretty sneaky. This is one heck of a hack.
This paper sheds some interesting perspective on the development and detection challenges attached to this sort of signalling, although it's not about computing as such.
https://www.nature.com/articles/s41598-018-22926-1
Maybe because exfiltration had an established meatspace usage before it was applied to data, and for a lot of people, applying it to data might be an extension (though a straightforward one) of the usage they're accustomed to. Probably less necessary on HN than it would be in a forum that isn't explicitly technical.
Once they find the software, they can reverse-engineer it. Finding it the first time is the difficult part. That can happen based on a tip-off (e.g. every security team in the world will be checking their network for these processes, domain names etc. now), a lucky find (e.g. if the backdoor triggered weird crashes and someone took a look), or because it's linked to some other bad activity (e.g. someone finds some form of known malware and investigates how it got there).
In this case, it's likely that they somehow found the backdoor in SolarWinds' network and realized what was happening that way.
The vast majority of these finds come from honeypots, where any unaccounted-for packet is already 100% an attack. Honeypots often run alongside legitimate systems and may have several different levels of firewall to sieve different vectors or assess the depth of the attack.
I frankly don't know how anyone finds these sorts of buried attacks anymore. Even when software systems were simpler, it was difficult to know enough about a system to detect or observe abnormal behavior, and that was with a fairly deep understanding of how things should be.
These days, so many systems are a tower of SaaS APIs mixed with commercial on-prem software, mixed again with internally developed software spread across multiple teams, with high developer turnover and a focus on functional software over specification/documentation anyone could compare against. It seems like finding these issues is a fluke: they're discovered either by dedicated teams looking only for them (security auditing teams) or by developers who happen to run across abnormal behavior in the course of normal maintenance.
Right. If given the prompt "this system likely or definitely has a backdoor - find it", this won't be all that hard to find. Especially if you already have a PCAP that you suspect has some of the malicious traffic.
But it could go unnoticed and disregarded as completely normal, benign network traffic for years, or perhaps forever.
Thanks. This is voodoo magic to me. I understand the basics of computer programming, but this is way more fascinating than the stuff that I know and touch. I doubt this stuff is taught in schools; maybe the basics like network programming, but this is definitely highly sophisticated. Kudos to whoever designed the scheme and whoever found it!
Random aside: the Windows Error Reporting system (aka Dr. Watson) was primarily a tool to help people write better code. Crash reports got sent to Microsoft, referenced against symbol files, and aggregated into call stacks ranked by crash frequency. Companies could sign up to get summaries of the reports and improve their software based on real-world usage. At the time, this was a big deal.
Then someone realized it was also a good early warning system for new viruses, as many viruses would crash their host process in novel ways that were unlike the usual software-induced errors.
WER reports also could do other things. Sometimes bizarre, impossible crashes would happen. Microsoft would investigate some of these by showing a popup to the user inviting them to participate in analysis. If the user consented, they were put in contact with a Microsoft engineer. Turned out a lot of people were running unstable, overclocked hardware sold to them by vendors who had fraudulently misrepresented the hardware.
The telemetry that is out there is amazing, but not as amazing as the secrets it can reveal.
The author is Raymond Chen, and the blog is probably the single most influential blog on Windows internals. He has decades of amazing posts that are well worth a read.
That’s really interesting to read about. I always recall an early case in my career, where a customer’s storage device crashed, leaving a unikernel core file. They suffered data loss so it got a lot of engineering attention. This model was old even circa 2001 and ran a DEC Alpha processor. After a week of full-time investigation by our best engineer, the conclusion was that the processor...took the wrong branch. That was it, it just failed like a broken machine. Which I guess is what it was!
If you're interested in this stuff, "Countdown to Zero Day" by Kim Zetter is a fascinating read. It's lightly written but not light on technical details, and it provides a very detailed account.
My CS program included a class that required us to exploit compiled code. Phrack Magazine and loads of other public resources probably have ideas like this for concealing data. Kids I knew in Junior high and high school were writing password stealers for Windows that would just iterate over every HWND (or whatever Windows 9x called handles) looking for inputs of type password, and concealing the app and the results.
It doesn't take a great deal of sophistication to come up with some of these things, just a bit of cleverness and exposure to the possibility of cleverness.
Odds are, if you're a programmer, that you'd have come up with a very similar scheme, given knowledge of the kinds of messages the software is expected to send or receive. I.e. leave the envelope plausible-looking and stash the payload in the random-seeming bits.
Although not a professional programmer, I do agree with what you said. But the whole scheme also includes execution on other fronts (e.g. how did they plant the payload).
Remember that a group of Minecraft players managed to reverse-engineer the seed of a map based on a single low res screenshot using, among other things, the shape of clouds.
Exactly. Human ingenuity at scale can figure out wild things. I'm reminded of back when I played MMOs. No matter how hard the company tried to 'balance' the characters, it would only take a couple of days before players figured out optimal solutions that the company often didn't take into account.
This only works for casual untrained observers. A proper infosec analyst looking at this would first wonder why there is HTTP traffic related to .NET assemblies at all (this sounds weird). Then comparing two requests or so would likely show a suspicious pattern of all the hex changing randomly while the rest of the payload is cookie cutter the same.
Unless the outer layer is a legitimate service that's actually in use, this kind of thing only fools people who'd dismiss this traffic as "something I don't know about, but which is probably benign". Then there's the whole figuring-out-what-IPs-this-is-going-to part, which would raise more alarm bells.
Cute, but I think you could do better. Hiding from someone looking at your traffic is very hard. The more important part is how well you hide from dumb automated tools that people rely on for initial detection.
Then again, a huge portion of the auditing/"infosec" market nowadays are untrained random people running automated scanners who actually have zero reverse engineering or proper security research experience, so I'm sure it'd work well against those.
> A proper infosec analyst looking at this would first wonder why there is HTTP traffic related to .NET assemblies at all (this sounds weird).
This isn't accurate. If you look at the Snort rules used to block it[1], it is masquerading as traffic to .solarwinds.com (ie, the vendor) to URLs looking like: swip/upd/SolarWinds.CortexPlugin.Components.xml
Unless you knew the software isn't supposed to do that, it isn't suspicious at all.
Hitting a vendor web server on plaintext HTTP? That right there is a massive red flag. If this were legitimate traffic, that'd be enough reason to drop the vendor right then and there.
If this were HTTPS then it wouldn't need the obfuscation to pass by undetected. And then there isn't much you could do at that point to find it via traffic analysis, assuming the uncompromised app makes similar HTTPS connections, other than perhaps going deeper into traffic pattern analysis if you're lucky.
Once the threat is identified some other way, it might be possible to develop blocking rules that work at the ciphertext layer (e.g. from packet size patterns exhibited only by the backdoor requests).
I wish that were the case, but unfortunately some credit card processors still send some credit card processing payloads over http in some circumstances.
> This only works for casual untrained observers. A proper infosec analyst looking at this would first wonder why there is HTTP traffic related to .NET assemblies at all (this sounds weird). Then comparing two requests or so would likely show a suspicious pattern of all the hex changing randomly while the rest of the payload is cookie cutter the same.
A proper analyst working in a well-funded clean environment with carefully defined legitimate traffic patterns. That is not a majority of organizations and it would be very easy to miss something like this in the other 99% where they’re understaffed, dealing with the noise of routine malware, and what appears to be a poorly written vendor application doesn’t stand out as much when you have hundreds of them.
A big takeaway here is that telemetry is a security threat. If your application is pushing out opaque traffic at regular intervals without adequate reason, you are providing good cover for malware activity.
Even a fresh Windows install these days has so much regular network activity for telemetry and other services, it's trivial to hide bad behavior.
Many detection tools do leverage the fact that a spike in NXDOMAIN DNS responses could indicate malware. However, such signals are also exhibited by infrastructure issues. This is somewhat analogous to cancer detection: cancer cells are really good at disguising themselves.
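A crude version of that NXDOMAIN-spike heuristic might look like this (the thresholds and the `(client_ip, rcode)` input format are invented for illustration, not taken from any real tool):

```python
from collections import Counter

def nxdomain_spike(dns_responses, baseline_rate=0.02, min_queries=200):
    """Flag clients whose share of NXDOMAIN answers far exceeds a
    baseline. `dns_responses` is an iterable of (client_ip, rcode)
    pairs; DNS rcode 3 is NXDOMAIN."""
    totals, failures = Counter(), Counter()
    for client, rcode in dns_responses:
        totals[client] += 1
        if rcode == 3:
            failures[client] += 1
    # Report clients with enough traffic and a failure rate well
    # above baseline (10x is an arbitrary illustrative cutoff).
    return {
        client: failures[client] / totals[client]
        for client in totals
        if totals[client] >= min_queries
        and failures[client] / totals[client] > 10 * baseline_rate
    }
```

As the comment notes, a DNS outage or misconfigured resolver trips the same signal, which is why this is a lead for an analyst rather than a verdict.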
For those following along at home, these are caused by Domain Generation Algorithms that try a ton of (deterministically generated) domain names. The attacker need only register one of them, and if the domain gets frozen they can often just register another -- until the generator algorithm is integrated into the registrars' blocking systems, which takes time and work.
Personally I'm surprised we don't see more malware using the various DNS replacements that sit on top of Ethereum. They were using Namecoin for a while.
probably harder to reliably get to Ethereum than to reliably get to any DNS server. for indiscriminately-targeted ransomware Namecoin etc. would make sense, but DNS is a better choice for internal networks.
Also, security conscious companies would flag a connection to Ethereum as suspicious in itself, so your effort to hide yourself could in fact draw more attention.
I know; it really depends on the kind of network the malware is deployed into. If it's some quant trading group on Wall Street, or the Treasury department, using EDNS for CNC would be outstanding camouflage. If it's some single-purpose server farm then yeah, of course, it stands out like a sore thumb.
What about Tor hidden websites? Or would the fact that the URL is a public key (essentially) prevent deterministic randomness without embedding the private key?
Domain-generation algorithms work along the lines of: seed a random number generator with the current date. Spit out a handful of domain names, and try contacting each of those a couple times each day. This way, the malware isn't tied down to a specific domain or set of domains; the bot herder just has to register any one of the domains in the current day's set, and if they lose the domain, they can repeat the process the next day.
The downside is that all the bots will generate a lot of DNS traffic checking for those domains. That's what's being detected here -- the NXDOMAIN responses don't carry any desired data, but the malware can't avoid generating them while it's looking for its owner.
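That generate-and-probe loop can be sketched like this (the hash-based generator, seed, label length, and TLD are all made up for illustration; real malware families each use their own scheme):

```python
import hashlib
from datetime import date

def dga_domains(day: date, count: int = 10,
                seed: bytes = b"example-seed") -> list:
    """Generate the day's candidate rendezvous domains. Both the bot
    and the bot herder run this with the same seed and date, so they
    independently arrive at the same list."""
    domains = []
    state = seed + day.isoformat().encode()
    for _ in range(count):
        # Chain a hash to get a deterministic stream of pseudo-random bytes.
        state = hashlib.sha256(state).digest()
        # Map the first 12 bytes to lowercase letters for the label.
        label = ''.join(chr(ord('a') + b % 26) for b in state[:12])
        domains.append(label + '.com')
    return domains
```

The herder registers any one of the day's domains; the bots try to resolve each in turn, and every miss is one of the NXDOMAIN responses being detected here.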
Network security monitoring and encryption have always been at odds. One day, due to encryption, all that will be left to monitor will be metadata (to, from, date, time, src, dst, bytes, duration, etc.)
Deeper heuristics on the deviations in that telemetry from normal could also have provided a signal that something was off -- like the temporary file replacement activity.
Without telemetry that kind of activity could likely continue undetected for longer.
It seems you missed an opportunity (BTW, is that what they call "content marketing"?).
Had you left your VM running a little while longer, you might've noticed something else in your Netflow data: yet another service calling home to Canonical twice a day.
I wish I were joking about this but... the (likely) "primary" reason for this is so that they can show ads to users when they log in -- ads for things like TV shows or software for other operating systems or, sometimes, just a "fun fact".
I can't really say it's the only reason, however. With all of the system information they're getting off of each host, there's now several different things they can use it for and, of course, that provides them with even more motivation to keep doing it!
This isn't a new thing, though. In fact, people have been expressing their "unhappiness" about it to Canonical for a few years now [0,1,2].
Unfortunately:
- it's still installed by default
- it's still enabled by default
- they've actually increased the amount of system info that they're collecting [3]
- it's installed as part of the "base-files" package -- an "essential" package which cannot be removed!
- "it's a 'feature', not a bug" (so we absolutely should not expect it to get removed)
- apparently, a 42 line MOTD [4] is perfectly acceptable
I suppose there's one "positive" thing I can say about it: at least they're using TLS to encrypt the data over the wire. :/
--
If you want to disable this spyware on your Ubuntu hosts, change "ENABLED" to "0" in /etc/default/motd-news:
$ sudo sed -i -e '/^ENABLED=1$/s/1/0/' /etc/default/motd-news
When executed, /etc/update-motd.d/50-motd-news checks if "ENABLED" is set to "1". If not, it exits without calling home.
50-motd-news gets launched by motd-news.service which itself is activated -- twice per day -- by motd-news.timer. Since I like to be thorough, I recommend running a few more commands (as it's impossible to know that Canonical won't change any of this in the future!):
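The poster's exact commands aren't reproduced here, but given the unit names described above, masking the systemd timer and service would achieve that thoroughness on current Ubuntu releases (verify the unit names on your own host first):

```shell
# Stop the timer and prevent it from firing again, even if a future
# package update tries to re-enable it ("mask" points the units at
# /dev/null, so they cannot be started at all).
sudo systemctl stop motd-news.timer
sudo systemctl disable motd-news.timer
sudo systemctl mask motd-news.timer motd-news.service
```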
That should do it. If you'd rather not have to worry about something like this in the future, you may want to consider replacing Ubuntu Linux with Debian GNU/Linux.
Thanks for the thorough post! I'm currently switching to debian but I have a few long-running cloud VMs still on Ubuntu. I'm finally gonna neuter the MOTD ads.
> An intruder using administrative permissions acquired through an on-premises compromise to gain access to an organization’s trusted SAML token-signing certificate. This enables them to forge SAML tokens that impersonate any of the organization’s existing users and accounts, including highly privileged accounts.
Your quote seems to be describing what Microsoft observed from the attackers on the compromised machines with SolarWinds Orion not how the software was compromised in the first place.
> In actions observed at the Microsoft cloud, attackers have either gained administrative access using compromised privileged account credentials (e.g. stolen passwords) or by forging SAML tokens using compromised SAML token signing certificates.
> Although we do not know how the backdoor code made it into the library, from the recent campaigns, research indicates that the attackers might have compromised internal build or distribution systems of SolarWinds, embedding backdoor code into a legitimate SolarWinds library with the file name SolarWinds.Orion.Core.BusinessLayer.dll.
> the attackers might have compromised internal build or distribution systems of SolarWinds
It should probably become a requirement (for both open and closed source software) that any updates be not just signed but have their hashes available in a Binary Transparency log[0].
When you first install a piece of software, you might need to calculate the hash locally and manually search for it in a log's web interface, but after that, its software-update routine should check that the new version it is downloading has had its hash published in a known place. That way, software publishers can check an append-only independently-run log to see what has been signed with their keys.
I suppose there is a risk that an attacker could prevent users from receiving security updates by DoS'ing the transparency logs, but that should be harder than just DoS'ing the servers that host the software updates themselves. Large organisations could also maintain mirrors of these logs on their internal networks, which would help with privacy/latency/availability, and the logs should ideally be available as Tor hidden services too.
For non-critical updates, the log checking routine should require that the update's hash had been in the log for a certain period of time, long enough for the software publisher to notice and raise the alarm to their users. Updates marked as critical should default to stopping the software from running until the necessary period had elapsed, for which the workaround would be a fresh install of the newer version by whomever has the admin privileges to do that.
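A minimal sketch of that client-side check (the `log_lookup` interface and the holding period are assumptions for illustration; no real transparency-log API is implied):

```python
import hashlib
import time

def verify_update(update_bytes: bytes,
                  log_lookup,  # callable: hex digest -> publication unix time, or None
                  min_age_seconds: int = 3 * 24 * 3600) -> bool:
    """Accept an update only if its hash appears in the transparency
    log AND has been published long enough for the vendor to notice
    an unauthorized release and raise the alarm."""
    digest = hashlib.sha256(update_bytes).hexdigest()
    published_at = log_lookup(digest)
    if published_at is None:
        return False  # never logged: refuse to install
    return (time.time() - published_at) >= min_age_seconds
```

The interesting property is that the vendor can watch the append-only log for hashes they never released: a backdoored build either shows up there (and gets caught) or fails this check on every client.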
I’m not in favour of having public client lists, especially when you’re a critical software vendor — but this list is just terrifying. There are a lot of big names there, and I won’t be surprised to hear of more incidents in the coming days.
“More than 425 of the US Fortune 500
All ten of the top ten US telecommunications companies
All five branches of the US Military
The US Pentagon, State Department, NASA, NSA, Postal Service, NOAA, Department of Justice, and the Office of the President of the United States
All five of the top five US accounting firms”
What’s the opposite of security through obscurity?
Any fortune 500 company that's been around for more than a decade probably has one of every enterprise software product running somewhere. When I worked at a big bank, when we acquired any company, large or small, the software stack they used usually just got bottled up where they were, and the client list on the vendor's website just got updated to the new company name.
I mean that company list has "smith barney" which doesn't exist anymore.
SolarWinds’ comprehensive products and services are used by more than 300,000 customers worldwide, including military, Fortune 500 companies, government agencies, and education institutions. Our customer list includes:
- More than 425 of the US Fortune 500
- All ten of the top ten US telecommunications companies
- All five branches of the US Military
- The US Pentagon, State Department, NASA, NSA, Postal Service, NOAA, Department of Justice, and the Office of the President of the United States
- All five of the top five US accounting firms
- Hundreds of universities and colleges worldwide
Partial customer listing:
Acxiom
Ameritrade
AT&T
Bellsouth Telecommunications
Best Western Intl.
Blue Cross Blue Shield
Booz Allen Hamilton
Boston Consulting
Cable & Wireless
Cablecom Media AG
Cablevision
CBS
Charter Communications
Cisco
CitiFinancial
City of Nashville
City of Tampa
Clemson University
Comcast Cable
Credit Suisse
Dow Chemical
EMC Corporation
Ericsson
Ernst and Young
Faurecia
Federal Express
Federal Reserve Bank
Fibercloud
Fiserv
Ford Motor Company
Foundstone
Gartner
Gates Foundation
General Dynamics
Gillette Deutschland GmbH
GTE
H&R Block
Harvard University
Hertz Corporation
ING Direct
IntelSat
J.D. Byrider
Johns Hopkins University
Kennedy Space Center
Kodak
Korea Telecom
Leggett and Platt
Level 3 Communications
Liz Claiborne
Lockheed Martin
Lucent
MasterCard
McDonald’s Restaurants
Microsoft
National Park Service
NCR
NEC
Nestle
New York Power Authority
New York Times
Nielsen Media Research
Nortel
Perot Systems Japan
Phillips Petroleum
Pricewaterhouse Coopers
Procter & Gamble
Sabre
Saks
San Francisco Intl. Airport
Siemens
Smart City Networks
Smith Barney
Smithsonian Institute
Sparkasse Hagen
Sprint
St. John’s University
Staples
Subaru
Supervalu
Swisscom AG
Symantec
Telecom Italia
Telenor
Texaco
The CDC
The Economist
Time Warner Cable
U.S. Air Force
University of Alaska
University of Kansas
University of Oklahoma
US Dept. Of Defense
US Postal Service
US Secret Service
Visa USA
Volvo
Williams Communications
Yahoo
For those at least you don’t have to install SolarWinds code on your server to use them. They’re endpoints for syslog. As long as your logs don’t contain secrets (they shouldn’t) then it’s not great but not terrible.
Well, I don't see a real practical reason for keeping it secret.
If you look at the operating model of threat actors, even with the current hack, they have their targets, and no one is going to say "hey, they have SolarWinds, let's hack them." Threat actors have their budget, limited time, and goals. They could also find this information by other OSINT means. Even with it on that page, they still need to do their research.
Even if SolarWinds didn't have a list on their page, they are so big that you can count them as an interesting target anyway. It is the same with Google and MSFT: you can safely assume that if you hack them, some of your targets will use some tools from those companies.
I mean, security by obscurity is fine, but I don't see what kind of value it would bring in this scenario.
> Well I don't see real practical reason for keeping it secret.
Generally, you have to get a company's permission to use its name or logo as an endorsement. That agreement has stipulations, such as being revoked if the association could bring disrepute or reputational harm to the endorser.
I'm sure none of the companies on that list want their investors calling IR to ask whether this event is a material issue for the company.
I'm not a security person, but my first thought is that you're not trying to avoid "hey they have solar winds let's hack them," but rather "Hey, I want to attack Large Co., and a quick Google search says they run software from these 14 companies, so compromising any of those might get me in."
General reminder that your funny 404 page becomes instantly unfunny the second your tech department both publicly and catastrophically shits the bed: https://i.imgur.com/kNbScVH.png
If you see the range of offerings, it makes more sense, and doesn't sound as scary (no more than if you saw a list of Microsoft customers, for example).
What is SolarWinds, and why are all these organizations using it?
Clearly, it’s a giant single point of failure, but other than that, I’ve never heard of them.
Their marketing says they do network monitoring, etc. Do they have a legitimate product, or is this just another case of enterprise checkbox security theater gone awry?
Solarwinds is systems management that runs the gamut (router config management, network monitoring, systems monitoring, logging, etc.) for GUI ninjas (read: people who use Microsoft way too much) and execs who listen to salespeople (and some gov orgs). I've used it plenty in the past, not all their products but many/most, and it's actually not that bad on the surface, but dig deeper and you begin to see the flaws quickly. I almost always advocate against it if I ever see a proposal for even a single one of their tools pop up.
The thing it does for many orgs is become a "one stop shop" for the array of products a "modern" IT stack needs... and if you thought splunk was expensive...
I think there are probably scales SolarWinds products work well at and then places that they do not.
I have definitely found some deficiencies in SolarWinds products I've used that feel like they should've fixed long ago. But their products are also leaps and bounds better than tools I worked with prior.
The most alarming bit is the "our software needs to be exempt from antivirus scanning and group privs" ... so people probably just run this thing as root the whole time
Almost every install document I've ever read says disable antivirus and firewall. I actively disregard every such instruction and rarely have a problem.
If your antivirus product is any good, it will work fine with legitimate software. That instruction is there solely as a support disclaimer: "I can't guarantee anything if anything else is running on your system."
In short, it's infrastructure monitoring software. Stuff like CPU, Network, Memory utilization, is this process running, etc.
It's used to set alarms that will go off if "process XYZ is not running on server 123" or "CPU Utilization is over 95% for 15 minutes on server 456" that kind of thing, as well as dashboards.
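That "over a limit for N minutes" rule is just a sliding window over periodic samples. A toy version, assuming one sample per minute (purely illustrative, not how any SolarWinds product is actually implemented):

```python
from collections import deque

class ThresholdAlarm:
    """Fires when every sample in a sliding window breaches a limit,
    e.g. 'CPU over 95% for 15 minutes' with one sample per minute."""
    def __init__(self, limit: float, window: int):
        self.limit = limit
        self.samples = deque(maxlen=window)  # old samples fall off

    def feed(self, value: float) -> bool:
        self.samples.append(value)
        full = len(self.samples) == self.samples.maxlen
        # Only alarm once we have a full window of breaching samples.
        return full and all(v > self.limit for v in self.samples)
```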
The special thing about Solarwinds is that it's agentless, meaning you don't have to install an agent on the boxes you want to monitor, I think it uses ICMP to ping the instances you're monitoring.
It's terrible software (compared to say, Datadog) and I've been saying I want to short it for a year. Obviously I should have put my money where my mouth was.
Very common in the public sector. Think of it as just a fancy graphite/splunk that deals mostly in SNMP and creates reports that fit 99% of the network/sysadmin needs for gov/compliance.
The tool that was compromised is a SolarWinds product, Orion. Orion is essentially a network configuration manager. It allows you to collect data about your network switches, including pushing and pulling configs. You can back them up and diff the configs as well. It is not inherently a checkbox product, but it could be. In many cases this could be a really useful product.
You have a lot of answers, but to shed some more light on it, they're also a massive MSP software provider. I'm in a small city in Canada and we have a half dozen IT shops here that are shifting from traditional IT (walk-in, break-fix, running cable, selling hardware, whatever) to a Solarwinds powered business model. The thought process is scalability. What that works out to is millions and millions of endpoints all around the world with Solarwinds agents running on them.
It's one of the bigger names in the network monitoring space. Most network and systems people that work for a big enough company will have at least heard of them.
They own Loggly, Papertrail, and Pingdom, amongst other products/companies. Most of their products are around software and server monitoring, in some fashion.
Network monitoring is their biggest product. Compared to other solutions they do have just about the best product if you need a web GUI to point to and say this is whats going on right now.
I'm sure 100% of small ISPs use them, as well as anyone that runs a decent-sized network, e.g. schools and universities.
You got your answer from those many replies informing you that SolarWinds is OK because they have a big, well-known brand.
Nobody's ever been owned^H fired for choosing Solarwind, I guess.
Really impressive! It'll be interesting to see how the malware got signed by SolarWinds...
I wonder what the cost of an attack like this is. It doesn't feel impossible for a small group (< 10) of very smart and motivated people to maybe achieve this in 12-18 months of work?
Of course they could be working on behalf of some nation state, but compared to, let's say, an ICBM attack, this is probably not out of the realm of non-nation-state actors to pull off.
An entrepreneurial crime syndicate would probably have the resources to do this (or outsource to some mercenary attackers).
They used an on-premises compromise to gain access to an organization’s trusted SAML token-signing certificate. This enables them to forge SAML tokens that impersonate any of the organization’s existing users and accounts, including highly privileged accounts
I think that just means they hacked a machine running in the offices, not that they were literally physically on-premises. There's a rather large opaque black hole in the middle of this assertion: how are the attackers obtaining administrator credentials to the SSO systems? There seems to be a pretty big leap here from "get code inside the firewall via Orion" and "oops, now your SSO ticket private key is pwned". They say administrator credential compromise, OK, that's a pretty general class of techniques. The details would be good because otherwise there seems to be a missing link here.
Right. When the implication is spies, lockpicking, laser limbo.... The more likely scenarios are a CFO that thinks they are also a CTO, a sr tech that was given admin credentials to fix a problem that kept coming up for some reason, or some inconsequential service bot account was given admin privileges in development and never fixed later.
Considering someone at SolarWinds uploaded FTP credentials to their public Github and left it exposed for almost 2 years, I could definitely see that happening.
Most of the SolarWinds tools run on Windows only. Some of the obfuscation they use reminds me of how complicated PSD is to parse.
Finding that you've been infected with this malware should trigger a critical response. I wouldn't even feel safe until I've nuked all the servers and rolled as many credentials as I could. And even then who knows how much data the attacker could have already exfiltrated. Frankly this level of sophistication is terrifying because you really never know for sure how deep they managed to get in.
Sorry for the aside, but I really find it truly mind-blowing that such a major, publicly traded company with products that cost hundreds of thousands of dollars a year to use can be a "Windows exclusive" in the realm of network security/technology infrastructure.
Basically any company that isn't a software-first company is using Windows. It's clearly not a hard rule, but it's definitely a trend.
Healthcare, Insurance, Automotive/general manufacturing, Professional services, logistics, etc, etc are Windows primary orgs. For many of these companies, internal software development is a relatively recent trend (compared to their company history) so they have IT departments that hold everybody on Windows.
I hear this, but these are mostly static companies that don’t grow and don’t release new products. Microsoft has this problem where they really aren’t part of the future or new ventures.
Why is that so mind blowing? Microsoft has a gigantic market share in basically every vertical. There are tons of software companies who make Windows-only products. The world is bigger than (and in fact is largely not) FAANG and startups.
> Microsoft has a gigantic market share in basically every vertical
What's a "vertical" in this context? The article uses it as well, but also doesn't explain. This is the first time I've heard the word used this way, and it's a tough one to search for.
But here it's actually overspecified. Parent probably wanted to simply say that Microsoft has a gigantic market share in basically every industry sector.
Yeah, I suppose that's true. It just seems like a company that focuses on another company's products seems... unpoetic. Not sure how else to phrase it. ¯\_(ツ)_/¯
But that's literally the entire software industry.
A large chunk of the people making iPhone apps are basically tied into the feature set Apple gives you, which you leverage to build a product on top of. Same with Android.
> Yeah, I suppose that's true. It just seems like a company that focuses on another company's products seems... unpoetic. Not sure how else to phrase it. ¯\_(ツ)_/¯
The word you're looking for is probably "prosaic".
Realize, it doesn't matter. They could just as easily have made their attack run on Linux, with nobody the wiser, once the infrastructure and the build process on top of it were compromised.
Instead there's clearly a large market for them to be comfortable with being Windows only.
Damn, this seems extremely sophisticated. Someone or a team went through a ton of thought and work on this.
It would be interesting to know how it was originally detected/discovered.
The knowledge of how it works and the level of detail shared in this particular case is also interesting. It’s almost like they are burning this one on purpose.
I’d also love to know how long it took to plan. Was this a years in the making kind of thing? Or just months? Was there a perfect storm that presented the opportunity and so they seized it or was this a concerted and determined effort to infiltrate some specific agency no matter what it took and this happened to be the weak point they could exploit?
It sounds like this is how FireEye were compromised last week. Though I don’t see that referenced in their write up, the WSJ article seems to imply that’s the case, and that this has been weaponized against the federal government.
SolarWinds has a major presence in the "MSP" market with their "n-Able" product. Lots and lots of PCs and servers have the n-Able agent installed. I am aware of MSPs that use their Customer-facing n-Able installations for the management of their internal systems. Credential harvesting potential alone would be massive.
If somebody can backdoor one of their products it would stand to reason they can backdoor others too.
Had a chance to look at an n-Able / n-Central installation this morning. Turns out that the "Probe" component (installed on machines in remote networks to act as a centralized data collector within that network) does use the affected Orion DLL, but the version appears to be older than the malicious version.
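For anyone doing the same check, a minimal triage sketch: walk an install tree, find every copy of the affected DLL, and print its digest for manual comparison against FireEye's published IOC hashes. This is an illustrative helper, not FireEye tooling; it flags nothing on its own, and the root path you scan depends on your install.

```python
import hashlib
import os

# Name of the trojanized component, per FireEye's write-up.
TARGET_DLL = "solarwinds.orion.core.businesslayer.dll"

def find_candidates(root: str):
    """Walk a directory tree and yield (path, sha256 hex digest) for every
    copy of the target DLL. Digests are meant to be compared by hand against
    published IOC hashes; nothing is flagged automatically here."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() == TARGET_DLL:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                yield path, digest
```

Note that version metadata alone (as in the n-Central case above) isn't conclusive either way; hashing and comparing against the published indicators is the safer check.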
If this hadn't been spotted when it was, I wonder whether the affected component would have ended up being bundled into n-Able and shipped that way too.
The question I have is how does something like this end up undetected in the officially signed update? I would imagine internally there would be a source control system with auditable commits, and the commit hash would be signed with the executable — is that not standard practice?
While it’s absolutely not a requirement, my first guess would be an inside job. They hire for locations all over the world, and once you’re in it would be much easier to spend your time finding and/or creating the scenario for this functionality to get into the build without detection.
No way. So far there have been zero APT attacks traced back to inside jobs. That's not how modern spy agencies work anymore.
Code signing hardly means much. All you have to do is compromise CI to make an organisation sign a binary that isn't based on reviewed code.
How hard is that? I think for the vast majority of us we have to admit our CI would be very easy to compromise. Especially if you can get administrator credentials as easily as these guys apparently can. But even without that, CI executes and runs arbitrary code that's intended to only come from trusted individuals.
Step 1. Find a third party package or library that is being downloaded by a package manager onto a CI box.
Step 2. Compromise the open source dev's laptop by phishing them or something else really basic.
Step 3. Put a back door into that package.
Step 4. Wait for it to be downloaded to and executed on SolarWinds CI.
Step 5. Insert a rootkit into CI.
Now that rootkit can just systematically insert the final back door into any build of that file it sees using link time editing. Every build will have the back door and the owners will eventually sign the binary and push it out to production.
In many orgs CI can actually request signing of any binary it wants, so you don't even have to do that. You can just hack the CI machine, grab the credential used to authenticate to the HSM, sign your preferred binary and then ensure users are downloading it.
> So far there have been zero APT attacks traced back to inside jobs.
That means we're not finding them, not that they don't exist.
Considering how easy it is to create plausible external attack vectors from the inside, I would be surprised if it didn't exist.
As you said, intelligence agencies have had people on the inside of important organisations since forever. That's what they do. It's not like everyone decided inside access is useless, it's more likely it has become easier to not burn insiders by them having to work hands on.
And so perhaps the question that should be keeping the rest of us up at night this week is this:
What if this exploit came through a compromised third-party dependency (e.g. npm or PyPI or deb or rpm or a Docker image) that made its way into the SolarWinds CI system?
The attack you describe is plausible but very visible (your backdoor code will be visible on the package manager and generally not easy to take down). I doubt it's the kind of thing that is likely to be used as part of this kind of attack, which wants to remain invisible for as long as possible.
You just have to make it look like a plausible error. There are enough ways to construct accidental RCE bugs in most languages that it's not an issue. And let's face it, how many people are reverse engineering compiled binaries on repos to look for back doors? It doesn't matter if it gets detected a year later when people go looking, as by then it's achieved its purpose already.
Nope. Theoretically it should be happening, but it rarely does. In fact, given the sophistication of this particular adversary, they would have just compromised the build server (and re-signed the binary), and I doubt anyone goes to the length of verifying the build server's output against some reference build.
Maybe we should be, in response to knowing the viability and use of CI hacks?
Ensure you have a reproducible build, then randomly build on a different machine and compare file signatures of the results. Do it every so often with a “clean room” machine. Probably no need to run parallel infra that’s just as likely to be hacked.
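The comparison step above can be sketched in a few lines. This assumes the build is fully reproducible (bit-identical output from both machines), which is the hard part in practice; the file paths are hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(official_artifact: str, cleanroom_artifact: str) -> bool:
    """Compare the official CI output against an independent rebuild.

    Any mismatch means either the build isn't actually reproducible
    or one of the two build environments is compromised; both are
    worth investigating."""
    return sha256_of(official_artifact) == sha256_of(cleanroom_artifact)
```

The value of doing this only occasionally, from a machine that isn't part of the regular infrastructure, is exactly as the comment says: an attacker who has rooted CI can't predict which builds will be independently verified.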
Maybe I'm being dense, but the article left me confused regarding how the "sunburst" backdoor made it into a piece of software signed by the vendor. Can anyone explain?
> SolarWinds.Orion.Core.BusinessLayer.dll is signed by SolarWinds, using the certificate with serial number 0f:e9:73:75:20:22:a6:06:ad:f2:a3:6e:34:5d:c0:ed. The file was signed on March 24, 2020.
The “Delivery and Installation” section covers this. It’s a very short section, the subtext of which is that there’s basically no defense for malware delivered with a valid signature from a trusted vendor.
It’ll be pretty interesting to find out what happened at SolarWinds in the coming days: whether this malware was smuggled into the update via employee collusion with attackers or a hack of SolarWinds itself.
Thanks. I had read that, but I figured I must be missing something. I assumed that if the vendor was genuinely signing malware, that would be headline of the story.
It strongly implies that the vendor was thoroughly compromised, in order to insert backdoors into their software (possibly amongst other attacker actions).
Yeah, seems like a buried lede. SolarWinds was owned pretty badly, and the attack that led to that isn't described. Once they had access, they had free rein to send malware to any one of their clients masquerading as routine patches. Sounds like they went the extra mile to deliver an extremely subtle exploit to avoid detection.
Are you the guy doing youtube vids on submarine stuff? I like those, but it's unexpected to find you here, snarking over people's spelling. If you're not the guy, I recommend looking him up if you're interested in this sort of thing, it's a nice way to spend time online. :)
are you the righteous indignation police? sometimes a simple “i found it amusing/ironic/delightful/serendipitous “ comment is just that, you know. i’m not sure why you think my comment was snarky, or holier than thou, or worthy of a criticism.
the sub guy was amusing. as my own comment was intended. i’m honored by the comparison.
Exactly my thoughts and confusion. It was a validly signed component of the SolarWinds application... so how did the attackers manage to sign it and publish it as a legitimate update? I used to work at a bank and any software deployment had multiple automatic and manual checks. Multiple senior engineers and managers had to approve the publish.
Was someone being blackmailed? Or a malicious actor managed to gain employment?
Solarwinds is not your average software company. Look at their customer list. They would have had ISO certifications and independent 3rd party audits done a billion times. Those type of companies don't even speak to you if you don't have those credentials.
You think you are supplying software to all the armed forces branches and "forgot" to update a Jenkins plugin?
Yes, absolutely. Oh, the amount of outdated software that your average corp runs is mind-blowing. Not only that, but if you have a tight IT department, the amount of shadow IT that happens because of overly onerous processes put in place by IT will leave so much infrastructure that is critical but not managed correctly.
As a security professional, getting people to upgrade simple software is difficult enough, upgrading critical infrastructure used 24/7 by the development team... forget about it.
No, just hacked. Code signing keys get abused all the time. It's why Microsoft now insist they're held in hardware devices whereas they used to allow them to be free-standing files, but it hardly helps.
If you think about it for even a moment, you'll see that code signing is only meaningful with a completely locked-down software supply chain, including controls that trace through developer laptops and the third-party open source code that's pulled into your application. The typical app developer combines components from a huge number of sources of unknown reputability and security strength, which are then all executed on laptops that have permission to push arbitrary jobs to CI clusters.
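One minimal control against the dependency half of that problem is pinning third-party artifacts by digest before they ever reach the build, in the spirit of pip's hash-checking mode (`--require-hashes`). A hedged sketch; the package name and pin below are made up (the pin is just the SHA-256 of a placeholder payload):

```python
import hashlib

# Hypothetical pin file: downloaded-archive name -> expected SHA-256.
# In a real setup these pins live in version control and change only
# through reviewed commits.
PINNED_HASHES = {
    "example-lib-1.2.3.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_download(filename: str, data: bytes) -> None:
    """Refuse to use a dependency whose digest doesn't match its pin."""
    expected = PINNED_HASHES.get(filename)
    if expected is None:
        raise RuntimeError(f"no pinned hash for {filename}; refusing to use it")
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected:
        raise RuntimeError(f"hash mismatch for {filename}: got {actual}")
```

This doesn't help if the backdoor was already present when the pin was recorded, and it does nothing about a compromised build server re-signing its own output; it only stops a pinned artifact from being silently swapped later.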
Occam's razor says that Solarwinds just... signed it. Just like Juniper shipped breakable VPNs and RSA shipped a bad random number generator. No need to look too far for why.
This looks like the real nightmare scenario for a supply chain attack. How long has SolarWinds been breached? Were the attackers after a small number of US government targets plus FireEye, or will we (more likely) discover extensive breaches at other companies worldwide as a result of this disclosure? Interesting times for those working in security.
Signed malware always indicates a very high quality of craftsmanship, but it looks like SUNBURST takes it to another level. The larger story here is how SolarWinds was just the stepping stone. The actual pivot was/is the entire USA.
I know why orgs install central management systems (ease of maintenance at scale). But when one system can be used to compromise all the nodes in an org, is it really worth it? Diversity is good for security.
I understand that Solarwinds is a reputable company with all the right security auditing and compliance credentials. That's why they are used by US federal and state government agencies.
Perhaps the criteria (MBA/Audit/Management driven security) that governments use to judge vendors is useless against real world attacks? If I wanted or needed to be compliant, I would buy that software (like all the other gov agencies), but if I wanted to be secure, I would not.
> I know why orgs install central management systems (ease of maintenance at scale). But when one system can be used to compromise all the nodes in an org, is it really worth it?
If you think about it, the modern cloud-native approaches are logically aiming for the same. You have a centralised management system (the cloud provider's engine), with programmatic configuration (infrastructure-as-code and indeed everything-as-code). We even call these modern best practices. Just instead of a webgui management console, we have git repos, and in place of two-person change protocol we have mandatory code reviews enforced by tooling.
A single point of failure is one side of the coin. The side you're not looking at is single point of control.
Quick question -- If you can't certify what exactly your computers have been doing, can you actually say that you are secure? It seems like not using NMS is just fast tracking yourself to become one of those orgs that gets owned for months and never even knows about it.
Given that there’s evidence of the attacker replacing their tools after gaining credentialed access and how long the supply chain has been in place, it’s going to be difficult to rule out compromise if you’re a SolarWinds customer. Logs that would record relevant activity of the trojan likely won’t go back all that long.
I wonder if this will be a "Reflections on Trusting Trust" situation; it's easy to spot malicious code in source control, but if your build machine was compromised to insert malicious code at compile time that would be easy to overlook.
Get ready for a bunch of new accusations about the election. Dominion Voting systems uses SolarWinds according to their login pages, while the hack was still unknown. I shudder to think of the crazy conspiracy theories that are coming from a certain someone on twitter now... smh.
Why would it be crazy for a state level black hat actor to want to hack votes in the USA’s elections? I’m not saying that happened, but it’s not prima facie crazy. In fact I recall hearing that Russia somehow hacked the election for the past 4 years. Motive and now means is there. Creating opportunity is within a state actor’s capabilities too.
But I mean, taking a hack of SolarWinds and FireEye that happened before the election and just assuming a correlation, knowing that all these counties and election-system suppliers use SolarWinds, when it can only tentatively be attributed to Russia, should apparently be treated the same as the Russian disinformation campaign carried out through Facebook ads in 2016.
I mean, one would be election shattering but there is no proof, and the other we are pretty sure happened even though nothing came of it at all.
I hope this results in a massive class action lawsuit against SolarWinds. Companies need to be held accountable for their security practices. The fact that the binary was signed and distributed through their official channels undetected for this long seems like malfeasance.
The other top post on HN right now is full of commenters making fun of the attribution to “sophisticated” and “with resources of a nation-state.” Oops.
As for the write-up itself, I honestly can’t believe they published this much detail. The feds and FireEye must be very concerned that folks are exposed to this right now and those folks don’t know it.
This is a common take among the uninitiated and certain individuals in the "security community". Lots of people only witness bots breaking into sandboxes and installing bitcoin miners and then think they've seen the entirety of computer security.
Targeted, sophisticated, nation-state threats are real and have been documented in the FLAMEs, DUQUs, and DARKHOTELs of the world. Supply-chain attacks are not new either: Juniper was compromised four years ago with a backdoor in their firmware, NotPetya spread through a compromised legitimate accounting software company, and ransomware has been delivered through MSPs for quite some time.
Professionals look at indicators posted and techniques leveraged and come to their own conclusions.
This is not standard malware. This was not a script kiddie.
I'd prefer if we would call these things aggressive, rather then sophisticated. What we are witnessing is that adversaries are upleveling and they dont even care if they get caught or not anymore. It's pretty much like warfare in the open field soon.
High profile security vendors and national security officials have a real history of describing script kiddie jobs using months or years-old public exploits as "sophisticated" or "nation-state level", one instance that turned out to be a real supply chain attack that actually requires serious resources...doesn't really change the overall picture. In the Boy Who Cried Wolf fable, we don't fault the villagers for failing to expect that there would be a wolf this time.
Can you cite specific examples of national security officials misattributing a script kiddie as nation-state level sophistication? This is a common take which I've never seen substantiated anywhere. In comparison with hidden cobra, olympic destroyer, or the canonical stuxnet which were all clearly not script kiddies.
I don't have a more recent example, but this article[0] identifies a component of the RSA SecurID attack which utilized Poison Ivy. In my own experience experimenting with PI as an underage teen in high school, it was a very popular and very lethal trojan, and by most definitions its users were "script kiddies".
The old school concept of a script kiddie was someone who had a limited skill set that consisted of downloading exploits and trying everything to see what worked. Traditionally, a script kiddie didn't develop any of their own exploits.
Nowadays these people are known as Network Security Consultants and they are paid very well.
I think it comes from the same place as “I could build that in a weekend”: people who know what most of the words mean but aren’t experienced enough to understand the difference between knowing in theory how something could work and successfully running a production-grade implementation.
Security has a certain cachet which makes people want to sound like experts and one way to do that is to minimize others’ accomplishments, implicitly saying that they aren’t challenging to you.
In one of the threads about the Zodiac cypher, which had been unbroken for 51 years despite being given to the NSA, FBI, and the crypto community at large, there were several people who remarked how simple it was and how easy it should have been to crack.
Dunning Kruger doesn’t cover the entirety of what’s happening, but there’s some pathology at work here.
Meanwhile, the Zodiac story also demonstrates you don’t need to be a nation state to craft something sophisticated. Obfuscation and de-obfuscation aren’t on a level playing field. (This comment isn’t specifically about this attack.)
Nihilistic, cynical contrarianism is indistinguishable from intelligence unless you know a subject.
Example: "Continents move around", "time slows if you're fast", and "turtles, all the way down" are all similarly ridiculous unless you've studied these subjects. People see the first two receiving lots of applause[1] and start making claims that to them are just as believable.
[1]: Or they see footnotes being correlated with "serious science" and start adding footnotes.[2]
[2]: For details on how references make people appear smart, see Feynman, 1968: Cargo-Culting. Also scientism
[3]: A heuristic that in our experiments succeeded in identifying 0.65 of the lower 20-quantile of wannabe basement-warriors is to count the occurrences of the terms /threat (model|actor)/, /sigint/, /retcon/, /psyops/, /opsec/, and the phrase /everyone does it/. Exclude any >= 1.
This has several factors that lead people to make such comments:
Tech nerds like to describe things as boring, obvious, and easy to do; it makes them feel smart.
Certain politics lead people to assume that government is incompetent and therefore obviously it was an unsophisticated attack that breached them.
A lot of IT workers think every instance of a hack is an example of underfunding the IT department, or the IT department wastes the budget they do have.
>>Certain politics lead people to assume that government is incompetent
I would say history leads people to assume that government is incompetent. We have plenty of evidence on which to form this conclusion.
>A lot of IT workers think every instance of a hack is an example of underfunding the IT department,
Again, this is also drawn from a place of experience. Many of us have seen firsthand organizations refusing to put in the money needed to properly secure systems until AFTER there is a compromise, and then the money is only allocated for a few months to resolve the exact compromise that impacted the organization, never really changing the security posture of the company/organization.
This pattern is repeated over and over and over again
Honestly HN seems to have a very contrarian streak on these matters. I wouldn't necessarily say it's partisan but the prevailing opinion seems to be whatever is against the mainstream view (Examples include Election Fraud, Assange, Litvinenko etc.)
> making fun of the attribution to “sophisticated” and “with resources of a nation-state.” Oops.
This annoys me.
1) I'm equally breached whether the hacker used a sophisticated attack or not.
2) A "nation-state" attacker is more than willing to use "unsophisticated" attacks to accomplish their goals.
In fact, a "nation-state" attacker is probably less likely to use "sophisticated" attacks as they will want to save those for the cases where they really need to get through security.
That’s a really narrow view. SolarWinds Orion is a widely used, popular tool for managing switches and routers and backing up network device configuration. It’s been compromised for months.
This could be a stuxnet type attack looking for a particular facility, or something far worse at a time when the US is most vulnerable.
I believe it was state sponsored, but that covert channel isn’t an example of sophisticated or of something that suggests nation state resources. There was better tech than that documented publicly over 20 years ago.
Why oops, at least in relation to the parent's comment? The process, while perhaps complicated to detect, is not particularly complicated to design from scratch. The technique as described by the OP's post could probably be designed by a single individual in a day.
Just from the target I'd suspect it is not someone doing it for the lulz, but I didn't see anything in the OP's quote that indicates that it must be a nation state.
This is the danger of these kind of write-ups; people will look at it and go "oh that's simple, anyone could do that". It's a great example of the cognitive bias called the curse of knowledge.
Edit: a quick counterfactual here. If this is so easy and so valuable, why is it so rare?
Because there is so much more than what the OP quoted. I was disagreeing that you could characterize what OP quoted as being indicative of a nation state. There was far more to this hack than that.
> The technique as described by the OP's post could probably be designed by a single individual in a day.
Designed, as in back-of-the-envelope chicken scratching? Sure. You can reduce almost any exploit to "compromise system or org, leverage compromise to go after another system or org" and any stealth measures reduce to something like "hide your own efforts among legitimate signals, and use cutouts to make them harder to trace back".
I'm not sure what you think counts as "sophistication", but thinking up the specifics of multiple stages, and executing each stage without errors that would expose the other stages to detection isn't easy, and takes time and patience by many people with diverse skills working in concert.
Sure, each specific link in the chain may not look difficult, but those were just the solutions that worked in isolation when tried in whatever testing environment(s) the threat-actor operates (not to mention funding the "blue team" that operates it, because you don't want your unsuccessful attempts to tip your hand), and then deployed in sequence.
Or were you under the impression that this whole effort was created de-novo and worked (correctly, might I add) without being detected the very first time each component was tried?
How are they so sure it's Russia?
It always seems like there is another country with much more interest in these companies, given partnerships and Russia's global economic status. It does not pass the sniff test.
Attribution is indeed always the part that makes me doubt entire security-related articles. Each and every time I looked into how such attacks were attributed to this or that country, the evidence, if not the reasoning, was severely lacking at best.
There is so much at stake in these affairs that it is safer to assume that only fabricated stories and counter-fires are told about it all.
The part you’re missing is that they don’t tell you how they did the attribution, and there are good reasons for this. You’re assuming that you know what they know.
Speaking with confidence is not enough to be taken seriously on this topic saturated with marketing, politics and mythomaniacs. Especially when a quick Google search is enough to find plenty of such attributions:
List would go on and on, and this is only for Russia.
And yes I'm aware those sources are easily discarded as "non serious enough", as expected from the top results of a search engine I guess. Do your part and provide us with better sources.
No, source IP address country is never the basis for attribution, and contrarian lay people always assume that’s how it works for some reason. It isn’t, at all.
The NSA regularly launches cyber attacks using the signature/SOP of other countries in order to frame them, so attribution is usually useless without evidence.
Unfortunately, evidence is never provided beyond hearsay.
Please don't attack other users this way. The damage it does to the community is not worth the chance that you're right. It's much better to follow the site guidelines, which are designed to sustain an interesting community. In this case that means (a) respond with a better argument and/or (b) if you feel the comment breaks the site guidelines, flag it.
> I’m not in favour of having public client lists, especially when you’re a critical software vendor, but this list is just terrifying. There are a lot of big names there, and I won’t be surprised to hear of more incidents in the coming days.
Software filters are tuned more strictly for new accounts and kill some comments based on past behavior by spammers and/or trolls. We review the comments that have been killed that way (most of them, at least) and unkill the good ones and mark the accounts legit. But it takes us time to do that, which is one reason why user vouches are helpful.
To flag a comment your account needs a minimum score, I'm not sure, but I think it is 50 (or 100 or something).
To flag something you click on the timestamp on top of the comment (typically something like: <x> hours ago). This opens a view of only that comment and replies to it. If you have enough reputation/points/karma, you can now see a flag link on top of the comment. If you click flag it ends up in a moderation queue. If multiple users click flag it might bypass the moderation queue and become flagged immediately without further intervention from mods.