Can someone explain how the money just "disappears"? Do the thieves send it to a bank account overseas and the receiving bank doesn't cooperate and send it back? Wouldn't there be a log detailing exactly where the money was sent if they bounced it through multiple banks? Could someone wire it to a bank, withdraw it as cash, then deposit it somewhere else?
You can simply re-wire money between different jurisdictions. Let's say you send money to the Philippines -> Gibraltar -> Caymans -> Russia -> UK -> Panama -> Switzerland.
Because each country is on a different continent, following different rules and laws, accessing data is extremely hard.
Many countries in different geographical locations won't be happy to give access to banking information.
Before anyone can track the money, it will be deposited into a shell company. With amounts like that, there are often bigger powers that will help make it disappear, like politicians and bank executives.
I personally have been dealing with banks at a high level, and their willingness to ignore the rules and regulators is enormous. That's in the EU; in countries like Gibraltar or the Cayman Islands, I believe it must be pretty easy.
It reminds me of Wolf of Wall Street and the Swiss banker. If I was ever involved in such a situation, I'd expect to call the banker (who's complicit in the laundering) and hear "what money? You have no money with us."
Yes, there is a log, and there are legal compliance requirements about revealing it, except where there aren't. Which is why the hackers wired the money to banks in the Philippines and used it at casinos there, as the casinos were specifically excluded from anti-money-laundering legislation in that country.
Great write up and it certainly sounds like an inside job. I mean, it seems the authors of the malware had an incredible amount of insight into the entire SWIFT platform anyway.
Gaining access to swift.com through either "proper" registration or phishing will get you access to the SWIFT SDK as well as tons of other material.
So no sorry but to me and to anyone else who's even remotely familiar with how shitty SWIFT and usual banking internal security is, it doesn't sound like an inside job, just a job well done.
Did I read it correctly that the malware had the name of the printer embedded in it? The sentence 'The PCL language used specifies the printer model, which is "HP LaserJet 400 M401"' seems to suggest it did.
This strongly points to an inside job, but does not exclusively prove it.
No, it only shows they've done their homework.
A lot of the "security" of banking processes is based on human verification, often via phone, fax, printing, etc., so it's not surprising that they compromised the printing process: the process mandates that the printout be reviewed (and usually signed, then filed with compliance), so an untouched printout would have flagged the transactions immediately.
If you can present an all-clear signal to the bank staff at all the immediate human-readable interfaces, no one would notice; at best, the fraudulent transactions would be detected 30+ days down the line when the banks perform account consolidation.
This is a good post about how money moves around between banks: https://gendal.me/2013/11/24/a-simple-explanation-of-how-mon... It doesn't go into payment/messaging systems (i.e. SWIFT) too much, which is a good thing, but it does explain how banks handle and settle transfers.
SWIFT is a glorified messaging service for banks; that's how it started. Today a lot of "out of the box" applications have been built on it, but at its core SWIFT is just a trusted network that enables its members to securely transfer messages between each other. These messages often end up being used to facilitate transactions, but they aren't what actually moves money around.
I was thinking the same thing, then started wondering if the knowledge could have been gained by watching transactions and investigating code once you had access to the system.
However they gained the knowledge, it is incredible.
SWIFT is a very widely used (global) system, used by many many institutions. I imagine keeping the details of how it works secret is basically impossible.
It's not kept secret; the documentation is available more or less to the "public", and so is the SDK.
You can also register for training. I underwent SWIFT training and I never worked for a bank; I'm not sure how it was arranged, but I doubt it's that strict of a process.
There are hundreds if not thousands of online and offline providers of SWIFT training, from training bank employees to use the software as end users, through IT specialists who deploy the software, to developers who build applications on SWIFT.
To get a swift.com account you need a few details: your name, institution code (BIC code for banks) and some additional information.
SWIFT doesn't handle the authorization; the institution you are registering on behalf of does, which means it's probably even easier to phish/social-engineer yourself in.
Gaining access to swift.com credentials also shouldn't be that hard, especially if you have already compromised the bank's network or any of its employees.
To get access to the SWIFT documentation and other materials, you also don't have to compromise your initial target, which makes it even easier.
Registration for SWIFT's "MyStandards" is completely open to the public, which gives you access to quite a bit of information as well as allowing you to participate in discussion groups, which may open up a few more vectors for social engineering.
tl;dr - the recent SWIFT / Bangladesh heist has been followed from the outside by BAE Systems of all people, and they analyse some of the malware. It looks like little more than what I would expect a (malicious) set of scripts developed by a good in-house IT team to look like as they solve some MIS problem. It's that custom-built.
The main .exe replaces the eponymous two bytes in the SWIFT system so that it no longer bails out when a check fails (presumably a SWIFT authorisation check before accessing the underlying Oracle DB). It's a JNZ instruction in the target application, and even I remember that one.
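For anyone unfamiliar with that kind of patch, here is a minimal sketch. The opcodes are real x86 (0x75 is a short JNZ, 0x90 is NOP), but the instruction stream and offset are invented for illustration; nothing here is taken from the actual SWIFT software.

```python
def patch_jnz_to_nop(image: bytes, offset: int) -> bytes:
    """Overwrite a two-byte short JNZ (0x75, rel8) with two NOPs,
    so the 'check failed' branch is never taken."""
    assert image[offset] == 0x75, "expected a short JNZ at this offset"
    patched = bytearray(image)
    patched[offset:offset + 2] = b"\x90\x90"
    return bytes(patched)

# Hypothetical instruction stream: CMP EAX, 1; JNZ +5; then 5 bytes of filler.
code = bytes([0x83, 0xF8, 0x01, 0x75, 0x05]) + b"\x00" * 5
patched = patch_jnz_to_nop(code, 3)
```

After the patch, execution simply falls through the check, which is exactly the "nag screen removal" trick mentioned further down the thread.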
Then there is code dealing with SQL statements, so it can both delete malicious SWIFT instructions from the local database and inject its own(?), and it even tampers with the local printer to delete confirmation messages (hard copies of each transaction are presumably printed). The actual printer model is, it seems, hard-coded in the attackers' toolkit.
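The database-tampering step boils down to ordinary DELETE statements against the local message store. A toy sqlite3 sketch of that pattern (the real system reportedly sat on Oracle, and the table, column names and references here are all invented):

```python
import sqlite3

# Minimal sketch of the 'hide the evidence' step: deleting local
# records of specific transfers so the bank's own view looks clean.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (ref TEXT, amount REAL, status TEXT)")
conn.executemany("INSERT INTO messages VALUES (?, ?, ?)", [
    ("TXN001", 100.0, "sent"),
    ("TXN002", 951000000.0, "sent"),   # the fraudulent transfer
    ("TXN003", 250.0, "sent"),
])

fraudulent_refs = ["TXN002"]
conn.executemany("DELETE FROM messages WHERE ref = ?",
                 [(r,) for r in fraudulent_refs])

remaining = [row[0] for row in
             conn.execute("SELECT ref FROM messages ORDER BY ref")]
# TXN002 is now gone from the local view, even though the wire went out.
```

The point is that nothing sophisticated is needed once you are inside: a handful of SQL statements against the local store, plus suppressing the printed confirmations, is enough to delay detection.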
This has several lessons. Firstly, if you have something valuable, someone will really work hard to attack you specifically. Second, there is really no excuse anymore not to move every OS over to randomised memory layout, and more; but even so, I am not convinced it would have helped here. The specificity of the attack is incredible.
Lastly, modern software development already seems to be about duct-taping together other people's code and stopping once it "works". The cost of developing secure systems is way beyond the cost of developing "works on my machine" systems, and that cost needs to be raised at a business level as an insurance premium. Then we can make sensible trade-offs. Not sure there is a $951M trade-off, but still.
> Second there is really no excuse anymore not to move every OS over to randomised memory location access, and more. But even so I am not convinced this would help here.
It might make it harder... but in this case couldn't an attacker search the entire address space for the location of the library? ASLR protects against buffer overflows as an attack vector, but here the attacker already has access.
To be honest, the attack space for 2 malicious bytes in a system of phones, routers, servers and applications consisting of, what, trillions of bytes of code is so mind-bogglingly huge that even the experts aren't experts.
At some point we need to go back to secure kernels only a few thousand lines long that dole out permissions and access, making all attack vectors ridiculously harder.
Can we do it? Will Facebook hand over its billions to the project? Will anyone?
> Can we do it? Will Facebook hand over its billions to the project? Will anyone?
At some point the cost of not doing it will exceed the cost of doing it. I'm not sure what it would actually take; shit security already costs the global economy tens of billions a year and people just accept it.
Wait, is this write-up by BAE Systems, as in British Aerospace?
If so, I'm scratching my head, as it appears they do have knowledgeable infosec people, but their security is laughable. Anyone want schematics and parts lists for anything they make? There's an email address you can send a message to that will respond with them. All plain-text, no validation.
The attitude that good security costs trillions and is therefore unattainable is all pervasive.
How valuable are the designs of most military parts, though? If I am a government looking to buy cruise missiles (for example), do I really care if lots of people have the design? What I care about is whether a company will sell to me, how much it will cost to get the number I want made, and whether they can continue to support them and continue to produce them at the rate I want.
Sure, there are some technologies that I wouldn't want leaked even to allies, where significant parts of how they work aren't public knowledge (stealth material/coating composition, maybe?), but it is only those that I would actually care about being secret if I were a government/military buying hardware. If it isn't hard to figure out, I'd rather the knowledge was easy to access so my engineers/technicians can get at it.
BAE Systems is a significant weapons manufacturer. Fixing security 'problems' like the one you are describing isn't that expensive, so if it were bothering their clients, you'd think they would do it. It is more plausible that they just don't care that this information is so easy to access, because it is easier either for them or for their clients.
> Anyone want schematics and parts lists for anything they make? There's an email address you can send a message to that will respond with them. All plain-text, no validation.
This seems like an odd complaint. I can do this with my car and dishwasher, too. Unless BAE makes classified military stuff they're exposing in this manner it seems less like a security hole and more like a useful thing for people maintaining the stuff they manufacture.
This is a write up by BAE Systems Applied Intelligence (and possibly not an official one, not sure), previously known as Detica, who were acquired by BAE a few years back. They have a consultancy side and a security side.
> Anyone want schematics and parts lists for anything they make? There's an email address you can send a message to that will respond with them. All plain-text, no validation.
You serious? Have you never heard of XTS-400 and the SAGE guard? XTS descends from the first secure system to be evaluated, called SCOMP. They've dropped assurance a bit due to market demand for features over assurance, but it still has more security than most OS's. Also, SCOMP had an IO MMU before that phrase was invented and fashionable. :)
Google SCOMP, XTS-400, or SAGE guard to know what kind of INFOSEC goes back to 80's in military before firewalls were invented or security went mainstream.
Mostly they run on IBM mainframes; some run on Windows. The only thing that saves the former is the investment required to learn to hack it, plus the arcane controls/procedures encoded in it. It's why attacks are often inside jobs, at least partly.
Serious question: what OS should it run on? I agree that Windows is a bad choice but other options are not much better as far as security (for this kind of attacks) is concerned... Are they?
The short answer is to only run signed binaries and/or not allow new code at all. How to manage that while maintaining flexibility and minimizing costs ends up getting really complex. But really, there is a long continuum between, say, a personal blog at one end and an ICBM at the other. IMO, the banking system is at the high end of that scale.
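A crude sketch of the "only run signed binaries" idea, using bare SHA-256 digests as a stand-in for real vendor code signatures. The file name and byte strings are invented; a real deployment would verify cryptographic signatures, not hashes it stores itself:

```python
import hashlib

# Execution allowlist: a binary only runs if its digest matches a
# known-good entry recorded when the software was installed.
ALLOWLIST: dict[str, str] = {}

def register(name: str, image: bytes) -> None:
    """Record the known-good digest of a binary at install time."""
    ALLOWLIST[name] = hashlib.sha256(image).hexdigest()

def is_allowed(name: str, image: bytes) -> bool:
    """Check a binary against the allowlist before letting it run."""
    return ALLOWLIST.get(name) == hashlib.sha256(image).hexdigest()

# Hypothetical module, before and after a two-byte patch.
original = b"\x83\xf8\x01\x75\x05" + b"\x00" * 5
register("swift_module.dll", original)

tampered = bytearray(original)
tampered[3:5] = b"\x90\x90"   # the kind of patch described in the article
tampered = bytes(tampered)
```

Even this trivial check would have caught a patched module at load time; the hard part, as the comment says, is managing it across a large estate without breaking legitimate updates.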
This is some real-life Swordfish shit right here. This is not some run-of-the-mill trojan keylogger. Whoever wrote this had plenty of access and time to get the malware right.
Pure speculation, but impressive as this is, it feels just amateurish enough not to be a nation-state. They made some mistakes, like typos in the details of the wire recipient, and also made the attack discoverable the night/morning after, when an employee noticed SWIFT reports weren't printing properly.
I think they were probably just in it for the money.
Ok, fair enough. I could have seen it being either way, but I've mostly only read the big reports about nation-state actors, so I'm probably biased to see it that way.
As a separate note, state actors can be very amateurish. The Chinese did things, as disclosed in Mandiant's APT1 report, like taunting users, using very non-native English phishing messages, and leaving plaintext signatures as a means of bragging.
"Many pieces of the puzzle are still missing though: how the attackers sent the fraudulent transfers; how the malware was implanted; and crucially, who was behind this."
Well, we have a clue for piece #2:
"... sends result to attacker domain over HTTP"
How the hell does that happen in this day and age? You trust any traffic coming out of your network??
A basic blockchain system would not necessarily change the feasibility of this attack, but it would probably make it easier to trace the funds. The crux of this attack is that the transacting and reporting computers were compromised: fraudulent transactions were initiated, and then notifications of those transactions were removed from the reporting machine. With a blockchain system I could see a similar outcome if the transacting and reporting machines were compromised. Of course, with a blockchain system there are many ways you could improve security to decrease the chances of this attack:
1. Multi-signature transactions could require a hacker to compromise multiple machines possibly on separate network segments.
2. Multiple reporting and auditing machines could be employed on several separate networks to again increase intrusion requirements.
I suspect SWIFT already allows for or could employ similar methods on their network to mitigate these types of scenarios as well.
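Point 1 above can be sketched in a few lines. This is a toy threshold-approval check, with HMAC keys standing in for the asymmetric signatures a real multi-signature scheme would use; all names, keys and the threshold are invented for illustration:

```python
import hashlib
import hmac

# Each approver holds a key, ideally on a separate machine or network
# segment; a transfer is only authorised once a threshold of them sign.
APPROVER_KEYS = {"ops": b"key-ops", "treasury": b"key-trs", "audit": b"key-aud"}
THRESHOLD = 2

def sign(approver: str, tx: bytes) -> bytes:
    return hmac.new(APPROVER_KEYS[approver], tx, hashlib.sha256).digest()

def is_authorised(tx: bytes, sigs: dict) -> bool:
    valid = sum(
        1 for who, sig in sigs.items()
        if who in APPROVER_KEYS
        and hmac.compare_digest(sig, sign(who, tx))
    )
    return valid >= THRESHOLD

tx = b"PAY 100 -> account X"
one_sig = {"ops": sign("ops", tx)}
two_sigs = {"ops": sign("ops", tx), "audit": sign("audit", tx)}
# Compromising a single machine yields one signature, below the threshold.
```

The design point is that an attacker who fully owns one machine still can't authorise a transfer alone, which is exactly the property the compromised single-machine setup in this heist lacked.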
Trace the funds to where? An Internet cafe in an ex-Soviet nation where they were split up into smaller amounts to be put in cold storage? Then what, when those smaller amounts start being used to buy gold 5 years later? Isn't the whole point of Bitcoin that any transfer is irreversible, and that it's hard for authorities to interfere and regulate, unlike, say, USD?
The question was about blockchain, not Bitcoin, so I'm assuming the corollary is a private shared blockchain used to replace SWIFT between a consortium of banks. Bitcoin would not make sense as a SWIFT replacement because it's completely public. Banks have no interest in showing their balance sheets to the general public.
We're doing something like this with lykke.com. Fiat money is implemented as Colored Coins on top of Bitcoin. It has a range of advantages over pure fiat: public tracking of issuance, lower settlement times, integration with the smart-money capabilities of Bitcoin, and more.
Makes sense, but as you noted, none of the security benefits you mention are related to the blockchain and could just as easily be implemented by SWIFT or the banks. Nor is it clear why it would have been easier to trace the final destination within the network.
In fact it seems the whole situation could have been avoided if the bank had followed the recommended practice of having a secure wire room with a computer that's not connected to any network other than SWIFT. And if they don't do that they can have the same problem on a private blockchain network.
Yes. Cryptocurrencies remove the need for banks in the first place, so you're not vulnerable to a banker/politician running Windows, inflating your currency, etc. Vulnerabilities of internal bank networks are rare; most of the risk lies in the e-banking frontend. Fiat banks running on a blockchain could indeed gain from its security properties in many ways.
Control-flow integrity (http://research.microsoft.com/pubs/64250/ccs05.pdf [2005]) could have prevented this attack. I think most modern programs use stack guards, so this should not be too hard to include either.
"CFI requires that, during program execution, whenever a machine-code instruction transfers control, it targets a valid destination, as determined by a CFG created ahead of time."
That wouldn't do anything here. If you can change arbitrary bytes of the binary and have it execute, you can rewrite the whole thing, including patching out all those extra checks too.
> This malware was written bespoke for attacking a specific victim infrastructure,
Nothing in the post indicates that it was specific to a single victim.
The JNZ patch was something everyone used to do to get rid of nag screens in shareware. I'd be very surprised if there weren't a generic utility for extracting the locations of the right bits at this point, but either way it's a simple process to find them.
Going after the print files just indicates familiarity with SWIFT and PCL.
It doesn't actually look like a particularly clever attack to me, at least not from what's in this article. Very basic security mitigations would have prevented it (like validating the Oracle files).