Online databases dropping like flies, with 10,000 falling to ransomware (arstechnica.co.uk)
237 points by SQL2219 on Jan 7, 2017 | 149 comments



I feel like this is a new golden age for blackhats. Five to ten years ago there was no IoT and all databases were password-protected by default. Now we have:

1. IoT with basically no security

2. No(Auth)SQL.

Also, dev time has become so expensive that the InfoSec teams in the companies I've worked at have had shockingly low head counts for all the responsibilities they have.


It's been the golden age for script kiddies ever since fuzzing and injection became the most effective blind attacks. Who needs a database password when you have %27?

It used to be you had to actually break into a system to exfiltrate all its data. Now you just make an HTTP query. Owning a big system was really important because bandwidth and server space were expensive. Now you rent some VPS space with bitcoin you made from spam or DDoS-for-hire using someone else's botnet that had been sitting around with a default password, and use it to distribute pirated media like it's text files. Mass-scan for SQL injections, inject some malware you found on a forum, and amass a botnet to play with. What a time to be a script kiddie.

I'm not sure about the modern age, but to me the golden age of blackhats was pre-2003, when nobody was really watching their networks or systems and advanced techniques were everywhere with zero defenses. Metasploit and the age of shitty webapps and packaged malware ushered in the dumbing-down of blackhats as a general concept.

(Get off my lawn!)


I'm not sure that security hasn't actually improved dramatically. Web frameworks seem to have SQL injection and cross-site scripting more or less under control these days. Cloud setups and containerisation should also help (servers are more easily rebuilt and therefore more likely to be fully patched, infrastructure-as-code is easier to review, most servers/containers are only running one service, etc.)


Patching is not easier now, and it wasn't difficult before. There are more steps now, and it's not easier to review.

Look at it from the blackhat perspective. I don't care what the hell loops you're going through on the backend. If there's a 0day, I'm going to use it and it's going to work, because you don't even know about it yet, much less have a patch built into a binary, pushed to mirrors that your Docker image building box needs to update from, before you can rebuild your image, push it to your servers, and do a maintenance window to switch to the new services.

It doesn't matter if you are running in a VPS in a container in a virtual machine in an emulator, because I can still use an SQL call to dump everyone's passwords, or get voting records and account details for one of your political parties. If I execute code you're still going to get owned because you don't have security patches in your kernels and you don't use signed binaries (none of which are new technologies, btw).

I would even go so far as to say you would never in a million years find exploited code in one of your systems because you somehow believe container apps are immune to basic vulnerabilities. Containers are just fancy chroots.


I've run java apps on boxes that are several years out of date on OS patches and have many bugs; explain to me how you'd exploit those?

You'd need to:

* Get through the CDN to hit origin

* Get through the patched internet-facing rproxies

* Exploit the JVM to get code execution, while your target area is something like Spring

* Get code exec on that box as the appserver user

_THEN_ use your l33t exploit..

If an attacker is capable of doing that, then absolutely nothing you can do is capable of stopping them anyway.

Patching is only important for your internet facing stuff, depending on your environment.

I WANT to patch such systems; we all agree it's poor practice not to. Sometimes life is unfair though, and in the grand scheme of things I don't see any significant risk in this scenario..


I worked at a company with that kind of app. It got owned three different ways. A user reported it after they saw it listed in a forum somewhere.

It was mainly app vulns, not platform, but there were certainly things that could have avoided it. The system was just not hardened at all, and the devs were sloppy.

Luckily they had basic network security best practices and so it was confined to frontend, but blah blah SQL blah blah MITM, still not a good situation to be in.


> If there's a 0day, I'm going to use it

Sure, but with the number of unpatched systems out there most black hats aren't paying (in time/effort or straight cash) for 0-days except for exceptional circumstances. 0-days are great for targeted attacks, state actors, and hack-for-hire/contractors, but for the everyday dude who wants a botnet? Why bother...

Look at Mirai - that wasn't anything close to a 0-day and it had a huuuuge impact. Anyone running a similar botnet could be extorting hosts and providers to this day if it hadn't been used in such a public way.


I completely agree with you and I find the ignorance in our industry mind blowing. The industry has pushed for sloppy dev practices (Lean/Agile) and has time after time claimed that all of the "extra" stuff like strict access controls and discipline "aren't needed features."

MongoDB security didn't come until version 3. VERSION 3. It was assumed you could just firewall it off and life is good.


Everyone uses Lean/Agile now so calling effectively the entire industry sloppy is a bit stupid on your part. There is nothing in the project management process that forbids you from including security as a high priority or making sure the software meets certain quality criteria.

Also, where did you get the idea that security didn't come to MongoDB until version 3? You're talking nonsense.


> Everyone uses Lean/Agile now

HN bubble generalization.

Also, was the stupid part necessary?


I disagree with your assessment for two reasons:

1. Breaking up my app (eg, across containers) increases exploitation difficulty, because it's harder to chain weaknesses between parts of my stack. A weakness in the API layer doesn't let you run against the DB directly -- you either need to find a weakness in the interface between containers or use a container jailbreak exploit. You also have fewer system libraries to use, since the container only has those needed by either the API or DB, not both.

2. The frequency of rebuilding means that a lot of system components get patched faster. Sure, you can burn a 0-day on my app, and there's relatively little I can do about it, but all of my builds are less than a week old, so you're using new exploits, not easy-to-get legacy malware.

The point of security is always economics. So containers (and more generally, frequent rebuilds and virtualization) provide security by raising the complexity and freshness of exploits required.

That's a non-trivial gain.
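
Concretely, a minimal sketch of point 1 (image names and ports are made up; this assumes plain Docker user-defined networks):

    # the DB container gets no published ports, so it is only reachable
    # over the private bridge network, never from the internet
    docker network create backend
    docker run -d --name db  --network backend mongo
    docker run -d --name api --network backend -p 443:8443 my-api-image
    # 'api' reaches the database at db:27017; nothing outside can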


Abstraction has never been a big barrier to chained vulnerabilities. And hackers do not rely on a dynamic linker or shared libraries to deliver their payloads. (Script kiddies do, because they're using some paint-by-numbers malware package) Any actual hardening of a system will more than secure it from "the excess of unnecessary binaries" that some people fear.

So you're running brand new code on your production systems on a regular basis? Fantastic! I'll find a new 0day in the latest releases since those are the ones that have had the least review. If you're using an old stable build, the 0days keep working until one gets leaked, and then you just find another one.

How do you figure containers raise complexity of an exploit? The exploit is of the app or the stack, but the container isn't part of the app stack. A container is literally just a way in which you run your app. If I can execute code, the game is over, in all container systems that don't prevent traditional exploits. The only thing containers make easier is namespaces, which is a white hat mitigation. Like locks on doors, they only keep honest people out.

By the way, exploits don't really need to be fresh. Many exploits getting patched these days are years old, and some were patched years ago and a recent update re-instituted the vuln. You can resist 99% of attacks using 10 year old technology, but nobody implements it. They just say stuff like "I put it in a container!" and your boss is happy so everyone goes on with life.


Containers make things a bit harder from a persistence view. There was a great CCC talk this year on pwning architectures based purely on AWS Lambda [1]; worth a look.

1: https://app.media.ccc.de/v/33c3-7865-gone_in_60_milliseconds


Truth be told, security has not improved that much.

Docker is just a fancy chroot; a motivated attacker can escape from it, and it remains far less secure than a FreeBSD jail or a Solaris zone.

Except maybe for some somewhat safer compiled languages (Rust, newer versions of C++) and a few interesting tools (like AFL), there have not been many improvements in security mechanisms in recent years.

SELinux is still so annoying that it's commonly the first thing disabled in many installs.

A large number of sysadmins and developers still do a chmod -R 777, not fully understanding what they are doing.

And in the era of microservices, the traditional 3/4-tier architecture can easily be replaced by a clusterfuck of containers deployed in a flat network where it's impossible to monitor what is speaking with what. And it's much easier to make a mistake like putting a database on the Internet with no authentication whatsoever...

What is more, because of containers, infrastructures tend to be far more diverse than ever: python, node, php, ruby, go, java, each with various versions, from various distributions, with various libraries. Each of these environments must be properly monitored, but when you deal with thousands of containers and hundreds of technological stacks, with 3 or 4 versions for each layer, it becomes a nightmare.

In languages that encourage bundling, Java with war webapps for example, I've rarely seen developers properly monitor security issues in what they ship. Too often I've seen only opportunistic updates. And in some extreme cases I've seen products shipping versions of libraries 10 or even 15 years old. Containers being a way to generalize bundling, I don't see them being an improvement in that regard. Quite the contrary, in fact.

However, to be completely honest, new tools like chef/puppet/containers and new methodologies like Infrastructure as Code have somewhat helped make updates less frightening, as they reduce the risk/impact of something breaking in the process; no more "don't touch it, it will break something". Even containers can help in that regard: it's easier to do A/B testing, avoid downtime, or put servers in maintenance with them.

Fundamentally, some things are better, some things are worse, and in the end the security level has not really changed.

And honestly, it's a truly frightening view. Software/infrastructure security is becoming more and more important as sensitive operations are increasingly done through websites (e.g. banking operations), private data is increasingly stored in the "cloud", and an increasing number of devices are now connected.

Security flaws are still as numerous as before, but their impact, once they're exploited, tends to be far greater. If it continues like this, something really bad will happen; exactly what is difficult to predict (a massive data leak? a major industrial disaster caused by a hack? an election being manipulated?), but it will happen.


The OS and even the lower-level libs are rarely the first target anymore unless there are some nice pre-packed pwns on msploit or wherever; sure, there are exploitable bugs in such things, but even getting to these hosts is hard now we're in the age of CDNs and ELBs and so on.. Also NX, ASLR, non-exec heap, tools like selinux etc etc have made pwning these as 0day a much harder thing.

So the noobs have gone upstack. Why bother trying to break out of an ASLR'd JVM behind a CDN with selinux and so on, just to get a shell on a box, if your goal is to read a .properties so you can dump the db out? Instead you can probably pull the data one request at a time with some bad form sanitisation or whatever. The latter also means you're targeting code the actual target has written, which is more likely to have easy-to-find issues, since no one but the target is looking over it; compare that to, say, openssl, which has rather a few eyes on it..

There's a security flipside to config management though.. Tools like chef/puppet, while enabling you to maintain a _basic_ level of state, actually tend to be great attack surfaces and so overall reduce your security. How many ops teams out there would notice if you pwned one of their lappys and replaced some puppet handler code before they push it into CI and it gets run across their entire estate? Over-using config management (like using it to orchestrate app deploys) with even mid-skilled ops folks generally makes an attacker's job much easier than, say, rolling out immutable containers..

Most places I've seen have, without much thought, ended up with superpowerful CI boxes where, by owning one, you get absolutely EVERYTHING; and they'll happily give out keys to all the devs while not letting them near prod... >_<

If you have a shell account on a CI and the jenkins taskrunner user has access to ssh in anywhere you want to be, then you're just a commit away from having those rights too...

Just my $0.02

(edit: actually, if you just have commit rights to any ./script that is running on that jenkins, you can do everything that jenkins can do without even needing a shell...

Your jenkins builds the artefacts for your app and ALSO does the deploys? Bet you can get into prod with two lines of nc and push)...


> How many ops teams out there would notice if you pwned one of their lappys

Why are their lappys on the internet? Aren't we discussing online vulnerabilities?

After all, if you pwn an admin-credentialled computer, you can do all sorts of damage in any but the most locked-down shop. Puppet has little to do with it per se - it's the privileges an ops laptop will have that matters.


Why is a laptop on the internet? I mean, it's much easier to pwn an employee laptop and then use that to get everywhere than it is to break into an infra over the internet...

Even if you can't spearphish 'em; several beers at a meetup or three followed by a subtle rpi install when you're somewhere useful usually has better results.

So I've.. Been told.

(edit: The point was though that in almost all targets, all you need is commit rights to a single repo to own everything; so you don't even need a sysadmin just a dev... Hell, PM's and BA's sometimes even have commit rights...)


Googling %27 didn't find much - can you elaborate?


URL-encoded single quote character (for SQL injections).
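
For instance (a made-up endpoint, purely to illustrate; the server decodes %27 back to ' and splices it into the query):

    # attacker requests (decoded: id=1' OR '1'='1):
    GET /item?id=1%27%20OR%20%271%27%3D%271

    -- which a naive string-concatenating backend turns into:
    SELECT * FROM items WHERE id = '1' OR '1'='1'   -- matches every row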


3. Bitcoin


Indeed, bitcoin is the true enabler.

But the dangers are pretty high. You never know how ten years from now the digital trail you left comes back to bite you.

Cashing out large amounts is not trivial. Sure, you can meet in private locations with local bitcoin buyers, but when you have $1 mil to sell it gets tricky; there aren't that many buyers in any particular area. And then you have the problem of justifying how you suddenly have one million.


What actually happens when you try to get out of bitcoin? Let's say I put in $5k a few years back which is now worth $100k. I have to go on the exchange, nominate a bank account, sell, and then they transfer, say, USD into my account? At that point, is this "capital gains" taxable?

Assuming I'm willing to pay the tax if it's due: has anyone had trouble with authorities questioning your new cash pile, say if before this you had no real money and lucked out on Bitcoin?


You have proof of this as the bitcoin transactions are public. The taxman can verify that you bought your coins five years ago and sold them this year. Multiply those by the BTC exchange rate five years ago and now, and it should be obvious that you put in 5k and got out 100k. You could've bought stock, or other currencies, but you bought BTC and it skyrocketed.


> The taxman can verify that you bought your coins five years ago and sold them this year.

How? The public block chain only contains records of how coins moved from one wallet to another. It doesn't have any information about who those wallets belonged to, or what the terms of the transaction were. Maybe the coins were sold for fiat currency, or maybe they were compensation for goods and services. There's no way to know just from the information in the blockchain.

[EDIT] Let me make this more clear: it is easy to anonymize BTC. It is so easy that the technique even has a name (bitcoin tumbling) and there are companies that will do it for you as a service (e.g. https://bitlaunder.com). (I thought this was common knowledge around here.) In the face of these facts, how is the IRS going to enforce the tax code against someone who tumbles their coins?


Even if you received the BTC for goods and services, and then held them for 5 years and they 20x in value... you still owe capital gains. It's like saying: 5 years ago I sold goods and services for $100, bought stocks with the $100 and now that stock is worth $2000. When you sell the stock, you pay capital gains on $1900; you also should've reported that $100 in revenue from 5 years ago.

As for anonymity... you lose it when you associate a bank account to get liquidity (as mentioned by GP: "nominate a bank account"). Of course, this assumes you can't get liquidity in some other way... but that's non-trivial with large qty of BTC.


> As for anonymity... you lose it when you associate a bank account to get liquidity

Yes, that's true, but that's not enough to enforce the tax code. (See the update to my OP.)


> Yes, that's true, but that's not enough to enforce the tax code.

Uh, yes it is. Money came into your bank account, and you'll need to explain its origin if the IRS audits you.

Do you honestly believe the IRS would just give up on enforcing the law because you used a tumbler before you converted the bitcoins to USD and put the money in your bank account? The fact that you have the money in your account at the end of that process is what really matters.


In Canada, I believe (I don't own any bitcoin), you can withdraw from a few dedicated bitcoin ATMs; as far as I know you don't need to also use a regular bank ATM card in the process.


Sure. But enforcement of said law is near impossible if you're trying to avoid it and you find a willing BTC purchaser.


A good chunk of tax law is unenforceable if the evader is really clever. That's like observing you can murder someone and get away with it if you leave no evidence. But it's a big risk. You wanna risk jail to keep 15-18% extra? I know I don't.


If you want to be legit you'll take care of these aspects. For example, I bought bitcoin by moving money through a bank wire to a well regarded bitcoin exchange (Bitstamp). The same for cashing out of bitcoin. So I have a clear money trace.

If you mined your bitcoin, or gained it by buying through LocalBitcoins, the taxman might give you some trouble, but generally the presumption of good faith and innocence still applies, meaning that if they don't agree, it's kind of their job to prove that you are guilty of something (i.e. that you got your bitcoin through ransomware).


> If you want to be legit you'll take care of these aspects.

Well, yeah, obviously. The issue is not how to deal with honest actors, it's how to deal with dishonest ones. (See the update to my OP.)


Actually, I think you lost track of the start of this thread which was explicitly about honest actors and whether they get swept up with the dishonest ones.


Yes, you're right. :-(

Sorry about that. I would go back and edit my OP but it's locked already.


The point is that if your records are a mess, then it's your problem. Once the taxman gets wind of part of the "cashing out" (either by seeing the BTC-to-dollars transaction, or by seeing whatever large purchase you made with those dollars), they don't need to find out how you earned these bitcoins or at what price you bought them.

They can simply ask you to provide the evidence yourself, and if you cannot, then tax the full rate on the full amount.


The same way as you deal with all other kinds of money - you enforce at the transaction points. If you're spending more than you're earning, or your bank account has unexplained cash, the tax man will ask you what's going on.


"how is the IRS going to enforce the tax code against a someone who tumbles their coins" TL;DR - in the same manner as they enforce the tax code against someone who gets income in under-the-counter cash.

If you obtain large amounts of money, then you (presumably) will want to use it to obtain large amounts of goods and services. You can hide your income, but it's harder to hide your spending.

If you spend it all on food, booze, drugs and minor items, then they don't care about you, since the amounts aren't that large.

If you want to buy mansions, flashy cars, high end jewelry and ownership in companies, then they have evidence of you spending much more money than you have declared income. At that point, it becomes your problem - carefully tumbling your coins simply gives the IRS evidence that instead of treating your situation as "forgot to declare that income" (fines) they can prove that you took explicit steps to hide and disguise that income, which carries a risk of jail for tax evasion.


It's entirely unclear to me why bitcoin tumbling provides any systematic anonymity. Sure, it makes it more difficult, but it's just more transactions which need to be tracked. There is nothing that makes it impossible.

Indeed, in many ways they make things worse for users who attempt to gain anonymity by using it. Once the addresses which are used by the tumbler service are identified it is pretty easy to identify other suspicious transactions.


> How?

In practice, what happens is that they require you to keep records tracing your assets from when you get them to when you sell them, and assume a basis of 0 (i.e. capital gains gets taxed on the entire value of the coin) otherwise.


I think you missed this part of the gp post:

> Assuming I'm willing to pay the tax if it's due: has anyone had trouble with authorities questioning your new cash pile, say if before this you had no real money and lucked out on Bitcoin?


Yes, someone else already pointed that out:

https://news.ycombinator.com/item?id=13347335


At some point the wallet will correlate with meat space.


Yes, but if you want to enforce the tax code against dishonest actors it is not enough to correlate "at some point". You have to correlate an entire sub-chain and show that all of the intermediate wallets were controlled by the same entity. (See the update to my OP.)


Actually, don't rely on my advice, but I am pretty sure that if you can't prove when you originally bought it and at what price, you have to pay capital gains on the FULL amount. The burden is on you, not the IRS.


Yep. See the second page of the tax form instructions here: https://apps.irs.gov/app/vita/content/globalmedia/4491_capit...

"If taxpayers cannot provide their basis in the property, the IRS will deem it to be zero."

That means 100% of the sale price of an asset is treated as gains.


> You have to correlate an entire sub-chain and show that all of the intermediate wallets were controlled by the same entity.

That is not true. In fact, the IRS does not care about the vagaries of bitcoin wallets, or the blockchain. The IRS cares that you got money that you didn't have before.


The taxman can verify that coins were bought and sold, but not who initiated the transaction. I can buy 5K worth of bitcoins, print them out on a sheet of paper, and sell you the paper for 5K. When the bitcoins hit 100K, you sell them. You're the one with a 95K gain, not I.


The government doesn't give a shit, as long as it's legal and as long as you pay the taxes.

Bitcoin isn't the only thing in the world that goes up and down in value, so it isn't new from the tax perspective.


To be fair, the IRS doesn't care if it's legal, as long as you pay your taxes. There are rules about how you report your illegal gains (and deductions on them).


Or you could set up a corporate bank account in the Caymans and cash out via btc-e (which is Russian) and not pay any tax.


Yes, and get popped for tax evasion when you move that money back to a US bank to spend it. Although maybe not for 95K.


I guess the real question is: why on earth would you do that? The IRS won't get you for using an offshore card to pay for... well, everything?


A fresh Coinbase account will let you sell $15,000 a week, and I assume that this increases over time.


Just keep the documentation for purchase and sale, and for any costs incurred while holding / transacting your BTC.

Speak to a local tax advisor about CGT.

100k is a significant amount and I would advise you don't intentionally break the law "cashing out".


That depends what country you're in. In my country there is no capital gains tax.


Or you could, you know, use Bitcoin as money instead of trying to convert it to Dollars all at once.


Earn Bitcoin as blackhat, then use it to order pizza to your real address. Seems like a solid plan.


I tried tracing Bitcoin transactions once. Not being a security professional in any way, you should take my opinion with a grain of salt, but what I found was that if you send the coins through a mixer, it's probably impossible to figure out where they went afterwards. There are just way too many ways to obfuscate where they ultimately end up, even with a bit of effort. With proper precautions, I would not expect to be tracked down through the coins themselves.


Also anyone who can trace bitcoins just won't let themselves be in such a position to begin with.


Go to Alphabay and purchase some drugs. Buy MDMA at $9-20/g, sell it at $80-100/g. Buy LSD at $1/blotter, sell it at $10 a blotter. Buy Xanax at $0.90/pill, sell it at $2-5/pill. Obviously you need to make some connections for this to work.


The idea is to put yourself at less legal risk, not more. The penalties for drug dealing are pretty harsh in most of the world.


You would first have to use a coin tumbler (that you can trust) for this to work?


Get an airplane ticket, fly to China or Russia or the Cayman Islands, cash a gazillion bitcoins into any currency you want, or gold bars, or pebbles. Done!

Naturally, you'll have to shell out some "transaction fees" to some authorities, but that's just a standard laundering thing.


A major plot point of Neal Stephenson's novel Reamde (2011) was people paying off ransomware in an MMORPG with in-game gold which was easily exchanged for real money. One of the characters comments that ransomware was not possible until anonymous payment online could happen. It would be interesting to know when Stephenson became aware of bitcoin.


Before bitcoin, and even now, there are options such as Liberty Reserve and WebMoney.


It reminds me of the days of everyone running open WiFi, or even WEP.


I run open WiFi at my home; I still pretend to live in a world where sharing your Internet connection is just basic human decency. I inspect my router's logs from time to time to check for huge blobs of traffic not coming from my devices, and that's about it.


It's not the huge blobs of traffic that you should worry about, but illegal activities... you can get yourself into serious trouble this way, and trying to prove later that it wasn't you who downloaded child porn or hacked some institution is, I presume, not the nicest experience.


I'm pretty sure they have to find the downloaded files or traces of them on your machine. Just the fact that your IP did something is not enough for prosecution (though it might be enough for a search and seizure).


In Germany it's enough to just have your IP. They might not get you for distributing CP without traces on your computer, but you're still liable ("Mitstörerhaftung").

But even if you're legally totally in the clear and don't mind the huge trouble of having all your gear confiscated for who knows how long, people have lost friends and jobs over accusations of CP or other crimes before.


> In Germany it's enough to just have your IP.

I might understand why you should be responsible for an open Wi-Fi but "enough to just have your IP" is ridiculous. It is easy to hack a (D-Link [1]) wireless router. Why should you be responsible if somebody uses it for illegal activities?

Botnets use millions of devices of unsuspecting ordinary citizens.

[1]: https://hn.algolia.com/?query=d-link&sort=byPopularity&prefi...


IANAL, but I believe you're responsible for doing "reasonable effort" things for keeping your network secure. That might or might not include not using stuff that is trivially hackable, depending on the court and how expensive your lawyers are.


Isn't Mitstörerhaftung a concept of civil law, applicable to stuff like copyright infringement but not to crimes like CP distribution?


Which they'll do over, what, six months? Meanwhile, you're having to re-buy all of your equipment unless you're willing to twiddle your thumbs for that time.


Right. But given that the risk is minimal, and the cost is not 'you go to jail forever' but instead something like $500 for a new harddrive (or a couple thousand if you have a laptop and can't just get a new harddrive), it's really not worth worrying about.


I'm guessing you live in an actual house. If you don't mind my asking, how big is your lot? My parents retired to a house on a quarter-acre and they pick up a couple neighbors' networks with good signal.


No, actually I live in a block of flats/apartments. Up until a year ago I used to live in a block consisting only of studio apartments, so I had lots of neighbors, since then I've moved to a "fancier" area and to a one-bedroom apartment and I only have two other neighbors on my floor. I don't think I've ever seen a non-encrypted wifi connection among my neighbors for at least 3-4 years now.


It's really nice that you share and encourage an open community; people like you have certainly helped me out in the past.

Open wireless routers were far more common 10-15 years ago, when ISPs first started pushing them. Most people hadn't warmed up to the technology yet, and security was barely on the radar; similar to what we're currently seeing with IoT.

Maybe you have no enemies and trust your neighbors; that's awesome. But, anecdotes aside, someone could cause some very ugly problems with your open wifi, without much skill or effort, maybe just for the lulz.

I feel like cheering you on for giving people free internet, but at the same time, I want to pull you aside and say "hey crazy person, please put a password on your WAP." Because it's a bummer when bad things happen to good people.


>> No(Auth)SQL.

Absence or presence of auth is irrelevant. Your database servers, message queues and other infrastructure shouldn't be accessible from the internet. No auth protocol can protect you from this.


If the default settings are dangerous, then the product is to blame, not the user.


The stakeholder who blames just one layer of security for a breach is gonna have a bad time. Truth is, the same reason why people don't change the default (no security) also explains why the server ends up too close to the border. They're cheap and/or ignorant.


Everyone is ignorant of a product until they build experience with it. No baby is born with innate knowledge of how to properly configure a MongoDB!

If a product is misconfigured by default and it takes expertise in the product to not leak data, then the product is unfit for purpose; it will burn anyone who wants to learn it.

What if you learned that Linux had a massive security vulnerability that leaves the OS open to remote code execution? What would you say if Torvalds laughed at its users, saying that if they didn't change that low-level kernel security setting, the users were ignorant and deserved their troubles?

I think no one can pretend to understand all of the settings in the hardware, firmware, drivers, kernels, the many other OS layers, databases, etc. We rely on having safe and secure default settings; it is the only way an insanely complex machine like a modern server can be usable.


> What would you say if Torvalds laughed at its users, saying that if they didn't change that low-level kernel security setting, the users were ignorant and deserved their troubles?

That choice of example is particularly weak, given that Linux developers are explicitly working on hardening the kernel's internal security: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Pr...


Many infrastructure services don't have any auth at all. This doesn't make them bad; it just means these products have been developed for trusted environments. Even if MongoDB had been configured properly by default, it shouldn't be exposed on the internet anyway. And you can't blame the devs just because somebody doesn't know how to configure iptables.
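
For reference, the kind of iptables baseline meant here is only a few lines (a sketch; the app-server address and ports are placeholders, and you'd persist the rules with your distro's tooling):

    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp -s 10.0.0.5 --dport 27017 -j ACCEPT  # DB: app host only
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT                 # keep your ssh
    iptables -P INPUT DROP                                        # default-deny last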


A couple of things:

A person develops 2 or 3 apps, sets up 2 or 3 databases, and thinks he/she is a professional.

Many businesses and managers care less about security and more about getting the deliverables into production.

Many CEOs and managers lack the understanding that once you launch (app, store, website, etc.) it doesn't end there; instead, it moves into maintenance.

My company just hired an external company to assist with our IT infrastructure. I was asked to meet with the person that showed up to begin the takeover. He was not interested at all in understanding what we do as a business. If you don't understand your clients, their interests, their responsibilities and their obligations then, simply put, they are fucked!


I see this sort of nonsense ALL THE TIME. Most of the work I do/have done is at non-tech companies, so no one above the 'team leader' level is even remotely technical. Any sort of security that costs time or money becomes either a joke ("You want us to spend money on what?!"), or is just ignored entirely.

In my experience the issue isn't so much that C-levels and managers lack understanding; it's that they refuse to acknowledge that they lack it, and refuse to listen to the people who do understand, because it's "Just IT". This mindset is something I've seen backfire many times over the years, and in the end it always ends up costing more than just doing things properly in the first place.

/endrant


I know a company that got hit by this. Through some mistake in configuration, they exposed their mongodb. What I understand is that the ransom request is a total scam: they didn't download or encrypt any data, just ran the drop command and inserted the ransom message.

But they didn't hit the oplog/journal; fortunately the full history (a few months of data) was still in the journal, so they were able to replay it (minus the drop commands) and restore their data.

Certainly scared a lot of people and (hopefully) taught a lesson about double-checking what's exposed to the internet.
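
For anyone curious, that replay looks roughly like this (a rough sketch only; the timestamp is a placeholder, details vary by MongoDB version, and you'd test on a copy first):

    # dump the oplog, then replay it up to (but excluding) the drop
    mongodump -d local -c oplog.rs -o oplogdump
    mkdir restore && cp oplogdump/local/oplog.rs.bson restore/oplog.bson
    mongorestore --oplogReplay --oplogLimit 1483747200:1 restore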


Absolutely ridiculous that MongoDB is this insecure by default.


"Recent" versions aren't. If you install .rpm/.deb versions you're also covered

(And anyone who deploys any service on an internet-facing server should know what they're doing)

See https://blog.shodan.io/its-the-data-stupid/


> (And anyone who deploys any service on an internet-facing server should know what they're doing)

I agree that they should. History has shown us, however, that they very often don't.


Insecure defaults are a key driver of increased adoption! It's worked for, at least: mongodb, redis, jboss, and elasticsearch.


You expected a database sitting on the public internet to be secure? A very young one, no less?

Who is downvoting this? People who hate common sense?


How about expecting that the DB doesn't bind to 0.0.0.0 by default, and forces passwords to be set during installation? Is that such an unreasonable thing to expect?
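
For reference, that amounts to something like this (a sketch; the flags are standard mongod options, the credentials are obviously placeholders):

    mongod --bind_ip 127.0.0.1 --auth    # loopback only, auth required

    # then create an admin user before exposing anything:
    mongo admin --eval 'db.createUser({user: "admin", pwd: "use-a-real-password", roles: ["root"]})'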


Yes. Expecting anything to be secure by default is unreasonable. It would be nice, but I would not just expect it, I would confirm it.

Expecting it to be secure on the internet is even less reasonable.


I'm not talking about it being "secure by default". Just not being obviously insecure by default is more than enough...


+1 Ideally, databases should never have internet access


Tell that to frontend javascript devs who think they don't need a back end system.

Without a public facing DB they're pretty much dead in the water.


The more things change, the more they stay the same. I'm thinking of people who, back in the 90s/early noughties, didn't want to bother with a middle-tier application and instead talked directly to the database from their fat clients (1) because it was easier.

(1) This, btw, is what many SPAs are: fat client apps running within a browser. When I came back to web development four years ago, one of the biggest surprises was how much like desktop development it had become in many ways.


If you are using one of the main cloud providers, the database they provide will almost always do.

For Azure at least, they protect it with a firewall that blocks all IPs by default. But a database with no internet access is unrealistic.


I can't downvote, but your question seemed like a non sequitur.


If people RTFM'd, it would never have been an issue.


Defaults matter. Been shown time & time again... and not just for computers (look at organ donation!).


If people constantly misuse the design, the people aren't to blame; the design itself is.


No they shouldn't, that's not how secure engineering works.


This is Hacker News, not Fox News. At least be intellectually honest.

It's not insecure by default. It just binds to all interfaces. Apache and Nginx both do this and we don't consider them to be insecure. Should a database be doing this? That's debatable, since it's a tradeoff between security and ease of use.

But that said, if you are running an internet-facing server without a firewall, you have bigger problems than just your database.


I've always thought that the biggest service openbsd did was teach people to remove unneeded stuff and turn off unused services. Remember when people used to sneer that X years without remote root in the default install was no wonder, because the base install didn't do anything useful?

I also don't get this "firewall" idea. Why make something listen for everything, and then place a system outside to restrict it? Why not just whitelist what you want to listen to in the first place?

Note, I get that binding an application to localhost and then letting a dedicated proxy do the heavy lifting to link up with other systems (eg stunnel or haproxy) - but what does packet level filtering really gain you?

In general I see firewalls as just adding complexity - one more source of bugs and potential mis-configuration. (Say, the fun when IPv6 exposes the soft inner network that everyone thought was "firewalled" when in fact it just had broken connectivity due to crappy NAT born of the scarcity of routable addresses.)
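
Auditing the listen-whitelist approach is at least trivial; anything bound to 0.0.0.0 or :: is reachable on every interface:

    ss -tlnp    # every TCP listener and its owning process
    # (or netstat -tlnp on older systems)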


If you don't necessarily know what's going to be running on a machine, a firewall gives you control over what's allowed in or out. If a lazy dev installs some tool that listens to everything on a machine that's on the internet, a firewall will protect you from their laziness.

In an ideal world, everyone would care about this stuff (and have time to properly set these things), but we're not in an ideal world.


Right. I would rather fix the broken developer once, than paper over systems with a firewall. Perhaps I'm too idealistic. (IMHO proper devops does this - helps give devs a proper view of system administration by sharing knowledge and responsibilities).

I'm also pessimistic enough that I think allowing development to install back doors (eh, "useful helper daemons") willy-nilly in production systems is a bad idea ;-)


Who said the devs have access to the production systems? :) You can still lose valuable information with the loss of a testing server.

But you're acting as if you know which dev is the one who is going to do 'it'. I'd rather have a firewall that is largely set-and-forget than keep tabs on teams of devs that go through hiring cycles. There's already enough for ops folks to do without having to psychologically evaluate developers... besides, I've been through a few devs who agree sincerely not to do $bad_thing, and then caught them a few days/weeks/months later doing it again.

It's sad, really. I'm not even a security zealot, but I have overheard the folks in my small company tell each other not to let me know that they've signed up to a SaaS with a weak password (company name + digit).


Oh, I suspect all devs, I just think a sound process around small, cross-disciplinary devops teams is the preferred approach.

I get that a firewall can sometimes help fight broken practices (eg: bind on all interfaces, no password by default). But if your devs end up deploying password auth in general (rather than key/cert based) - with weak passwords in particular - your firewall is unlikely to help in the case where a service is supposed to be exposed.


> People who administer websites that use MongoDB should ensure they're avoiding common pitfalls by, among other things, blocking access to port 27017 or binding local IP addresses to limit access to servers.

Misconfigured mongodb servers are the issue here, not firewalls. A default mongodb install shouldn't blindly listen for connections from anywhere, though.


No excuse for not backing up an online db at least daily.

An irrecoverable disk crash could hold your db ransom for $Inf.


Does anyone know of a "security checklist" one could follow for mongodb?

I have not used mongodb in any production environment but it would be nice to know what one should do to make it secure.


I don't use MongoDB, but generic recommendations apply and would likely go a long way towards preventing this:

- deny (all) incoming traffic by default

- permit only desired traffic (to specific ports) from specific hosts

- avoid binding (listening) to interfaces you don't need to

- set up / verify authentication is in place

In addition to the link to the security manual that cpolis posted, there's also a MongoDB Security Checklist [0].

[0]: https://docs.mongodb.com/manual/administration/security-chec...
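
The first three translate to a handful of ufw commands, for example (a sketch; the source host and ports are examples):

    ufw default deny incoming
    ufw default allow outgoing
    ufw allow from 10.0.0.5 to any port 27017 proto tcp   # app server only
    ufw allow 22/tcp                                      # keep ssh reachable
    ufw enable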


Thanks! I appreciate your recommendations!


"Does anyone know of a "security checklist" one could follow for mongodb?"

Firewall, on the local machine, ALL ports except for the ones that you expect to be accessed remotely.

This is for all hosts - even your laptop. Never mind mongo.

There is no reason at all to leave inbound ports open for requests you don't expect to service.

Further, and I know this makes people's heads spin and they foam at the mouth, but for ports you do need open but that don't serve the public (ssh, for instance), set up a port knock. Now it's invisible and you don't care about the 0day for that service.[1]

[1] Stop. Take a deep breath. Re-read the above post and realize that I did not say to remove your login passwords and keys and rely on only the port knock for security. Take another deep breath. It's going to be OK.
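
If you want to try it, knockd is a common choice; a sketch (the knock sequence is made up):

    # /etc/knockd.conf
    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        tcpflags    = syn
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

    # client side: knock, then ssh in as usual
    knock server.example.com 7000 8000 9000 && ssh user@server.example.com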


https://docs.mongodb.com/manual/security/ is a good start (not being glib).



The trick is to never assume that anything you're running is secure. Because nothing ever is these days.

So the usual rules apply: (1) have a firewall with only the bare minimum ports open, (2) make sure everything you are running is on unusual ports, especially SSH, (3) use a VPN, jump hosts or port knocking if you need remote access, (4) use something like Fail2Ban or Sentry.


The unusual-ports thing is just a total waste of time. If someone wants in, they are not going to brute-force your ssh password over the network unless you've used stupidly simple passwords. They might get in via a targeted attack using reused passwords, which an unusual port won't stop either. If you can't control that, then use 2FA or force the use of ssh keys.


True, but it doesn't stop people (and worms) from trying endlessly and filling your logs with tons of rubbish that makes it hard to spot the real threats.


Fail2Ban helps there.
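
A minimal jail is only a few lines (a sketch; jail names and defaults vary a bit by version/distro):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    bantime  = 3600    # seconds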


Only allow SSH login using keys, never passwords.
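
Roughly, in /etc/ssh/sshd_config (then restart sshd, and keep an existing session open while you test):

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes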



Step 1: Uninstall mongo


Step 2: Spend thousands of dollars / hundreds of hours rewriting your entire app just because you didn't think to have a firewall on your server.

But seriously thank you for that stellar contribution.


The interesting part is the relatively low ransom amount.

I understand it needs to be low enough to make payment an "attractive" option (at least compared to other means of recovery, if any...). But 200 USD is significantly less than the 500 USD ransom extorted from private PC users.

Should we conclude the extortionists expect the database content to be worth less to a company owning it than a private person is willing to pay for his/her pictures, music files and documents?


If the data is valuable presumably it would have a backup.

Many of these could be caches, or rebuildable from other sources.

Many could be disposable.

Since you can't really know which database is valuable and which is not, you sort of average the price, since this is a volume game.

Or maybe he wants to give the impression that this is not very profitable, to keep others from doing the same.


"Promises to restore the databases in return for a ransom payment are dubious, since there's no evidence the attackers copied the data before deleting it."

I guess the high risk of getting nothing in return is affecting the pricing.


They should do a tit-for-tat data release. Pay 1/10 of the ransom, get 1/10 of the data.


Hypotheses:

- A/B testing of ransom prices will happen

- the person paying the ransom is not the company but an employee afraid for their job

- companies more likely to be able to restore from backup


It would be really hard to run an A/B test of any meaning, since the value asked and the data would both vary; in a true A/B test there may be only one variable, the population must be very similar, etc.


The "data" (the unknown variables) always varies. That's why you need n >> 1 in A/B tests. And you are only testing one variable: the asking price.


Thanks, though what does "n >> 1" mean?

(Asking since to me it reads as gibberish and there's no way to Google it myself.)


Usually this notation ("x >> y") means "x much larger than y," so you need way more than 1 person to A/B test.


Thanks. Normally you need a few thousand to do a valid A/B test, which is why the comment made no sense to me.


You need more than one person to A/B test against; n here is the sample size.


"much greater than"


Correct. Running an e-commerce store, each buyer buys different items. You need a big n.


What is a "big n" mean?

(Asking since I'm used to running valid A/B tests on groups in the 1000s.)


Once you agree to pay, they may come back with a higher offer. No reason to A/B until they have a live one on the line.


For the sake of exhausting all alternatives, it could be a not-entirely-evil criminal activist who's not trying to maximize their gains at all.

The "tough love" approach to raising awareness about security.


I assume that's the case. Securing mongodb isn't rocket science; it's not all that different from any other database, so I can't imagine a business with any value has unsecured mongodb instances.

What I mean is, it's pretty ignorant not to turn authentication on at all just because it isn't on by default.

Even if you don't want to, or don't think to, configure mongodb itself, setting up a firewall seems like common sense.

Thus, the only reason they'd be unsecured is that they're either for random tests or hobby projects.


It might just be as simple as $200 being a lot of money to the attacker.


Maybe, but some of the mongo dbs listed on shodan didn't have anything good in them. Can't say all, but most of them were just random shit.


Several people below mention the ransomware aspect of this is a scam and no data is ever returned.

This is, ironically, a good thing, as it poisons the well for 'legitimate' ransomware. The fewer people who expect paying up to restore their data, the fewer people will pay up, and the less viable ransomware is as a business model.


I am not familiar with MongoDB, but if 10,000 MongoDB instances are "misconfigured" then perhaps the defaults are to blame, not the users.


I think there is definitely a need for some work in this space... but the fact is, there are a LOT of databases open to the wild. These are just the ones that didn't bother to set an admin password. They also didn't set up any firewall rules.

    0. Upload client public key
    1. Set up SSH auth by cert/key
    2. Move SSH to a non-standard port
    3. Enable passwordless sudo
    4. Disable password auth
    5. Set up firewall to only allow the new SSH port
    6. Set up port knocking
Those are the first few things I do on a server... As locked down as I can get before doing anything else... of course, I'll also put the new ssh port and an alias in my ~/.ssh/config ...
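
The alias bit looks something like this (values are placeholders):

    # ~/.ssh/config
    Host myserver
        HostName 203.0.113.10
        Port 22022
        User deploy
        IdentityFile ~/.ssh/id_ed25519
    # now plain 'ssh myserver' picks up the port and key automatically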

Most cloud providers offer the option of a "private" IP space... just having a proper firewall config (ufw/ipchains/iptables, etc.) can go a long way towards helping lock things down.

Of course, that only goes so far when you aren't using passwords or TLS for client/server communications. But it's better than leaving the front door open with a sign saying as much.


Blackhat hackers attained product-market fit in 2015.




