A Linux botnet is launching crippling DDoS attacks in excess of 150Gbps (pcworld.com)
89 points by testrun on Sept 30, 2015 | 84 comments



Rate limiting on password logins ought to be the default, not some optional extra that only the security savvy know to install.

Defaults matter: not implementing rate limiting by default in sshd has left an open door for these people to walk through & attack the net. Sure, you can blame people for choosing poor passwords but, given the reality of the millions of machines out there, as programmers we know that some of them will have poor passwords because people are people. Failing to code with that expectation in mind is doing our users a disservice.
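
For anyone who wants to bolt something like this on today, here's a minimal sketch using iptables' recent module (the thresholds are arbitrary, adjust to taste):

  # drop sources that open more than 4 new SSH connections in 60 seconds
  iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
  iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 5 --name SSH -j DROP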


Totally agree with your sentiment but in this case I don't see how rate limiting helps, or at least not much.

If you're building a botnet you don't need to crack any specific machine, so you can distribute your attempts across more hosts. E.g. rather than hitting one machine with 1,000 passwords/sec you target 1,000 machines at 1 password/sec each (or whatever rate you're limited to). There's no shortage of badly configured routers.


The number of people that have more than 10 machines at their disposal is much smaller and would have stayed much smaller if it weren't for this poor default in the first place.

I'd go further and just argue that password based login should do a password strength calculation by default and estimate how long it will take you to get cracked.

"You've entered the password this machine will be cracked in roughly 5 days. Would you like to set a different password? (n / Y)"


I agree that defaults are crucial, but it's weak passwords, not rate limiting, that is the fundamental issue here. Rate limiting doesn't help if you get in with the first try of admin:admin. Passwords should be generated from a secure random source and specified on a label on the device.
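
For example, something as simple as this at manufacture time would do (openssl is just one option; any CSPRNG works):

  # 16 bytes from a secure random source, printed on the device label
  openssl rand -base64 16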


I agree with both of your arguments. From my experience ethically cracking accounts, rate limiting, non-default usernames, and non-default passwords should be the #1 priority for Michelle Obama or the next first man/lady. It is a very serious problem.



One of the first things I was taught in network security was

"When installing SSH always set it to key based login only NO Password login allowed"

Passwords are not to be used here; they can be broken by automated distributed attacks, while strong cryptographic keys can't.
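
For anyone who hasn't set this up before, the usual dance is roughly the following (the hostname and key type are just examples):

  # on your workstation: generate a keypair and push the public half to the server
  ssh-keygen -t ed25519
  ssh-copy-id user@server.example.com

  # then on the server, in /etc/ssh/sshd_config:
  PasswordAuthentication no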


A lot of things can read ssh key files, including, sometimes, internet ads: http://www.zdnet.com/article/mozilla-urges-users-to-update-f...

Passwords can't really be gotten that easily.


Which is, among other reasons, why you encrypt your key with a passphrase.


Why not just use the passphrase in the first place? Or do you think an offline attack on a key passphrase is going to be more difficult than an online attack against a server with rate limiting configured?


2 simple lines in /etc/ssh/sshd_config fix the problem:

  PermitRootLogin no
  PasswordAuthentication no


I know I am preaching to the choir, but ANY login for any port or protocol should have rate limiting and non-default passwords.

Burp Suite can easily crack web logins if you allow hundreds of login attempts per second for a single user.


I always set up two things on all the servers that I am in charge of: fail2ban and disabling ssh password logins. There are still people who insist on having passwords and using them to access remote hosts, but thankfully there are fewer every year.

I also used to run http://www.symantec.com/connect/articles/slow-down-internet-... on some of my hosts. It was fun for a while but caused me to have one too many lockouts.


IMHO: keys should be the default. Password only used on a factory-reset device, and only usable to set a login key. Period.


To make matters worse, the Debian openssh .deb that I installed to get the client configured sshd to autostart, without asking.


Debian's policy is that if you install a service, you intend for it to run by default. Since services aren't installed by default, this is a fairly defensible position, though it's been commented on many times.

It's possible and fairly straightforward to disable autostart of services through either old-school SysV init or, I suspect, systemd.
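
For example, on a systemd-based Debian something like this should do it (the service is assumed to be named "ssh"; policy-rc.d is the blunter instrument that forbids packages from starting any service at install time):

  # stop sshd and keep it from starting at boot
  systemctl stop ssh
  systemctl disable ssh

  # or forbid all packages from starting services on installation
  printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d
  chmod +x /usr/sbin/policy-rc.d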

Upshot: running complex systems isn't entirely trivial.


Yes, a desktop computer is a complex system; that's why I leave this not-entirely-trivial plumbing to the system programmers. The downside is the exploited systems that potentially everyone has to suffer.


This story doesn't check out; I don't see a package named "openssh" (on Jessie), but there is a package "openssh-client" which only installs the client.

But regardless, starting sshd on installation shouldn't matter, because proper passwords and a firewall should be the first step.


Welp, the metapackage's just called ssh, which is also the name of the command the client provides. Another case of "only two problems in computer science ..."

Due diligence on my part was lacking, agreed.


Rate limiting on everything should be a default.


Why doesn't Linux do it by default, by the way? It wouldn't hurt desktop Linux either.


It does. PAM (which is a central piece of software handling authentication on Linux) makes you wait 1 second after each failed attempt.
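
If one second isn't enough, the delay can be stretched with pam_faildelay; a sketch (which pam.d file is right, and whether sshd honors it, depends on the distro and on UsePAM):

  # e.g. in /etc/pam.d/sshd or common-auth: wait 5 seconds after a failed attempt
  auth optional pam_faildelay.so delay=5000000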


"Attackers install it on Linux systems, including embedded devices such as WiFi routers and network-attached storage devices, by guessing SSH (Secure Shell) login credentials using brute-force attacks."

I'm seeing that - endless attempts to log in as root over SSH. It's apparently aimed at random IP addresses - it was hitting a newly installed server that wasn't even in DNS yet.

Here's what the attack looks like:

    Sep 30 00:06:39 s3 sshd[29144]: Failed password for root from 43.229.53.44 port 44450 ssh2
    Sep 30 00:06:42 s3 sshd[29185]: Failed password for root from 43.229.53.44 port 11229 ssh2
    Sep 30 00:06:42 s3 sshd[29188]: Failed password for root from 43.229.53.44 port 12106 ssh2
    Sep 30 00:06:44 s3 sshd[29185]: Failed password for root from 43.229.53.44 port 11229 ssh2
    Sep 30 00:06:44 s3 sshd[29188]: Failed password for root from 43.229.53.44 port 12106 ssh2
    Sep 30 00:06:46 s3 sshd[29185]: Failed password for root from 43.229.53.44 port 11229 ssh2
    Sep 30 00:06:46 s3 sshd[29188]: Failed password for root from 43.229.53.44 port 12106 ssh2
    Sep 30 00:06:48 s3 sshd[29201]: Failed password for root from 43.229.53.44 port 33446 ssh2
    Sep 30 00:06:49 s3 sshd[29212]: Failed password for root from 43.229.53.44 port 34570 ssh2
    Sep 30 00:06:50 s3 sshd[29201]: Failed password for root from 43.229.53.44 port 33446 ssh2
    Sep 30 00:06:51 s3 sshd[29212]: Failed password for root from 43.229.53.44 port 34570 ssh2
    Sep 30 00:06:52 s3 sshd[29201]: Failed password for root from 43.229.53.44 port 33446 ssh2
    Sep 30 00:06:53 s3 sshd[29212]: Failed password for root from 43.229.53.44 port 34570 ssh2


I don't think I've had a box up in the last decade where I haven't seen this. The brute force attempts against ssh and scans for vulnerable services are pretty constant. Commercial / datacenter / residential IP doesn't seem to make much of a difference. Unless you have a really bad password, it's generally not an issue as the rate of the scan is pretty low. You could always run fail2ban or denyhosts to keep your logs cleaner, firewall the port off to known IPs and/or set up port knocking, or change the port it listens on. (Note: doing any of those is only a trivial security increase at best; it's more to get rid of the log spam.)

At one point I had a script that parsed logs and added any ip with >= 10 failed attempts to an iptables chain which dropped all traffic. It would generally grow by a few hundred hosts every week.
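
Something in the spirit of that script, for the curious (the log path and threshold are assumptions; unlike fail2ban it never expires its bans):

  #!/bin/sh
  # ban every source with 10+ failed sshd password attempts
  iptables -N BANNED 2>/dev/null
  iptables -C INPUT -j BANNED 2>/dev/null || iptables -A INPUT -j BANNED
  awk '/sshd.*Failed password/ {print $(NF-3)}' /var/log/auth.log \
    | sort | uniq -c | awk '$1 >= 10 {print $2}' \
    | while read ip; do iptables -A BANNED -s "$ip" -j DROP; done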


That address is in an APNIC /8 (Asia) and, depending on whose whois you trust, it's in Japan. One of MANY defense-in-depth steps is to toss out all sshd traffic other than from certain trusted IP ranges. Unless your server is in Japan or you live in Japan, there is little reason for anyone in Japan to ever be able to ssh into your machine.

It's not entirely unreasonable to only run SSH (and other things) over a VPN connection. Again, this is defense in depth; it's hardly the only step required.

Another amusement is port knocking. No need to get all crazy and send 512-bit keys before a script temporarily opens the ssh port; remarkably little is usually enough, e.g. TCP SYN packets to SSH are only permitted for 5 minutes after a simple telnet to TCP port ABC or whatever.
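
A minimal single-knock version of that with iptables' recent module (the knock port 1234 is a placeholder):

  # a TCP SYN to port 1234 marks the source as 'knocked' (and is itself dropped)
  iptables -A INPUT -p tcp --dport 1234 -m recent --name knocked --set -j DROP
  # only 'knocked' sources may open new SSH connections for the next 5 minutes
  iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name knocked --rcheck --seconds 300 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j DROP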


I totally agree with putting SSH on a whitelist, such a great strategy.

The VPN is a little trickier, because you're basically just exchanging your exposure from SSHD to VPN (unless you're referring to something else). I'm not sure whether that's a good idea or a bad idea, but it's something to be aware of.

I've found that white-listing SSHD (and silently dropping instead of rejecting other traffic), along with Key-Login + disabled root login has been the best I can do for preventing anything.


SSH over VPN does a couple things. First it reduces your attack surface. Surely you must have a VPN, and you must have remote SSH access, but now the internet can only poke (directly) at the VPN and no longer (directly) at the SSHd.

Assuming someone centralizes and audits and updates authorized employee lists on the VPN server, there's no way J.Random.Admin who just got fired could have some ssh keys on some random server; well, he could, but IT chopped his VPN access as he walked out the door, so they're inaccessible.

It's very difficult to set up something like OpenVPN such that all someone needs to know is "password123" and he's in. It's fairly easy with ssh. Design to tolerate human error: you can't accidentally enable password auth instead of key auth if the system doesn't even support password auth.

It's nice to be able to document to the security guys that nothing, and I mean nothing, flows in or out of company property that doesn't go over the VPN.

Finally, although this is kinda lame, setting up SSH over VPN encourages people to do everything over the VPN: hey, see, it wasn't so hard to put SSHd on the VPN only, so why don't you remove internet-wide accessibility (insert exclamation points here) for MySQL and your application admin console thingy and your VNC server and ...

If you want to mess with people's minds, put ssh on a Tor hidden service. This freaks some people out. A bot port-scanning in China doesn't know what to think of your box only being accessible via a Tor hidden service. It's not hard to set up. Have fun with connect-proxy. Although this smells of security through obscurity, it's really just one of hopefully many layers of defense.


This is fairly common, if not ubiquitous.

Use key-based authentication, disable password auth in sshd_config and install sshguard.


That's been going on for years.

Move port from 22 to somewhere else.

Install fail2ban


It's also fun to change the encryption:

  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com

in sshd_config. From what I've seen, the brute forcers can't work with this. (If your sshd doesn't support these: upgrade.)


It's worth noting that essentially only OpenSSH clients will work with these settings, and newer ones at that.


so fail2ban, disabling root logins, and key-based authentication are the answer?


A large portion of it, yes.

Limiting SSH to specific IPs or netblocks, and/or specifically excluding those you're likely to never use, would also help cut down on the attack surface. Not that hosts within your perimeter don't get compromised, but there are far fewer of them.

2FA including keyfobs is yet another option.
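
One common way to do the 2FA piece is TOTP via the google-authenticator PAM module, roughly like this (Debian-ish package name and paths; details vary by distro and OpenSSH version):

  apt-get install libpam-google-authenticator
  google-authenticator        # run as the login user, scan the QR code

  # /etc/pam.d/sshd
  auth required pam_google_authenticator.so

  # /etc/ssh/sshd_config
  ChallengeResponseAuthentication yes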


Port knocking can also help.


Is this on a well-known cloud hosting provider? I assume so, unless "s3" indicates something else.

There have been attacks against known cloud hosting IP ranges for a long time. I'm constantly getting attempts on my personal server on common ports (e.g. 3128, looking for an open proxy to spam through); that's on DigitalOcean.


No, it's just the third server of a set, a leased dedicated bare metal CentOS server in a large data center in Phoenix AZ. No "cloud" service is involved.

This has been going on for about three weeks now. I may configure PAM to force a login delay; the logs are getting big.


While it doesn't give any real security, changing the ssh port will at least filter out these kinds of attacks.


Would you consider fail2ban in this case?


"Warning: Using an IP blacklist will stop trivial attacks but it relies on an additional daemon and successful logging (the partition containing /var can become full, especially if an attacker is pounding on the server). Additionally, if the attacker knows your IP address, they can send packets with a spoofed source header and get you locked out of the server. SSH keys provide an elegant solution to the problem of brute forcing without these problems."

source: https://wiki.archlinux.org/index.php/Fail2ban


> Additionally, if the attacker knows your IP address, they can send packets with a spoofed source header and get you locked out of the server.

No they can't. That is not how TCP works.


From memory, you can spoof UDP but not TCP.

https://serverfault.com/a/153619/91708


I don't think it's totally random. When my server got taken over I found a list of IP addresses that the script was in turn trying to brute-force. Someone made that list on purpose, though I don't know why or where the IPs came from.


Same thing I saw on my servers; it has been going on for more than two months. I've disabled all ssh password access since. They're also testing rhosts, probably to jump from one compromised box to another, so I disabled that as well.


Get a range of IP addresses you will log in from (your home DSL address ranges, for example), and block all other IP addresses on that port. You may want to add some other hosts you control to that whitelist as well, as a backup in case your ISP changes addresses. If you are the only one who is supposed to access some ports, then block everyone else. (Use last -a to see all the places you've logged in from recently to start making your whitelist.)
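
With iptables that's roughly the following (the addresses are documentation placeholders):

  # allow SSH from the home DSL range and one backup host, drop everything else
  iptables -A INPUT -p tcp --dport 22 -s 198.51.100.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -s 203.0.113.7 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP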

You can also rate limit new connections to ssh depending on your use cases. So even if your whitelist is beaten you limit the number of attempts.

Now add fail2ban, so that even at 10 new rate limited connections per minute they get banned after X failed attempts.

Change the default port of ssh, to make it harder to find.

Remove password authentication, and make sure you have good secure keys.

You can also disable sudo, and root logins perhaps. As well you could try 2 factor auth, and perhaps move your keys to a secure(encrypted) USB drive.

But really, whitelisting who can connect to ssh in the first place will stop 99% of the automated brute force attacks, with all the rest stopping the remaining 0.99%.


While this is a good idea to prevent folks from brute-forcing their way into your machine, the article is talking about DDoS attacks. If you have 150G pointed at your network the issue isn't going to be your servers. It's going to be congestion at your network links. Your SSH settings and Fail2Ban won't help at all in this case. You'll need something like CloudFlare's DDoS protection to identify and block DDoS requests from ever reaching your network.

EDIT: Oops, just saw that the article does talk about SSH brute-forcing. Your point is quite valid.


Thanks. That's very helpful information that's not easily gleaned by a-wanderin' around 'how to set up a server' tutorials. Simple things like 'last -a' are a fantastic little hint. I'd almost like to run that command on every ssh session login.


If you configure your SSH server for a limited, secure set of ciphers and HMACs, these automated attacks won't even get to the point of attempting authentication.

https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

Since following the above guide, my auth log has been filled with nothing but this:

    Sep 30 09:46:00 myserver sshd[74033]: fatal: no matching mac found: client hmac-sha1,hmac-sha1-96,hmac-md5,hmac-md5-96,hmac-ripemd160,hmac-ripemd160@openssh.com server hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com [preauth]

Of course, I can't use old SSH clients to connect, but it's a good tradeoff, IMO.


That's a very good hint, thanks.

There have only been a couple of remote ssh exploits (that I'm aware of) and both of them were stopped by whitelisting. If you can figure out your address ranges, I think it still makes sense to whitelist. I guess the bots will also catch up with modern ciphers eventually.


To what end would people do this?

> The most frequent targets have been companies from the online gaming sector, followed by educational institutions, the Akamai team said in an advisory...

Doesn't seem like you'd ever make any money doing that. I suppose one competitor could target another. Not sure if it's a valuable thing to do. I just dunno what the point of DDoS is besides ransom-type stuff.


Speaking from experience (of getting hit, not running one of these services!) - gaming is a particularly juicy target: you can set up a basic front-end that lets people knock certain IP addresses offline for x amount of time in exchange for money. Its use is quite common in certain games like League of Legends, CS:GO, etc.


I assume to gain an advantage in PvP? If so, how does the attacker identify the IP address they want to hit?


OK, DDoSing and lag-switching have been a big deal in the Destiny community recently, so here's what I know:

In Destiny, there's often a weekend-long PvP showdown with top-tier in-game rewards. The game type is 3v3 with no matchmaking, so you must bring your own team; the best rewards are for winning 9 in a row. If your team disconnects, it counts as a loss (no cowards here!).

The way Destiny is set up, one player's console is considered the "host" and all the other players connect to it. So if you or someone on your team (a 50-50 chance) is the host, you can see the IPs of everyone in the game by monitoring network traffic. If not, you can still see the host's IP.

Find out the IP of any opposing player, aim the botnet, fire. Boom. You've instantly got a one-player advantage. Repeat ad nauseam. Suddenly going 9-0 isn't so daunting.

The worst part? Unlike other network manipulations, this one is a lot harder for Bungie to prove. But you can bet they're working on it, and they aren't shy about wielding the banhammer if they find cheating.


Or to troll. A while ago, Skype had a vulnerability that let people resolve Skype names to IP addresses, and since Skype is the default communication tool of lazy gamers, it was pretty trivial to work out the correct IP address to poke. Other methods used to include hooking your Xbox 360 up to your laptop, bridging your connection, and running Cain and Abel to find the addresses of opponents that way in games like Halo etc. No idea what the scene is like now that we're on Xbones and PS4s; I pretty much exclusively game on PC now. There are plenty of services online that let you snatch IP addresses by sending a link to a 'friend'; you could then keep tabs on them that way.


> besides ransom-type stuff.

With next to zero investment necessary, collecting ransoms is lucrative enough to pay off.


And you won't hear about those pay-offs in the news... bad press for the victim company.


Calling it a Linux security issue is a bit erroneous; a weak SSH password on BSD will get you hijacked too. Or OS X, or even Windows!


Linux systems on the net tend to be servers with lots of available outgoing bandwidth, so they are very valuable targets for the DoS enthusiast, and they are quite common. Chances are this attack only targets Linux systems.

But, yeah, the headline is click bait. Linux systems are known to be quite secure so this is the old "Man Bites Dog" shtick.


Sure, but if the payload in this example is a Linux binary then it won't do anything to an OSX/BSD/Windows machine.

If the payload is an interpreted script, or something that's compiled (if it can find a compiler, etc) then it may be able to infect different OSes.


I wonder if Linux emulation on BSD / plan9 / cygwin will run it?


This is, in my experience, another interesting and unintentional side effect of a free OS.

When the OS is free, the engineers I've talked to who have been tasked with using it to implement their product are generally less experienced (which is to say less expensive). And the entire impression I get is that there is some cost minimization going on. Some sort of psychological trick: if it's an expensive OS you need some expert engineer to bend it to your will, but if it's free and seems to be everywhere, well, you can hire an intern to do all your integration, development, and testing.


Maybe, but out of the box a typical Linux distro is fairly insecure. There's no rate limiting of password guessing, no default password policy, no default fail2ban behaviors, usually no firewall enabled by default, no auto-updating service on by default (or any at all), etc.

Some popular Linux services are scarily insecure. Samba runs as root, which is a security nightmare considering it's a poorly implemented, reverse-engineered crapfest with a long history of security problems.

Popular FOSS projects hosted on Linux are bad too, like the recent massive hole in Drupal and the endless WordPress holes (I bet this recent one is WP-related). Not to mention Heartbleed, Shellshock, etc.

I don't think there's anything wrong with a cheap and common OS for junior people to use; it's just that the OS needs to be shipped with sane defaults. This will never happen in the typical world of FOSS for fear of breaking things and "keeping things simple" and "you should know what you're doing." The problem is many people don't know what they're doing, at least in terms of security. An authoritarian attitude regarding security that would tie developers' hands is the antithesis of FOSS culture.

I think the status quo of endless hacking is just the way it's going to be until everyone gets serious about security. What that means exactly is hard to say, but shifting to some type of memory-managed language (Rust, perhaps?) and sacrificing some performance for security are probably what it's going to look like. Even then, botnets aren't going to go away, but they might be small enough that they aren't able to do DDoSes like this regularly.


My VPS had a default username/password called testuser; I never actually noticed it existed until my server got taken over by some Chinese brute-force attack. Every time I rebuilt the server that user account was created; apparently it is part of the image used by my ISP.

My point is that many people probably have their servers taken over in a similar way without even realizing it.


I'm pretty curious: why is this particular botnet noteworthy? There are plenty of kids on EFnet capable of pushing that.


What's the best way to make sure that your system isn't part of this?


As others mentioned, prevent password logins. I also recommend fail2ban[0] for completely blocking IPs after detecting repeat failed access attempts.

[0] http://www.fail2ban.org/


Fail2Ban is great - I've used it for more than just blocking brute force attacks on ssh (although a real security expert might say this is the wrong tool to use).


Yep, I also use it to detect repeat errors on our own application logs and block offending IPs.

Fail2ban has reasonably easy-to-tweak detection and blocking rules, plus lots of ready-made ones that do the job. If you're comfortable with regular expressions (which most people on HN probably are), then it's really straightforward to write your own rules.
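
A custom rule is basically a regex filter plus a jail entry, along these lines (the "myapp" name, log path and failregex are made up for illustration):

  # /etc/fail2ban/filter.d/myapp.conf
  [Definition]
  failregex = authentication failure from <HOST>

  # /etc/fail2ban/jail.local
  [myapp]
  enabled  = true
  port     = 80,443
  filter   = myapp
  logpath  = /var/log/myapp/error.log
  maxretry = 5
  bantime  = 3600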

The only problem I encountered with it is when you start it up and you have a huge amount of data in your log files. It can cause 100% cpu usage for a long time until it digests the whole thing...



`PermitRootLogin = without-password` in /etc/ssh/sshd_config

Same thing for non-root: `AuthenticationMethods = publickey`

And when buying a router, buy something that will get regular security updates, or where you can put OpenWRT.


From openssh 7.0 release notes:

  * PermitRootLogin=without-password/prohibit-password now bans all
    interactive authentication methods, allowing only public-key,
    hostbased and GSSAPI authentication (previously it permitted
    keyboard-interactive and password-less authentication if those
    were enabled).

It mentions that previously, with without-password, it would still allow keyboard-interactive logins. Should be fairly easy to fake for a botnet!


I still prefer "PermitRootLogin=no".


However unrealistic it is, I still wish that open source projects would agree on what to call things.

The /etc/sudoers NOPASSWD and sshd without-password sound like the same thing, but are far from that.

I feel like they could have named it better.


Sudoers NOPASSWD means you don't have to type the password for that feature. Sshd without-password means passwords are disabled. Not the same thing.


I garbled my post :(

Pull the plug /s

Don't allow password-based SSH access.

https://blogs.akamai.com/2015/09/xor-ddos-threat-advisory.ht...

https://isc.sans.edu/forums/diary/XOR+DDOS+Mitigation+and+An... example (first Google result)


[deleted]


Makes one ponder the issue of monoculture...


Sorry, I tripped up.


If for some bizarre reason you allow remote root logins over ssh, be sure that the password is as strong as possible. Machine-generated random strings are good; at least 12 characters to be sure.

If you turn off root logins but still allow user logins, the remote attacking system will have no way to detect that fact and will still attempt to brute-force you.


I think monitoring your outgoing traffic would give you a clue. Also if what I read on Ars is correct, this botnet preys on weak root passwords, so disabling remote root or using keys would be great ways to protect yourself against this botnet.


This! And also, in sshd_config, disable password-based authentication. But first, make sure that key-based authentication works ;)
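
One way to check without locking yourself out: keep your current session open and, from a second terminal, force a key-only login (the host is a placeholder):

  ssh -o PasswordAuthentication=no -o PubkeyAuthentication=yes user@yourserver
  # if this gets you in, it's safe to turn PasswordAuthentication off and reload sshd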



Use a good password.


How do the botnets come to be named? Is it named by whichever security firm finds it?


There are no official rules on this; usually the botnets are named by their developers and renamed by AV vendors for marketing purposes.

This results in stupid shit like kaiten.c, which is a *nix bot from 20 years ago, now being known (and detected) as "OSX/Tsunami".

This is actually harmful, as it makes researching whatever infection you got harder when every AV vendor decides to give a very well-known piece of malware a different name.


The biggest attack surface is probably CGI scripts (PHP, etc.).



