Fishing for Hackers: Analysis of a Linux Server Attack (draios.com)
237 points by gighi on May 6, 2014 | 50 comments



OP may also benefit from an SSH honeypot. I use kippo (https://code.google.com/p/kippo/) with great success. It tracks all commands run and keeps copies of all downloaded files.

In addition, it limits the available commands to a predefined subset, which lets the host prevent the kind of damage (e.g. the DoS attack in this case) that a compromised system could otherwise cause.
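
If anyone wants to try it, a common way to run it without giving it root is to keep the real sshd on another port and redirect inbound port 22 to kippo's default listener (2222) with iptables; a minimal sketch, assuming the real sshd has already been moved:

    # real sshd moved to e.g. port 2200 in /etc/ssh/sshd_config,
    # kippo listening on its default port 2222
    iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2222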


How much success have you had?

I ran kippo for a while and it seemed that all attackers were trying to upload files over SCP, which kippo does not support. A few attackers resorted to logging in and downloading with wget. However, the vast majority of attacks ended with a failed SCP session.


Great question. I had this exact same problem before I came across/contributed to a fork of kippo by micheloosterhof that implemented SCP (and more).

Check it out here: https://github.com/micheloosterhof/kippo


Massive success for me using kippo.

The typical intruder in our case uses wget to pull down files and that works w/o a hitch.

If you have the time to run it AND check on it you'll learn a lot.


If you like this kind of stuff, http://www.honeyd.org/ is pretty full-featured as well and provides a lot more emulated services (http, ftp, network file shares, smtp). It is also built so that it is (relatively) easy to add your own emulated services.

Not to say that kippo isn't good; I didn't look too closely, but it seems to be mainly focused on SSH and terminal capture.
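
To give a flavor of how it works, here is a rough configuration sketch in honeyd's template syntax (the personality string, IP address, and script paths are placeholders, not something I've tested):

    create default
    set default personality "Linux 2.4.20"
    set default default tcp action reset
    add default tcp port 22 "sh scripts/ssh.sh"
    add default tcp port 80 "sh scripts/web.sh"
    bind 192.168.1.100 default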


That's an interesting project.

Would it also have recorded statistics like the connection activity?

Seeing all the UDP traffic, and being able to trace its origin to the "@udp1 39.115.244.150 800 300" command, received not via shell but via a TCP connection, was pretty cool.
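
For reference, pulling that out of a sysdig capture is roughly a one-liner (the trace file name below is just an example):

    # all UDP activity recorded in the capture
    sysdig -r trace.scap fd.l4proto=udp
    # narrowed down to a single process ("flooder" is a placeholder name)
    sysdig -r trace.scap "fd.l4proto=udp and proc.name=flooder"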


The original article is a bit unclear on this, but the command was not directly received from a TCP connection initiated by the botnet owner.

Rather, his server connected to some IRC server and joined a channel with all his other bot friends. The owner then sent the command to the IRC channel, and it was then broadcast to all the bots by whatever IRC server he is using.

(This is why lots of IaaS providers will forbid you from hosting an IRC server, and sometimes block all IRC traffic (by port anyway) on their networks)
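
At the protocol level it's just plain IRC; an illustrative transcript of what the bot would see (the nick and channel are made up, the command is the one from the article) looks roughly like this:

    NICK bot-x86-2841
    USER bot 0 * :bot
    JOIN #flood
    :owner!op@host PRIVMSG #flood :@udp1 39.115.244.150 800 300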


I don't think it measures the inbound/outbound bandwidth - especially over UDP. It's more of an SSH emulator (so to speak) with everything being logged (commands, files, etc.).

In addition, there are quite a few good visualization tools to show the logs made by kippo. You can save them to a db, plot them nicely, etc.


Ironic that the IP may now suffer another DoS attack because of HN readers trying to access it after reading the article.


I anonymized the IP addresses in a consistent way before publishing the blog post, so in the very worst case the DoS attack will go towards a completely new host :)


Another useful honeypot approach is Bitcoin Vigil, which will let you know if (bitcoin-stealing) malware has already made it onto the machine


Great article. I had not heard of sysdig previously.

Based on the timestamps of the entered commands, I guess one of the takeaways for the attacker is to look into config management tools (eg ansible) :)


Since you hadn't heard of sysdig before, you might also be interested in this article posted[1] a couple of weeks ago: http://bencane.com/2014/04/18/using-sysdig-to-troubleshoot-l...

[1]https://news.ycombinator.com/item?id=7622121


I hadn't either and thank you very much for that link!


So a DO/Rackspace/AWS VPS with a guessable root password can expect to be cracked in ~4 hours?

That's terrible!

AFAIK, AWS defaults to ssh-key logins with password logins disabled. Can someone comment about Rackspace/DO?


This is not limited to AWS/rackspace... Here's from my home PC:

    $ uptime
     22:09:07 up 30 days, 12:19,  2 users,  load average: 0,17, 0,09, 0,07
    $ sudo fail2ban-client status ssh-iptables
    Password:
    Status for the jail: ssh-iptables
    |- filter
    |  |- File list:        /var/log/messages
    |  |- Currently failed: 1
    |  `- Total failed:     1757
    `- action
       |- Currently banned: 0
       |  `- IP list:
       `- Total banned:     242
1757 attempts from 242 IP addresses in the past 30 days...
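
For anyone who wants the same setup, the jail behind that output is only a few lines; a sketch of the relevant jail config (paths and limits are whatever your distro ships, so treat this as an example rather than my exact config):

    [ssh-iptables]
    enabled  = true
    filter   = sshd
    action   = iptables[name=SSH, port=ssh, protocol=tcp]
    logpath  = /var/log/messages
    maxretry = 5
    bantime  = 600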


My home NAS is also exposed to the internet.

    up 298 days, 20:42,  1 user,  load average: 0.00, 0.01, 0.05
"zgrep ssh auth.log* | grep -i failed" has no traces of any intrusion attempts whatsoever, just me not being able to type.

The distinction is, though, that the SSHd on that box is running on a non-standard port (220)... so that certainly makes a difference.
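
For reference, moving it is a one-line change in /etc/ssh/sshd_config plus a restart (the port number is just what I happen to use):

    # /etc/ssh/sshd_config
    Port 220
    # then restart sshd, e.g. on Debian/Ubuntu:
    service ssh restart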


Absolutely not. As I said in the article, I deliberately went against the "wise" defaults and manually:

1) enabled password authentication 2) enabled root login over SSH 3) changed the root password to "password"

All the providers offer fairly safe defaults, either using very random passwords or just enabling SSH keys.
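
Concretely, the weakening boils down to a couple of sshd_config lines plus a trivial password; roughly this (obviously don't do it on a box you care about):

    # /etc/ssh/sshd_config -- deliberately weakened
    PermitRootLogin yes
    PasswordAuthentication yes
    # then set the trivial password
    echo "root:password" | chpasswd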


I covered that with 'guessable root password'

It is good to know that all providers have safe defaults, I only have experience with AWS in that regard.


In my case it was 5 hours (http://www.fduran.com/blog/honeypots/). That's just another anecdote, but if you put up a server with an obvious dictionary SSH password, expect it to be compromised within hours.


At least in the early days of EC2, there were more than a few higher profile AMIs with password logins enabled, e.g. some of Oracle's AMIs used a trivial default password.

As far as I know, you can still create/publish AMIs where password auth is enabled, but all of Amazon's stock images only allow ssh-key auth.


Genius idea. Love it. Shared it with my favourite web host. I hope more security companies think like you do and do this type of reverse-phishing on the bad guys ;)


Most security companies do this; the term for a monitored, weakly-secured server like this is a "honeypot". It's a great way to find out about exploits in the wild, which is really valuable knowledge for every hat.

Having said that, it's amazing what OP could do with just one monitoring tool. Very impressive.


There is a company called Smart Honeypot (http://smarthoneypot.com) offering this service. I believe they use a combination of these techniques to track attackers. The point is to deploy so many of these honeypots that it becomes economically difficult for attackers to freely run these scripted attacks. Imagine if, out of 100 SSH attempts, 90% hit a honeypot. That would massively waste attackers' time and effort.


Cool article! A friend and I once did something similar: we recorded the commands attackers ran and replayed them on a big TV in our office. We called it the hacker fishtank.


That sounds cool! Did you post the presentation anywhere?


I had a similar experience to the OP years ago. Luckily, the attacker forgot to erase the bash history, so I could recover almost all the command lines. It seems that most of these operations were pretty standardized: they first downloaded a bunch of exploits from another cracked site, then in my case tried a local exploit against a hole in the kernel (it was a Linux 2.4 box whose privilege escalation bug had been fixed just weeks before). I had a patch applied to the kernel so it didn't succeed. Then they started an IRC bot under a disguised name (like /usr/X11/X or something) and left. I felt embarrassed, but in hindsight it was a pretty good lesson.


For those who don't know sysdig: http://www.sysdig.org/


Fascinating! A few questions:

Wouldn't it have been better if the attacker had removed only the last few lines recording his commands from the log files instead of the entire files? Wouldn't the lack of continuity in the log files be very noticeable?

Also, is this a script running this sequence of commands or an actual person?

And, is there a log somewhere on the system of 'make' activity?


To answer your questions:

1) Yes, it would have been better, but I honestly think this attack was completely botnet-driven and the attacker didn't really mean to cover his tracks too much: in the span of 10 minutes, he sent over 800 MB of UDP traffic. That would have been caught pretty quickly even by the most oblivious sysadmin, so these guys are just playing a numbers game, trying to break into as many hosts as they can, knowing that the lifespan of each hacked host will be very short, and maximizing short-term profit.

2) The attacker ran these commands directly on the login shell (no script was copied over with scp or anything else), so there was no script executed on the host itself. However, the whole thing lasted roughly 2 minutes and a lot of commands were "typed", so I am almost sure this was just an automated script run from another, probably compromised, host.

3) I didn't check if the build left logs, but by showing every executed process with "evt.type=execve" (which goes deeper than the spy_users chisel) you can see all the processes executed by the build: 99% are just uninteresting sed/gcc/autoconf.
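
For reference, the two views boil down to something like this against the capture file (the trace file name is just an example):

    # every process spawned during the capture, with its arguments
    sysdig -r trace.scap -p"%proc.name %proc.args" evt.type=execve
    # vs. the higher-level interactive view used in the post
    sysdig -r trace.scap -c spy_users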


I'm new to this area but am really interested. What do people usually use other than sysdig to try to see if/how their machine has been compromised?


Love this kinda stuff! Glad that there are people out there actively looking for the latest and greatest threats to the internet.


Awesome post! One question... Are you mounting your S3 bucket on the server? If so, are you using s3fs? Thanks!


Yes, I mounted the bucket using https://github.com/s3fs-fuse/s3fs-fuse
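
The mount itself is a one-liner once the credentials file is in place; roughly this (bucket and mountpoint names below are made up):

    # credentials in ~/.passwd-s3fs as ACCESS_KEY_ID:SECRET_ACCESS_KEY, chmod 600
    s3fs my-capture-bucket /mnt/captures -o passwd_file=~/.passwd-s3fs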


Great article, thanks. Is sysdig ready for production servers? Has it been tested so that it doesn't introduce new issues?


I'm one of the sysdig creators. Sysdig is a pretty young project (we released it around a month ago), so I can't promise it will be flawless in a production environment. However, we've had many installations across several different environments, and in the month since the release we've had extremely few crash reports, which we've worked to fix right away.


By 'crash reports' do you mean the kernel/host?


I mean overall, so that includes kernel crashes.

Kernel-wise, the main thing to report is a couple of crashes on non-mainline kernels like OpenVZ.


Wow, a Romanian script kiddie at work. This kind of modus operandi is so 2004. Nice tool showcased, though.


(Since HN isn't letting me post a reply under chippy1337's comment, I'll post it here.)

I found chippy's comment interesting and helpful and don't know why it was downvoted to hell while other "+1"-style comments (that didn't add any value) are left as-is: https://twitter.com/taoeffect/status/464090445677481985


'live capture not supported on OSX' - not surprising, I guess.


Great read, definitely gave me some ideas for securing my systems.


Good job! sysdig is quite a handy tool.


Interesting article, nice idea!


good job!! nicely done!


Great article! Thanks!


But how did they enter? A brute-force attack?


Yes. I didn't put it in the article because it was getting too long otherwise, but the attacker immediately tried brute-forcing the root account, and after a handful of common passwords ("qwerty", "qwerty123", and "pizza" among them) he found "password".

I was able to find all the attempts by looking at the I/O activity of the sshd process, and the syslog activity also recorded every attempt.
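
For the curious, the sshd I/O can be pulled out of the capture with something along these lines (the trace file name is again just an example):

    # dump everything sshd read and wrote during the capture
    sysdig -r trace.scap -A -c echo_fds proc.name=sshd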


Doesn't your system refuse root login over SSH by default? If I remember correctly, on Ubuntu Server sshd is configured by default to not allow root login from remote addresses.


On some providers yes, in fact I explicitly enabled root SSH login for those.

Other providers (such as Digital Ocean) use the root account by default even for Ubuntu, although the password is set to a really secure and random one.



