
It doesn't matter that much, honestly.

I only use root for administration tasks: filesystem stuff, hardware, server config. All the goodies are in my homedir. Exfiltration is exactly that easy. Running bad binaries is as easy as running them under my username.

In the end, there are no protections on what my username can do to files owned by my user. And that's why a nasty tool that:

     1. generates a priv/pub keypair using gpg
     2. emails the private key elsewhere and deletes it
     3. encrypts everything it can grab in ~
     4. pops up a nasty message demanding money
works so easily, and so well.

The only thing I know of that can thwart attacks like this is Qubes, or a well-configured SELinux. But SELinux then impedes usage (down the rabbit hole we go).

Edit: Honestly, I'm waiting for a command and control that lives exclusively in Tor, emails keys only through a Tor gateway, and also serves as a slave node to control and use. I could certainly see an "If you agree to keep this application on here, we will give you your files back over the course of X duration".

There are plenty more nefarious ways this all can be used to cause more damage and "reward" the user with their files back, by serving as a slave node for further infection. IIRC, one of these malware tools granted access to your files if you screwed over your friends and they paid.




The thing is that, at least on the Mac, there can easily be protections on what your username can do to files owned by your user. There's an extensive sandboxing facility which limits apps to touching files within their own container, or files explicitly chosen by the user. All apps distributed through the App Store have to use it, and apps distributed outside the App Store can use it as well, but don't have to.

As I see it, the problem on the Mac boils down to:

1. Sandboxing your app is often a less-than-fun experience for the developer, so few bother with it unless they're forced to (because they want to sell in the App Store).

2. Apple doesn't put much effort into non-App-Store distribution, so there's no automatic checking or verification that sandboxing is enabled for a freshly-downloaded app. You have to put in some non-trivial effort to see if an app is sandboxed, and essentially nobody does.

I think these two feed on each other, too. Developers don't sandbox, so there's little point in checking. Users don't check, so there's little point in sandboxing. If Apple made the tooling better and we could convince users to check and developers to sandbox whenever practical, it would go a long way toward improving this.


What improvements to the developer experience for the Mac sandbox do you think are needed? If you get access to files through an open dialog, you're almost automatically set (and with a few lines of code you can even maintain access to those files). If you do something more complicated, you can write specific sandbox exceptions (as long as you don't want to distribute on the App Store). Privilege separation is also very easy to implement via XPC (complete with automatic proxy objects).
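
For concreteness, maintaining access usually means a security-scoped bookmark. A minimal sketch (the "SavedFile" defaults key is my own invention, and the app needs the com.apple.security.files.bookmarks.app-scope entitlement):

    import AppKit

    // Ask the user to pick a file; the sandbox is extended around the selection.
    let panel = NSOpenPanel()
    if panel.runModal() == .OK, let url = panel.url {
        // Persist access across launches with a security-scoped bookmark.
        if let bookmark = try? url.bookmarkData(options: .withSecurityScope,
                                                includingResourceValuesForKeys: nil,
                                                relativeTo: nil) {
            UserDefaults.standard.set(bookmark, forKey: "SavedFile")
        }
    }

    // On a later launch, resolve the bookmark and re-enter its scope before use.
    if let data = UserDefaults.standard.data(forKey: "SavedFile") {
        var stale = false
        if let url = try? URL(resolvingBookmarkData: data,
                              options: .withSecurityScope,
                              relativeTo: nil,
                              bookmarkDataIsStale: &stale),
           url.startAccessingSecurityScopedResource() {
            defer { url.stopAccessingSecurityScopedResource() }
            // ... read or write the chosen file here ...
        }
    }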

I think most apps don't sandbox not because it's especially hard, but just because it never occurs to the developers.


As noted in another comment, the macOS app sandbox is buggy and unnecessarily rigid in its permissions/capabilities. For many classes of apps, sandbox use is highly impractical or even impossible.

If these issues were fixed I believe that sandboxing would quickly become the norm. Many of us want to use the sandbox but don't want to waste too much effort fighting it.


> For many classes of apps, sandbox use is highly impractical or even impossible.

Worst case, you can see exactly what is being blocked in Console and then add word-for-word exceptions via the com.apple.security.temporary-exception.sbpl entitlement. You can also switch to an allow-by-default model by calling sandbox_init manually.

Even if the sandbox doesn't work for your entire app, you can use XPC to isolate more privileged components in either direction (i.e. your service can be more or less privileged than your main app). What specific abilities are not provided that you think would help?
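
For reference, privilege separation with NSXPC is only a few moving parts. A rough sketch (the protocol and service names are invented for illustration):

    import Foundation

    // Protocol shared between the app and the separately sandboxed XPC service.
    @objc protocol Unpacker {
        func unpack(_ archive: Data, reply: @escaping (Data?) -> Void)
    }

    // In the main app: connect to the bundled service and get an automatic proxy.
    let connection = NSXPCConnection(serviceName: "com.example.MyApp.Unpacker")
    connection.remoteObjectInterface = NSXPCInterface(with: Unpacker.self)
    connection.resume()

    if let proxy = connection.remoteObjectProxy as? Unpacker {
        proxy.unpack(Data()) { result in
            // Risky work happened in the isolated process; use the result here.
        }
    }

The service side implements the same protocol and hands connections out from an NSXPCListener; each side can then carry its own set of entitlements.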


I don't think that this is correct. There are a lot of things that sandboxed apps can't do, even with exceptions. One such example is opening unix sockets -- a sandboxed app can only open sockets inside its sandbox. This alone rules out a large class of apps. Shared memory is another problem. (These two currently prevent me from shipping Postgres.app on the Mac App Store.)

Using sandbox_init manually sounds like it should be possible in theory, but it is way too complicated in practice. There is barely any documentation on it, and you'd need to be familiar with macOS at a very low level to effectively use it -- which is highly unlikely for application software developers.


You can allow access to a unix socket via things like:

    (allow network-outbound (remote unix-socket (path-literal "/private/var/run/syslog")))
Similarly you can allow use of shared memory:

    (allow ipc-posix-shm)
Most of the rule types are documented here[1]. Even for the ones that aren't, the error message in the logs uses the same syntax (e.g. if a unix socket is blocked you'll get a complaint about "network-outbound"). You mostly just need to be able to copy and paste.

[1]: https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sand...


Isolation via XPC is a good idea, but it's also a good chunk of overhead. For one-man indie apps/side projects (which a lot of Mac apps are), it's a lot of added effort and room for error for barely visible benefit, and it's potentially problematic for scenarios requiring high throughput.

For examples where (at least to my knowledge) the macOS sandbox isn't flexible enough, consider trying to write a reasonably capable file manager or terminal that works within the sandbox's bounds. Or even a simple music player capable of opening playlist files which could point to music files sitting anywhere – not just the user's home directory or the boot volume but anywhere on the local file system.


For the music player, you can whitelist files via extension:

    (allow file-read* (regex #"\.mp3$"))
For a file manager, you can limit it to reading file metadata for any file:

    (allow file-read-metadata)


The problem still is someone thinking they're running your sandboxed application, not thinking too much about it, typing in the admin password to continue, and only then finding out they've installed some nasty malware.


My Mac is set up so that my most critical Applications/ and Documents/ are not modifiable without permission. This was tested once when I ran a shell script that accidentally evaled "rm ~/*" due to an error in string concatenation.

True story. My files were fine (although my heart jumped a bit).


For most non-developer users, there are few if any applications they use that both did not come with the system and need to write any files beyond those the user explicitly asks them to write, temporary files, and settings files.

Even most applications that they use that did come with the system, such as web browsers, have a quite limited set of files they should be writing. Browsers, for example, will need to write in the user's downloads directory, anywhere the user explicitly asks to save something, in their cache directory, in their settings file, and in a temporary files directory.

It's also similar for most third party applications they will use, such as word processors and spreadsheets.

It seems it should be possible to design a system that takes advantage of this to make it hard for ransomware and other malware that relies on overwriting your files, yet without being intrusive or impeding usage.


And the way Apple handles this for sandboxed applications is by hosting the open and save panels in a separate, privileged process, and extending the app sandbox around the selections made by the user as necessary. It's pretty neat.


Yes, see http://www.erights.org/talks/polaris.pdf from 2006 for a design like that. (I'm pointing to how it linked piercing the sandbox to normal user interactions with system-provided file-save dialogs and such; their way of sandboxing Windows XP isn't very relevant now.)

Nowadays there's Sandstorm with a similar model for networked apps. https://sandstorm.io/how-it-works


"The only thing I know that can thwart attacks like this is Qubes, or a well setup SELinux.. But SELinux then impedes usage. (down the rabbit hole we go)."

Or the easier method.

rdiff-backup + cron job. Or Duplicity. Or Tarsnap. Or CrashPlan. Or...

That is to say: backups with multiple stored versions, on another system where the (infected) client does not have direct write access. Ransomware can infect my home directory if it wants to. A fire can burn down my house. Zaphod Beeblebrox can cause my hard drive to experience a spontaneous existence failure. But I've got off-site automatic backups, so I'll never pay the ransom. (I will pay more over time for the off-site storage, but given that I'd pay for that anyway to guard against natural disasters / disk failure / etc., it's not really an added cost.)


>But I've got off-site automatic backups, so I'll never pay the ransom.

That's irrelevant though if they can also grab all your credentials, stuff in the Keychain, et al. -- as they apparently did with the HandBrake malware.


An easy way I've found that works for me:

- Backup files are encrypted with gpg.

- Backups are pulled by the local backup server, using a backup account that only has read-only access to the directories you need to back up.

- Push to a remote backup server with versioning (I'm using rclone with S3; if you need to back up large amounts this could potentially get too expensive).

You can restrict the s3 credentials so that the user pushing from your server isn't able to permanently delete any files.

There are plenty of other options out there; the key takeaways are a staging server for offsite backups and the principle of least privilege.


That’s why offline backups are useful as a complement. For example, Arq can do backups to an external hard drive.

Tarsnap can be configured so that a special key is needed to delete backups.


Backups don't stop someone from copying your data and credentials.


Is SELinux that hard? I have been running with Enforcing on my laptop for the last 8 months, and usually I can make an SELinux error go away by following the directions in the SELinux alert popup (or by searching for SELinux alerts from the CLI).

I used to be in the boat where my first instinct was to disable SELinux, but I must say it wasn't that hard.


SELinux in general isn't that hard imo, but it also wouldn't stop this attack in the default configuration.

You might consider writing a custom SELinux policy such that only the git executable can access the .git directory. This would be a much more useful mitigation against this attack, but it would also raise the difficulty barrier significantly.


I think a big problem is that good documentation about SELinux is hard to find. When I was looking for how to allow nginx to work as a reverse proxy, most of the 'solutions' were just 'turn off SELinux'. It took me a while to find the permission I needed to grant (it was a one-liner in the terminal in the end).

This was on a server so no popup - you also need to know where to look (/var/log/audit/audit.log) to actually work out what is causing the 'Bad Gateway' error in nginx.


This was my problem, too.

At work, I have written an SELinux module for our Java application servers. It properly reduces the permissions from the system domain for the Tomcat startup procedure, and then drops further permissions once the startup procedure actually executes a Java binary. This two-step process is mostly necessary because the Tomcat startup executes a bunch of shell scripts to set up variables, but I don't want to give the application server any execute rights.

Conceptually, it's not hard to build such a module with some careful consideration about the files and permissions the process needs at different stages - I was surprised by this. But getting this module to work properly was a real hassle, because there's very little practical documentation on this.

Quite sad, actually. I want to be as smug as Red Hat about SELinux stopping pentesters cold.


It probably isn't, with sufficient training. Taking a crack at it as a weekend project, it was non-trivial getting a decent i3 desktop up and running without a lot of cruft. Much of what you expect to just work, just doesn't. To be fair, I'm more native in Debian, or Arch to some degree, so that probably had a lot to do with my difficulties. Regardless, it made me give up and just follow better security practices on my Debian installs until I have time to dedicate to further investigation.


How do you mean knowing Debian/Arch would hamper your effectiveness with SELinux? Can't you just apply the SELinux hardened kernel in Arch/Debian or whatever else?


Fair point. When we talk about SELinux in the office, it's always in the Red Hat/Fedora ecosystem, so I conflated the two.


> Edit: Honestly, I'm waiting for a command and control that lives exclusively in Tor, emails keys only through a Tor gateway, and also serves as a slave node to control and use

Correct me if I'm wrong, but most ransomware is operated almost completely through Tor. Doing email this way may be a problem (for obvious reasons), but for anonymity and uptime's sake most rely on it pretty heavily.


Oh yeah, a Bitcoin/Tor gateway is how they're doing it. But I'm not seeing any sort of botnet functionality, at least in WannaCry.

Or worse yet, I can see a daemon sitting around, snarfing juicy details and exfiltrating them. Along with that, it could contribute to a booter network. And as a near-last resort, it encrypts everything to extract more out of the user. It can then monetize even that by acting as an infector and staying on the network (not reformatting).

Another thing that goes along with this infector idea is using OnionBalance to run a load-balanced onion site to promote and speed up various "things". Since we're dealing with the illegal, well, there's plenty that could be leveraged to host.

Yes, I do a lot of things in Tor onionland. All of my network exists in there, as does control of many of my services, MQTT, database, and more. This is how I use it: https://hackaday.io/project/12985-multisite-homeofficehacker...


Yeah, it's definitely interesting. I wonder if ransomware developers just don't overlap much with botnet developers. It has to be pretty hard to find customers to really make money running a botnet unless you're already deep in that industry.

It's cool for a variety of technical reasons, but if you just want to run a booter, you're better off using reflection attacks today than a botnet. Things like proxying web traffic through random home machines, performing layer 7 attacks on webapps, etc. are pretty nice from a technical perspective, and I think a lot of tech people can appreciate them in that respect.

But that's pretty much where it ends. They don't make easy money like ransomware does. Ransomware produces customers, doesn't require hard business side work to acquire them, doesn't have competition, etc. From a business perspective, ransomware is just better.

EDIT: Your Tor automation solution seems pretty cool - do you use a VPN to authenticate things or are you relying on the privacy of your .onion names?


> EDIT: Your Tor automation solution seems pretty cool - do you use a VPN to authenticate things or are you relying on the privacy of your .onion names?

Thank you. Nope, no VPN. I run 2 types of onion sites. One side is for services like Mosquitto, the DB, and Node-RED. The other side is an "onion with password", i.e. HiddenServiceAuthorizeClient in the torrc file. I use that for the SSH backend. That means you need to know the onion site, key, username, password, and root password in order to escalate and gain control of the machine.

I'm also experimenting with things like GUN for the kinds of databases that can live between them. Once I have a stable distributed database between my nodes, I can start building webapps where the endpoints start and end in Tor.


Do you have any auth on the MQTT? If I recall correctly onion names are basically public.


Sure do. Login/password with a self-signed cert. I'd have preferred to go with a proper cert attached to hash.onion, but evidently only Facebook can afford such luxuries...

On a side note, I thought about using OnionBalance, a DB, and Boulder, making my own OnionCA, and talking with the EFF about funding assistance. Frankly, having no CA just stinks, and I want to do something about it. I do know that the onion hash is derived from the hidden service's public key... but there has to be a better way than this.


There's really no need for a certificate on an onion name - onion names are already the hash of your public key. Tor validates everything for that already and if someone else can compromise your onion name, they could just produce a certificate for it anyways.

Unlike with regular HTTP vs. SSL, Tor provides confidentiality, integrity, and host authentication simply by connecting to the right name.
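
To make the "names are hashes" point concrete, here's a rough sketch of how a (v2) onion name is derived: base32 of the first 80 bits of the SHA-1 of the service's DER-encoded public key. The key bytes below are a placeholder, not a real key:

    import Foundation
    import CryptoKit

    // Placeholder: in reality this is the DER-encoded RSA-1024 identity key
    // from the hidden service's private_key file.
    let derPublicKey = Data("placeholder public key bytes".utf8)

    // A v2 onion address is the first 80 bits (10 bytes) of SHA-1(pubkey)...
    let digest = Insecure.SHA1.hash(data: derPublicKey)
    let truncated = Data(digest.prefix(10))

    // ...base32-encoded (RFC 4648 alphabet, lowercase), giving 16 characters.
    let alphabet = Array("abcdefghijklmnopqrstuvwxyz234567")
    var acc = 0, bits = 0, address = ""
    for byte in truncated {
        acc = (acc << 8) | Int(byte)
        bits += 8
        while bits >= 5 {
            bits -= 5
            address.append(alphabet[(acc >> bits) & 31])
        }
    }
    print("\(address).onion")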


Backups. Just f###### do 'em.


Backups don't help you avoid credential compromise.


They do if you set them up right.


How?

I've got three separate encrypted copies of my homedir spread across two different locations and a fourth snapshot taken once a week on a drive that's physically powered down when not in the middle of a backup - and I've regularly tested restoring from each of them.

How does any of that help when malware grabs my .git, .aws, .gnupg, .keepassx, etc. directories from my running system - and unknown 3rd parties start downloading (or worse, backdooring) my "private" stuff from GitHub?


Explain how please. All a backup can do is restore your system to a previously known state. It can't unring the bell when it comes to your data being possessed by a bad actor...


Yup... the long-standing UNIX user privilege separation security model is obsolete. We need inter-app privilege separation, as is being experimented with on mobile phones.


Experimented? No, they inherited it from UNIX; that's how you do it there. UIDs are not necessarily human users; they are just as likely to be applications. Look at Postfix, OpenSSH, vsftpd, or any other software that bothers to limit its capabilities. They all have UIDs allocated in the system.



