A Case of Stolen Source Code (panic.com)
527 points by uptown on May 17, 2017 | 215 comments



> …breeze right through an in-retrospect-sketchy authentication dialog…

I can't blame them for this. A surprising number of apps ask for root (inc. Adobe installers and Chrome). As far as I know, it's to make updates more reliable when an admin installs a program for a day-to-day user who can't write to /Applications and /Library.

We're long overdue for better sandboxing on desktop (outside of app stores).


It doesn't matter that much, honestly.

I only use root for administration tasks: filesystem stuff, hardware, server config. All the goodies are in my homedir. Exfiltration is as easy as that, and running bad binaries is as easy as running them under my username.

In the end, there are no protections on what my username can do to files owned by my user. And that's why a nasty tool that:

     1. generates a priv/pub keypair using gpg
     2. emails the priv key elsewhere and deletes it locally
     3. encrypts everything it can grab in ~
     4. pops up a nasty message demanding money
works so easily, and so well.

The only thing I know that can thwart attacks like this is Qubes, or a well setup SELinux.. But SELinux then impedes usage. (down the rabbit hole we go).

Edit: Honestly, I'm waiting for a Command and Control to be exclusively in Tor, email keys only through a Tor gateway, and also serve as a slave node to control and use. I could certainly see a "If you agree to keep this application on here, we will give you your files back over the course of X duration".

There are plenty more nefarious ways this all can be used to cause more damage and still "reward" the user with their files back, by acting as a slave node for further infection. IIRC, there was one of these malware tools that granted access to your files if you screwed over your friends and they paid.


The thing is that, at least on the Mac, there easily can be protections on what your username can do to files owned by your user. There's an extensive sandboxing facility which limits apps to touching files within their own container, or files explicitly chosen by the user. All apps distributed through the App Store have to use it, and apps distributed outside the App Store can use it as well, but don't have to.

As I see it, the problem on the Mac boils down to:

1. Sandboxing your app is often a less-than-fun experience for the developer, so few bother with it unless they're forced to (because they want to sell in the App Store).

2. Apple doesn't put much effort into non-App-Store distribution, so there's no automatic checking or verification that sandboxing is enabled for a freshly-downloaded app. You have to put in some non-trivial effort to see if an app is sandboxed, and essentially nobody does.

I think these two feed on each other, too. Developers don't sandbox, so there's little point in checking. Users don't check, so there's little point in sandboxing. If Apple made the tooling better and we could convince users to check and developers to sandbox whenever practical, it would go a long way toward improving this.
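For what it's worth, a manual check is already possible from the terminal, just not something an average user will ever do (a sketch; the app path is an example):

    codesign -d --entitlements :- /Applications/SomeApp.app 2>/dev/null \
        | grep -q com.apple.security.app-sandbox && echo sandboxed || echo "not sandboxed"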


What improvements to the developer experience for the Mac sandbox do you think are needed? If you get access to files through an open dialog, you're almost automatically set (and with a few lines of code you can even maintain access to those files). If you do something more complicated, you can write specific sandbox exceptions (as long as you don't want to distribute on the App Store). Privilege separation is also very easy to implement via XPC (complete with automatic proxy objects).

I think most apps don't sandbox not because it's especially hard, but just because it never occurs to the developers.


As noted in another comment, the macOS app sandbox is buggy and unnecessarily rigid in its permissions/capabilities. For many classes of apps, sandbox use is highly impractical or even impossible.

If these issues were fixed I believe that sandboxing would quickly become the norm. Many of us want to use the sandbox but don't want to waste too much effort fighting it.


> For many classes of apps, sandbox use is highly impractical or even impossible.

Worst case, you can see exactly what is being blocked in Console and then add word-for-word exceptions via the com.apple.security.temporary-exception.sbpl entitlement. You can also switch to an allow by default model by using sandbox_init manually.

Even if the sandbox doesn't work for your entire app, you can use XPC to isolate more privileged components in either direction (i.e. your service can be more or less privileged than your main app). What specific abilities are not provided that you think would help?
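(For reference, a rough way to watch denials outside of Console.app while exercising the app; a sketch, and the predicate may need tweaking:)

    log stream --style compact --predicate 'eventMessage CONTAINS "deny"'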


I don't think that this is correct. There are a lot of things that sandboxed apps can't do, even with exceptions. One such example is opening unix sockets -- a sandboxed app can only open sockets inside its sandbox. This alone rules out a large class of apps. Shared memory is another problem. (These two currently prevent me from shipping Postgres.app on the Mac App Store.)

Using sandbox_init manually sounds like it should be possible in theory, but it is way too complicated in practice. There is barely any documentation on it, and you'd need to be familiar with macOS at a very low level to effectively use it -- which is highly unlikely for application software developers.


You can allow access to a unix socket via things like:

    (allow network-outbound (remote unix-socket (path-literal "/private/var/run/syslog")))
Similarly you can allow use of shared memory:

    (allow ipc-posix-shm)
Most of the rule types are documented here[1]. Even for the ones that aren't, the error message in the logs uses the same syntax (e.g. if a unix socket is blocked you'll get a complaint about "network-outbound"). You mostly just need to be able to copy and paste.

[1]: https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sand...


Isolation via XPC is a good idea, but it's also a good chunk of overhead. For one-man indie apps/side projects (which a lot of Mac apps are), it's a lot of added effort and room for error for barely visible benefit, and it's potentially problematic for scenarios requiring high throughput.

For examples where (at least to my knowledge) the macOS sandbox isn't flexible enough, consider trying to write a reasonably capable file manager or terminal that works within the sandbox's bounds. Or even a simple music player capable of opening playlist files which could point to music files sitting anywhere – not just the user's home directory or the boot volume but anywhere on the local file system.


For the music player, you can whitelist files via extension:

    (allow file-read* (regex #"\.mp3"))
For a file manager, you can limit it to reading file metadata for any file:

    (allow file-read-metadata)


The problem is still someone thinking they're running your sandboxed application, not thinking too much about it, typing in the admin password to continue, and only then finding out they installed some nasty malware.


My Mac is set up so that my most critical applications/ and documents/ directories are not modifiable without permission. This was tested once when I ran a shell script that accidentally evaled "rm ~/*" due to an error in string concatenation.

True story. My files were fine (although my heart jumped a bit)
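(The commenter doesn't say how; one way to get a similar effect on macOS is the user-immutable "locked" flag, so this is just a sketch of one possible setup:)

    chflags -R uchg ~/Documents      # lock: even the owner can't modify or delete contents
    chflags -R nouchg ~/Documents    # unlock when you actually need to make changes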


For most non-developer users, there are few if any applications they use that both did not come with the system and need to write any files other than the ones the user explicitly asks them to write, temporary files, and settings files.

Even most applications that they use that did come with the system, such as web browsers, have a quite limited set of files they should be writing. Browsers, for example, will need to write in the user's downloads directory, anywhere the user explicitly asks to save something, in their cache directory, in their settings file, and in a temporary files directory.

It's also similar for most third party applications they will use, such as word processors and spreadsheets.

It seems it should be possible to design a system that takes advantage of this to make it hard for ransomware and other malware that relies on overwriting your files, yet without being intrusive or impeding usage.


And the way Apple handles this for sandboxed applications is by hosting the open panel and save panels in a separate, privileged process, and extending the app sandbox around the selections made by the user as necessary. It's pretty neat.


Yes, see http://www.erights.org/talks/polaris.pdf from 2006 for a design like that. (I'm pointing to how it linked piercing the sandbox to normal user interactions with system-provided file-save dialogs and such; their way of sandboxing Windows XP isn't very relevant now.)

Nowadays there's Sandstorm with a similar model for networked apps. https://sandstorm.io/how-it-works


"The only thing I know that can thwart attacks like this is Qubes, or a well setup SELinux.. But SELinux then impedes usage. (down the rabbit hole we go)."

Or the easier method.

rdiff-backup + a cron job. Or Duplicity. Or Tarsnap. Or CrashPlan. Or...

That is to say backups with multiple stored versions, to another system where the (infected) client does not have direct write access. Ransomware can infect my home directory if it wants to. A fire can burn down my house. Zaphod Beeblebrox can cause my hard drive to experience a spontaneous existence failure. But I've got off-site automatic backups, so I'll never pay the ransom. (I will pay more over time for the off-site storage, but given that I'd pay for that anyway to guard against natural disasters / disk failure / etc it's not really an added cost).
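A minimal sketch of the rdiff-backup + cron variant, arranged as a pull so the client never has write access to the archive (hostnames and paths are made up):

    # crontab on the backup box: pull the laptop's home directory nightly
    0 2 * * *  rdiff-backup --exclude '**/.cache' me@laptop.local::/Users/me /srv/backups/laptop
    # weekly: drop increments older than a year
    30 3 * * 0 rdiff-backup --remove-older-than 1Y /srv/backups/laptop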


>But I've got off-site automatic backups, so I'll never pay the ransom.

That's irrelevant though if they can also get all your credentials, stuff in the Keychain et al. -- as they apparently did with the HandBrake malware.


Easy way I've found that works for me:

- Backup files are encrypted with gpg.

- Pull from local backup server with a backup account that only has read-only access to the directories you need to backup.

- Push to a remote backup server with versioning (I'm using rclone with S3; if you need to back up large amounts this could potentially get too expensive).

You can restrict the s3 credentials so that the user pushing from your server isn't able to permanently delete any files.

There are plenty of other options out there; the key takeaways are a staging server for offsite backups and the principle of least privilege.
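A rough sketch of the push step under those constraints (remote name, bucket, and filenames are placeholders; the bucket's versioning and restricted-delete credentials are set up separately on the S3 side):

    # encrypt locally (prompts for a symmetric passphrase), then push via rclone
    tar -czf - /srv/backups/laptop | gpg --symmetric --cipher-algo AES256 -o /tmp/laptop-backup.tar.gz.gpg
    rclone copy /tmp/laptop-backup.tar.gz.gpg s3remote:my-backup-bucket/laptop/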


That’s why offline backups are useful as a complement. For example, Arq can do backups to an external hard drive.

Tarsnap can be configured so that a special key is needed to delete backups.
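For reference, the key splitting looks roughly like this (paths are examples; see the tarsnap-keymgmt man page):

    # derive a key that can create and read archives but not delete them;
    # keep the full-permission key offline
    tarsnap-keymgmt --outkeyfile /root/tarsnap-nodelete.key -r -w /root/tarsnap.key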


Backups don't stop someone from copying your data and credentials.


Is SELinux that hard? I have been running with Enforcing on my laptop for the last 8 months, and usually I can make an SELinux error go away by following the directions in the SELinux alert popup (or by searching for SELinux alerts from the CLI).

I used to be in the boat where my first instinct was to disable SELinux, but I must say it wasn't that hard.


SELinux in general isn't that hard imo, but it also wouldn't stop this attack in the default configuration.

You may consider writing a custom SELinux policy such that only the git executable can access the .git directory. This would be a much more useful mitigation against this attack, but it would also raise the difficulty barrier significantly.


I think a big problem is that good documentation about SELinux is hard to find. When I was looking for how to allow nginx to work as a reverse proxy, most of the 'solutions' were just 'Turn off SELinux'. It took me a while to find the permission I needed to give it (it was a one-liner in the terminal in the end).

This was on a server so no popup - you also need to know where to look (/var/log/audit/audit.log) to actually work out what is causing the 'Bad Gateway' error in nginx.
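(Guessing, since the commenter doesn't name it, but the usual one-liner for the reverse-proxy case, plus the quickest way to see what was denied:)

    sudo setsebool -P httpd_can_network_connect on   # let httpd/nginx make outbound connections
    sudo ausearch -m avc -ts recent                  # show recent SELinux denials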


This was my problem, too.

At work, I have written an SELinux module for our Java application servers. It properly reduces the permissions from the system defaults for the Tomcat startup procedure, and then drops further permissions once the startup procedure actually executes a java binary. This two-step process is mostly necessary because the Tomcat startup executes a bunch of shell scripts to set up variables, but I don't want to give the application server itself any execute rights.

Conceptually, it's not hard to build such a module with some careful consideration about the files and permissions the process needs at different stages - I was surprised by this. But getting this module to work properly was a real hassle, because there's very little practical documentation on this.
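For anyone attempting something similar, the usual bootstrap before all the hand-tuning is roughly:

    # collect recent denials and generate a starter policy module,
    # then refine the generated .te file by hand before trusting it
    sudo ausearch -m avc -ts recent | audit2allow -M mytomcat
    sudo semodule -i mytomcat.pp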

Quite sad, actually. I want to be as smug as Redhat about SELinux stopping pentesters cold.


It probably isn't with sufficient training. Taking a crack at it as a weekend project, it was non-trivial getting a decent i3 desktop up and running without a lot of cruft. Much of what you expect to just work, just doesn't. To be fair, I'm more at home in Debian or, to some degree, Arch, so that probably had a lot to do with my difficulties. Regardless, it made me give up and just practice better security hygiene for my Debian installs until I have time to dedicate to a further investigation.


How do you mean knowing Debian/Arch would hamper your effectiveness with SELinux? Can't you just apply the SELinux hardened kernel in Arch/Debian or whatever else?


Fair point. When we talk about SELinux in the office, it's always in the Red Hat/Fedora ecosystem. So I conflated the two.


> Edit: Honestly, I'm waiting for a Command and Control to be exclusively in Tor, email keys only through a Tor gateway, and also serve as a slave node to control and use

Correct me if I'm wrong, but most ransomware is operated almost completely through Tor. Doing email this way may be a problem (for obvious reasons), but for anonymity and uptime's sake most rely on it pretty heavily.


Oh yeah, a Bitcoin/Tor gateway is how they're doing it. But I'm not seeing any sort of botnet functionality, at least in WannaCry.

Or worse yet, I can see a daemon sitting around, snarfing juicy details and exfiltrating them. Along with that, it could contribute to a booter network. And as a near-last resort, it encrypts everything to extract more out of the user. It can then monetize even this by becoming an infector and staying on the network (not reformatting).

Another thing that goes along with this infector idea is using OnionBalance and a load-balanced onion site to promote and speed up various "things". Since we're dealing with the illegal anyway, well, there are plenty of things that could be leveraged for hosting.

Yes, I do a lot of things in Tor onionland. All of my network exists in there, as does control of much of my services, MQTT, database, and more. This is how I use it: https://hackaday.io/project/12985-multisite-homeofficehacker...


Yeah, it's definitely interesting. I wonder if ransomware developers just don't overlap much with botnet developers. It has to be pretty hard to find customers to really make money running a botnet unless you're already deep in that industry.

It's cool for a variety of technical reasons, but if you just want to run a booter, you're better off using reflection attacks today than a botnet. Things like proxying web traffic to random home machines, performing layer 7 attacks on webapps, etc are pretty nice from a technical perspective and I think a lot of tech people can appreciate them in that aspect.

But that's pretty much where it ends. They don't make easy money like ransomware does. Ransomware produces customers, doesn't require hard business side work to acquire them, doesn't have competition, etc. From a business perspective, ransomware is just better.

EDIT: Your Tor automation solution seems pretty cool - do you use a VPN to authenticate things or are you relying on the privacy of your .onion names?


> EDIT: Your Tor automation solution seems pretty cool - do you use a VPN to authenticate things or are you relying on the privacy of your .onion names?

Thank you. Nope, no VPN. I run 2 types of onion sites. One side is for services like Mosquitto, the DB, and Node-RED. The other side is an "onion with password", i.e. HiddenServiceAuthorizeClient in the torrc file. I use that for the SSH backend. That means you need to know the onion address, the client auth key, a username, a password, and the root password in order to escalate and gain control of the machine.
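For reference, the authorized-client part of the torrc looks roughly like this (names and paths are made up; this is the v2 hidden service syntax that was current at the time):

    # /etc/tor/torrc on the box exposing SSH
    HiddenServiceDir /var/lib/tor/ssh_onion/
    HiddenServicePort 22 127.0.0.1:22
    HiddenServiceAuthorizeClient stealth mylaptop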

I'm also experimenting with things like GUN for types of databases that can live between them. Once I have a stable distributed database between my nodes, I can start building webapps where the endpoints start and end in Tor.


Do you have any auth on the MQTT? If I recall correctly onion names are basically public.


Sure do. Login/password with a self-signed cert. I'd have preferred to go with a proper cert attached to hash.onion, but evidently only Facebook can afford such luxuries...

As a side note, I thought about using OnionBalance, a DB, and Boulder, making my own OnionCA, and talking with the EFF about funding assistance. Frankly, having no CA just stinks, and I want to do something about it. I do know that the onion address is derived from a truncated hash of the hidden service's public key... but there has to be a better way than this.


There's really no need for a certificate on an onion name - onion names are already the hash of your public key. Tor validates everything for that already and if someone else can compromise your onion name, they could just produce a certificate for it anyways.

Unlike with regular http vs SSL, Tor provides confidentiality, integrity and host authentication integrated simply by connecting to the right name.


Backups. Just f###### do 'em.


Backups don't help you avoid credential compromise.


They do if you set them up right.


How?

I've got three separate encrypted copies of my homedir spread across two different locations and a fourth snapshot taken once a week on a drive that's physically powered down when not in the middle of a backup - and I've regularly tested restoring from each of them.

How does any of that help when malware grabs my .git .aws .gnupg .keypassx, etc directories from my running system - and unknown 3rd parties start downloading (or worse, backdooring) my "private" stuff from github?


Explain how please. All a backup can do is restore your system to a previously known state. It can't unring the bell when it comes to your data being possessed by a bad actor...


Yup... the long-standing UNIX user privilege separation security model is obsolete. We need inter-app privilege separation, as is being experimented with on mobile phones.


Experimented? No, they inherited it from UNIX, that's how you do it there. UIDs are not necessarily human users, they are likely to be applications. Look at Postfix, OpenSSH, vsftpd, or any other software that bothers to limit its capabilities. They all have allocated UIDs in the system.


True story. The status quo unfortunately conditions people into "just answer yes to the stupid questions" which then renders everything from developer certs to elevation warnings moot. "But you got a warning" is little recourse for when your hard drive is encrypted by some ransomware. (I know I'm mixing current events here – cut a fella some slack!)


Yes. I'm pretty sure WannaCry and its variants could have asked users with a system dialog whether it should proceed encrypting all their data and we still would have seen similar numbers of affected machines (if operated by a human).


macOS has optional per app sandboxing already, but many developers elect to sidestep it because it's a buggy, rigid mess. Last I worked with it there were issues with simple things like displaying an open/save panel inside of a sandbox — sometimes when the user requested one, an error would occur under the hood and give zero feedback to the user. It's also a pain in the rear for apps that need to be able to arbitrarily access files on disk to function.


Yes. I have a Mac app that is sandboxed, and occasionally it fails to open files the user selected in the Open dialog. There is zero diagnostics available, besides a message stating "permission denied" (the system error message confuses users even more by suggesting to check file system permissions).

All I can tell my customers is to restart their Macs.

Extremely frustrating.


There is no reason Mac/Linux apps should need root to install unless they are installing something like a kernel module.

I have ~/Applications and ~/Library which is where anything I install should go.


I refuse to install Adobe anything on my main partition for this reason. Remember Flash?


That doesn't really work when you're a creative professional who has to use Adobe products for their day job.


A lot of those products are now cloud-based.


None of the Adobe core apps is cloud based. They are ordinary programs that you download and run natively on your machine -- they just have a subscription model that checks your license over the internet, and for that they called the suite "Creative Cloud".


The Data can be kept in the cloud and the Adobe stock libraries are cloud-based, but the products (Photoshop, etc) are still installed locally and many users will keep data local as well.

Edit - added detail.


afaik the cloud part just means that it's a monthly subscription that requires internet access to remain activated


This is correct; I had my Win10 tablet start bitching that it hadn't connected to Adobe's authorization servers in too long (despite the fact that it regularly has internet access, wtf Adobe) just the other day. There are multiple gigs of Adobe executables and support files on my computer.


>despite the fact that it regularly has internet access, wtf Adobe

It's Adobe's updater app that needs to run and perform that check; it's not the individual apps themselves.


Pretty sure that'd been running all the time. Not 100% sure, this is the only Windows machine I've ever owned and getting shit to reliably run on bootup is a mystery to me.


It does matter what you want to protect against. Often, starting an app as root can actually make it more secure, by allowing it to start worker processes as a non-privileged user and do some sandboxing with namespaces.


I'm a bit surprised at the "personalized attention" from the attacker: that a human on the other end takes time to poke around individual machines, recognize the developer, and tailor a source code theft + ransom campaign to them. I had assumed that these are bulk compromises of at least thousands of machines and they just blast out scripts to turn them into spam proxies or whatever.

Maybe given the limited scale of this one and the obvious interest the attacker has in producing trojaned versions of popular software, this is actually what they were hoping for in the first place.


Even the Mac's malware authors are more hands-on, engaged, and approachable :)


It might be as simple as an automated "look for ssh keys" in the malware. If you find an SSH key, pretty good odds it's a developer. Scan for git repos, or check their email address to see where they work and go from there.


This makes me wonder: is it easy enough to write a kernel extension such that whenever any process tries to open(2) my ssh private key, or any hardlinks or symlinks pointing to it, it checks against a known whitelist, and if the process is not in the whitelist, a dialog pops up and asks me for my permission? Is this easy to implement?

Frankly I can only think of a small number of processes that need to automatically access the file: backupd, sshd, and Carbon Copy Cloner. Everything else should require my attention.


Alternatively, sidestep open(2) by implementing a SSH agent so that you can do creative things like https://krypt.co does, so the key is not laying right on your main filesystem in the first place (and possibly not even ever on the workstation).


Good idea! Talk to this guy:

https://objective-see.com/products.html


It's called Little Flocker.

Essential OSX software.

Looks like F-Secure just bought it in the last month or two. :(


In this case the attacker actually guessed the names of the git repos based on knowledge that the owner of the attacked computer was a Panic employee; the attacker guessed that the repo names matched Panic's product names. So it was a very manual process indeed.


Yeah sure, but possibly (probably?) after an automated process sifted through all the victims looking for what's described above. Once you've got a shortlist of potentially more valuable targets, you can invest manual effort in those.


Standard practice in lead-generation these days :) You want to devote your efforts to the most potentially-rewarding opportunity.


I find this story pretty fascinating. First, it's interesting how a broad attack, such as putting malware into software used by a large number of people, suddenly becomes a targeted attack: the attackers grab SSH keys and start cloning git repositories. I'm assuming that there was a significant number of victims in this attack. Were they targeting developers? Or did they just happen to comb through all this data and find what looked to be source code / git repositories.

The other thing I find interesting is this comment:

> We’re working on the assumption that there’s no point in paying — the attacker has no reason to keep their end of the bargain.

If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be held.

It might seem that there's some incentive for ransom holders to hold up their end of the bargain for the majority of cases if they want their attacks to be profitable.


> If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be held.

You're describing a legal system and the rule of law. I'm not sure there's a way to guarantee anything like you describe when there is some illegality in the nature of the process.

Trade only works when you can trust either the parties involved or the system as a whole to uphold their promises (for the system, that means parties that don't uphold their ends will be punished).


> You're describing a legal system and the rule of law. I'm not sure there's a way to guarantee anything like you describe when there is some illegality in the nature of the process.

Legal systems aren't the only way to give confidence that both ends of a bargain will be held. As one example, some darknet markets have escrow systems for this purpose. It's not too hard to imagine a way to do this with ransomed code. Reputation-based systems also provide incentives for sellers to deliver on their promises.


> As one example, some darknet markets have escrow systems for this purpose.

Those only function because the darknet functions as the system, and the punishment for not following through is that the party loses access to, or prestige in, that market. What entity exists that is trusted and has leverage with both the people that are ransoming (criminals) and average citizens (ostensibly law abiding)? Should I trust a darknet broker not to screw me? No. They have no incentive not to, as long as their actual client, the ransomer, doesn't care. For the same reason, the ransomer should not trust any legal entity, because it can just not deliver the money and give it back to the victim instead (since the victim is its client).

There may exist a way for this to work, but I certainly can't think of one, and what you described doesn't work either. Trust is the integral factor as I see it, and while you can have trust within a criminal community, and within a law-abiding community, I'm not sure how you get that trust to cross that boundary.


A simple solution is the one you describe. A reputation system for ransomers. Time earned reputation for upholding promises.


And how do you ensure you are dealing with the same person from one transaction to the next? Any authority that can confirm an anonymous criminal is who they say they are needs to be illegal to keep law enforcement from finding out the identities, and if not they are still participating in a crime.

Again, how do you trust a criminal person or organization? By their nature, they don't follow the same rules.


Wouldn’t a cryptographic sig suffice for this?

You don’t need an authority vouching for you to become a ‘trusted’ criminal. You just need proof of identity, and a reputation established over time. Drug dealers do this all the time, even though they’re criminals. Hell, it’s even how legitimate businesses work - the FBI isn’t going to shut down Bic for selling shoddy pens, so they build a reputation on “we’re Bic and we did right by you last time”.

An example: a malware group sends every target an RSA-signed demand (with public key disclosed on Pastebin or something). The few people who pay up find that they follow through, so they grow a reputation as sincere. They could even kick things off with a round of freebies - “Here’s your data, here’s our sig, we deleted/unlocked/whatever it for free this time to prove ourselves.” I suppose they’d have to publish demands and outcomes since most targets won’t disclose on their own.

There’s likely a flaw in my specifics (probably around disclosing attacks and proving followthrough), but I only put five minutes into it. As long as you can prove identity, you ought to be able to build ‘trust’.


> Drug dealers do this all the time, even though they’re criminals.

Drug dealers and those buying from them are both committing illegal acts. That changes the dynamic. Neither party can rely on the legal system to enforce misconduct. That allows an entirely criminal system to work. For example, if you don't pay the drug dealer, they'll just hurt you. If the drug dealer doesn't give you the drugs, or gives you crappy/cut drugs, you just won't use them next time. It's important to note that this transactional relationship does not begin with one party accosting the other, as in the ransomware case.

The ransomware scenario is the equivalent of being mugged in an alleyway, but only of your smartphone, and the mugger offering to give your phone back if you go to an ATM and come back with $100. The whole interaction began with a crime perpetrated by one party on the other.

> As long as you can prove identity, you ought to be able to build ‘trust’.

One problem is that the identity, because it is anonymous, is worth fundamentally less for this purpose than any real identity. The ransomer could decide law enforcement is getting too close and stop responding to all payments, or abandon the system and let someone else take it over. For any identity used just for this scam, the loss of reputation is irrelevant, and if they are using the same identity for multiple scams they are inviting more law enforcement response. There are no future consequences worth mentioning for screwing people over, since the identity can be changed at any time.

The only thing that really protects you in any of these situations is the incentives of the criminals, but those incentives, be they economic or liberty based, are subject to very different constraints than those of a legally operating entity. The bottom line is that the person or people involved have started the whole relationship by showing they are willing to screw you over. Establishing trust is not impossible (some people will trust), but it's very hard to do, a large percentage of people will never actually trust you, and they likely shouldn't, because you don't have the same incentives or punishments they do.


> Any authority that can confirm an anonymous criminal is who they say they are needs to be illegal to keep law enforcement from finding out the identities, and if not they are still participating in a crime.

It's not a requirement that the authority be legal. Note that a person's name isn't required to establish authority, pseudonymous reputation provides assurance as well. Darknet markets have reputation systems, and have already figured this out.

> And how do you ensure you are dealing with the same person from one transaction to the next?

The same way we do it with pseudonymous systems now: by having an authoritative identity somewhere that can verify their actions. @shittywatercolour could make a new account on HN, do an AMA, and post on his Twitter that he's doing an AMA with <name> for proof. Banksy can claim work by posting it on his website. In the same way, a reputable seller on any marketplace (such as a darknet marketplace) could do the same thing.


> Darknet markets have reputation systems, and have already figured this out.

But again, why should I trust a darknet? What makes a group of criminals trustworthy when a single one isn't?

You haven't really addressed the fundamental problem of trust, just kicked it down the road to a new point. Any legitimate entity being used in an effort to authenticate a criminal will likely be seeing subpoenas for access information. If they are resistant to those subpoenas, then they are helping the criminals and are acting illegally. Both states have severe negatives for one of the parties.


What makes anyone reliable? A good reputation.

Only a small fraction of trust among non-criminals is backed by force of law. The rest is backed by past record. If you don't have one, you put up collateral, get someone else to stake you (e.g. loan co-signers), or start small until people get to know you.

The only real question here is how you verify who you're dealing with. That's doable, and once it's done everything else is a pretty established process.


> What makes anyone reliable? A good reputation.

It's not just about how reliable they are, it's about what incentives they have to follow through, and what recourse you have when they do not. Entities acting illegally have very different incentives than legal ones, and your recourse if they do not follow through is very limited, especially if you are acting legally.

> Only a small fraction of trust among non-criminals is backed by force of law. The rest is backed by past record.

Past record accounts for some of it; the ability to exact your own punishments accounts for some of it. Any drug dealer that screws over a client needs to account for that person taking the matter into their own hands.

> The only real question here is how you verify who you're dealing with.

That's not the only question. I believe I've outlined many more in my other responses in these threads (one of which was to you).


> Those only function because the darknet functions as the system

This isn't true, think Yelp. Why couldn't Yelp exist for ransomers?


Yelp is a very interesting example. It's hard to make the analogy work because there's an asymmetry to the transaction between restaurant owners and restaurant customers (you don't have to be a customer to leave a review).

Even so, Yelp is renowned for extorting restaurant owners for money (whether or not it's legally and officially extortion)[1]. That's in a market where all participants are supposedly acting legally. Why am I to believe that illegal, anonymous entities won't be willing to burn their reputation (which may only exist for this scam) when they decide to stop?

1: https://www.google.com/search?q=yelp+extortion


Escrow works well with physical goods. How do you return source code that can be copied endlessly. How many copies do you return? How do you prove that one of them is the "original" copy?

Returning digital goods (or more general "knowledge") works either based on trust or through enforcement. The latter is the rule of law.


> Escrow works well with physical goods. How do you return source code that can be copied endlessly. How many copies do you return? How do you prove that one of them is the "original" copy?

Just brainstorming, but:

1. Trusted third party creates a service that (a) provides a one-time-use encryption key (b) provides an endpoint to upload an encrypted blob of information along with an email (or a passcode) and a date after which the decrypted content will be made available to that email (or via that passcode), (c) provides a UI that allows a user to pay $x (redeemable via email/passcode) to wipe the encrypted content from their server, if paid before the ransom date.

2. Malware author compromises system, encrypts content using (a), uploads encrypted content with their email/passcode to (b), sends user a link to (c).

3. Malware author provides some evidence that they haven't also uploaded non-encrypted content elsewhere to give confidence that once the user pays, the content will not exist elsewhere. Some ideas: system/network logs, malware analysis that shows that it only uploads to trusted third-party, providing proof in decompiled source that malware only uploads to trusted third-party, and/or a reputation/review system. Note that this doesn't need to be airtight proof, it just needs to give the victim enough confidence that they think it's worth the risk to hand over some money.

Would this work well, in practice? Who knows. But I think it's a proof-of-concept that shows that there are potentially other ways to escrow ransomed content.


> Malware author provides some evidence that they haven't also uploaded non-encrypted content elsewhere

Any amount of information that could show this would invariably give away the identity of the hacker. Even then, since the information comes from them, it can't be trusted.

> But I think it's a proof-of-concept that shows that there are potentially other ways to escrow ransomed content.

There's a difference between keeping the owner from their own materials and threatening to spread those materials to others. In the first, you at least know whether you got the files back (for the most part; it might be hard to notice small changes/omissions). In the second, not only do you not necessarily know it's been shared, the blackmailer retains the ability to spread it in perpetuity (whether it still retains value or not).


Even with physical goods, what type of agent would hold the trust of both the criminal and law-abiding elements of the deal? A criminal agent cannot be trusted by a law abiding party, and a law-abiding agent cannot be trusted by a criminal party (they can just give everything back to the rightful owner).


I think this sort of thing could be done using Ethereum, allowing exchange in a mechanical way with code that the parties can verify on their own. A programmed agent is quite impartial. Not sure how hard it would be.

Of course, you can never verify that they will not release the code or keep using it maliciously.


I think Ethereum just hides the problem slightly. If it's information, as you say, there's nothing preventing future use of it. If it's physical, there needs to be some holder of the item, and we're back to: how can both sides trust the escrow agent?


Indeed. Hard to avoid an element of trust.


How about an Ethereum smart contract that gives back your money unless the owner releases the key used to encrypt your files (which may be possible to verify in the contract)?


That would possibly work in the case of locked files, but not in the case in the submission, where it was about the public release of files. There's no way to ensure the blackmailer didn't keep a copy, and won't threaten again or release anyway.

Also, I'm not familiar enough with ethereum to know whether there are downsides to using it, such as it leaving a trail until laundered (like bitcoin).


This is historically where the Mafia came from, as a means to keep members of a price fixing cartel mutually honest. The old saying about "no honour amongst thieves" being solved by outsourcing to a body to provide a parallel system of contract enforcement.

Harder to achieve online but not impossible, though plenty of criminals make enough without essentially having to place themselves at risk of physical attack from organised crime.


> If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be held.

Could a smart contract system work here? In this example, the smart contract would assure you the hash of the repo sent to you corresponds to the one you already had locally. You'd add automatic payment when conditions are fulfilled...

Is that feasible?


The problem is that you have no way of knowing how many copies of the data the hacker has. It's very easy to confirm that the hacker has your data, but confirming the opposite - that the attacker no longer has your data - is pretty much impossible. If there's even a way to do it, it would surely involve requiring the hacker to hold only encrypted data which can be decrypted only if certain conditions are met. If you're going to go to that length, then why not just encrypt it by a conventional means and not risk your data at all?

Unless someone fancies setting up a trusted hacker escrow that acts as an intermediary between compromised servers and hackers? That sounds incredibly complicated, highly illegal and unlikely to be trusted by either hacker or hacked, though.


Simplest solution: payment is put into escrow; the ransom is released to the ransom holder after 365 days provided the source code is not leaked, or released back to the victim if the source code is leaked before then. If the ransom holder released the source after the fact, it would be a year out of date.


> It might seem that there's some incentive for ransom holders to hold up their end of the bargain for the majority of cases if they want their attacks to be profitable.

There's also the fact that they don't care about who you are or what you do, their only consideration is financial.


I suspect the code is worthless in anyone else's hands.


How does one realistically protect against these new attack vectors? It's all become so quick - the malware infects your machine, and seconds later your repos are cloned.

Most computers are always connected to the internet when they're on, even if they don't necessarily need to be. Airgapping isn't really used outside of very sensitive networks, but I'm starting to think we need to head towards a model of connecting machines only when really needed.

Of course the cloud based world doesn't allow for that, and perhaps I'm a luddite, but I increasingly find myself disabling the network connection when I'm working on my PC. Kind of like the dial-up days.


Have a fun laptop, a work laptop, and maybe banking tablet?

As a good corporate drone, this arrangement is kind of forced on me, but a lot of small company / startup folks totally mix the two. Might be a good thing to not do.

Sure it doesn't protect you from e.g. a tool you need for work being compromised, but it reduces the attack surface - this guy probably wouldn't have installed handbrake on his work machine.

Another thing we do, specifically because medical data is involved: a lot of the time I'm forced to work inside a non-internet-connected network that I VPN and then remote desktop into. Firewall rules mean the only thing getting in from my laptop is VNC. Some systems also require plugging into a specific physical network. Overkill for most uses, but it makes losing laptops far less scary if you can keep a lot of your stuff on a more secure remote system.


> Have a fun laptop, a work laptop, and a banking tablet?

Try out Qubes: http://qubes-os.org


This is a really good thing, and thank you for showing it to me.

Something like this could be good if you wanted to rapidly switch between different compartments on a single device. It would be great for e.g. keeping a 'sensitive data' compartment separate from an 'emails and paperwork' compartment on a work laptop.

Doing something like this is certainly better than using a single device with no separation, or with just user accounts.

Psychologically, I still think that training people to use different devices for different things is more likely to stick than account separation on steroids. This extends to physical security - not leaving a work laptop in your backpack in a nightclub cloakroom like you might a personal device. But in the end, with that reasoning, at a small company where you can avoid hiring idiots, it's up to each person to decide what psychological tricks they need to get themselves to do things.

I wouldn't trust something like this to keep high-security information separate. When some exploit escapes Xen or (for a corp) reaches otherwise securely configured Windows systems, there is nothing like isolated networks to keep your blood pressure low. For most software-as-a-service dev type people you already have this - your data lives in a data center on carefully configured production servers. But for data science type users, you see a lot of people (especially in academia) doing work with potentially scary datasets on local laptops they probably also watch pirated TV on at home, which is a bit concerning. I guess at least if they were using Qubes it would be a bit better though.


Training users has been tried for over two decades and has largely failed to hinder black hats in any significant way.


Failed on the users who took well to the training, or to those who ignored it/failed it?

Because we can always not care about those others in the context of what we should do.


Failed to improve computer security overall. Users (generally speaking, not HN readers) don't have the skills/inclination/time to be proficient at managing their systems. Efforts to educate them in malware avoidance, system upkeep etc etc are failures by and large.

Technology can only do so much to "protect" users from themselves, and from miscreants. Couple this with an indifference to privacy on most of the connected population, and you've got a recipe for a world where nothing is safe.

http://panelsyndicate.com/comics/tpeye


> Have a fun laptop, a work laptop, and maybe banking tablet?

I would both prefer and hate this setup. I use my personal laptop for work, and having all my apps, data, settings, etc. available in one place is amazing. I could get past using different computers, but the sad reality is my provided work computer is underpowered compared to my 3.5 year old MacBook. I can run circles around my coworkers' machines on the simple fact that I have an SSD. IDEA opens in seconds for me while they go get a cup of coffee. Our desktops haven't been updated in probably 4+ years, and I strongly believe they'd be more productive on macOS than on whatever flavor of Linux they are using (most use Linux because they'd rather die than use Windows, and they can eke a little more performance out of it). A number of them have older MacBooks they use for meetings, but they aren't powerful enough to actually develop on.


"Work" usually requires more software to be installed than "fun". This "Handbrake" app may be used for creating videos for web, for example.


*How does one realistically protect against these new attack vectors?*

Not installing unsigned software is a good start. Does that dialog need a secondary 'Are you really, really sure?'? Absolutely. But the basic defence in this specific case was in place.


Yeah, on Linux especially I've never downloaded and installed something manually from the internet. I get all of my packages directly from:

  pacman -S foo
Or sometimes maybe:

  yaourt -S foo
tl;dr Use your operating-system's package manager.


The AUR (Since you mention yaourt) or PPAs are the linux equivalent of downloading random crap off developer homepages though. They have benefits in terms of updates etc. but they're no more secure (And you may want to look at the next PKGBUILD you install and see how many of them are literally just grabbing stuff off third party servers anyway)

See for example:

* Kivy: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=pytho...

* Chrome: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=googl...

* Vivaldi: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=vival...

* Plymouth (over HTTP too!): https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=plymo...

* Oracle JDK (also plaintext HTTP!): https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=jdk
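Looking, at least, is cheap (a sketch; the package name is just an example):

    git clone https://aur.archlinux.org/google-chrome.git
    less google-chrome/PKGBUILD                   # check the source= URLs and the checksums
    cd google-chrome && makepkg --verifysource    # download and hash-check the sources only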


That's how pretty much all packages in any distribution are built.

Note the included hashes — if the file on the server gets replaced, the building process will complain. (Sure the package maintainer will probably just replace the hash :D But if the file changed but the version number didn't change, or there was no release announcement, that's suspicious…)


But arch pulls these files and rebuilds every time, right? Compared to most other distros where only the (more) savvy maintainer does these steps. Don't arch end users just assume there's a new version out and the package hasn't been updated, ignore the error and install anyway? Or are they better trained to take notice of this suspicious stuff?


> ignore the error and install anyway

I don't remember how Arch does it, but in FreeBSD Ports you need to actively replace the hash in the text file, there's no easy ignore option. (FreeBSD also mirrors the files on the project's servers, which is pretty cool)


MacOS's package manager is the App Store. Which Handbrake isn't on.

It isn't on the Windows store either as far as I can tell.

Why, I don't know - maybe nobody involved wants to pay the fees to become an Authorized Developer, maybe there's a Free Software religious argument going on, maybe Apple doesn't want a program whose original function was "ripping DVDs" to be on there because of the many deals they have with the entertainment industry.

tl;dr: the program in question ain't in the operating system's package manager.


The App Store has a lot of issues as is, and I actively try to avoid it. I'm sure it's more secure, but security isn't everything. The issue doesn't really have to do with the App Store; it's about being careful with what you are downloading, no matter where it's from.


Maybe use operating systems that have proper package managers then.

cough Arch cough


The AUR (Arch User Repository) is a great way to provide user- and vendor-provided installers IMO: you can check the build script, comment on it, flag outdated packages, and define and change maintainers, all with a web interface as well.


I've been using Linux for some time and have installed tons of software outside my package manager (that's unavoidable because not every package archive has every piece of software).

In the end it's all about trust. If you trust some web domain, you can also trust their software. If that software is compromised, you're out of luck. No package manager or walled Apple garden can help you with that.


But there is more to trust than just domains (web servers): signatures. If only people used these.


[flagged]


How do you know someone downvoted it? It looks the same as others to me (not grayed out).


We had a system that was used to generate television graphics. Our installer for new software was capable of bringing up a system with a new hard drive, so one of the options it had was to format the hard drive. The installer asked three times if you were sure, with increasingly severe warnings about losing all your data. Sure enough, a customer with an existing hard drive ran through all three warnings, formatted their hard drive, and then called customer service to complain about losing all their data.

The solution, of course, was to add a fourth question...


A fourth question obviously isn't going to help. Make them enter "erase my drive" into a text box; that might get them to pause for a moment.
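The idea, sketched in shell terms (the real installer was presumably graphical):

    read -r -p 'Type "erase my drive" to continue: ' answer
    [ "$answer" = "erase my drive" ] || { echo "Aborting."; exit 1; }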


Well, the probability approaches zero asymptotically as the number of questions approaches infinity...


>How does one realistically protect against these new attack vectors? It's all become so quick - the malware infects your machine, and seconds later your repos are cloned.

1) Don't install random crap off of the internet: only use the Mac App Store, with sandboxed apps and "System integrity protection" turned on.

2) If you absolutely need to have some non-MAS app, check the checksum (a quick sketch of how follows this list), download the DMG, but let it rest, and only install it a month or so later, if no news of a breach, malware etc. has been announced.

3) Don't give a third party program root privileges -- don't give your credentials when a random program you've downloaded asks for them.

4) Have any sensitive data (e.g. work stuff, etc.) on an encrypted .DMG volume or similar, that you only decrypt when you need to check something. Even if your Mac is infected, they'll either get just an encrypted image of that data, or won't be able to read it at all.

5) Install an application firewall, like Little Snitch.

6) Keep backups.
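A quick sketch of the checking in 2), with example filenames:

    shasum -a 256 ~/Downloads/HandBrake-1.0.7.dmg          # compare against the published hash
    spctl --assess --verbose /Applications/HandBrake.app   # Gatekeeper's verdict on the app
    codesign -dv --verbose=2 /Applications/HandBrake.app   # who signed it, if anyone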


Use a package manager like apt to download and install your software. I think there are also package managers for macOS and Windows.


I definitely agree with this advice in general, but as it so happens, users who installed HandBrake via homebrew (a package manager for macOS) were affected by this too because the hash for the latest version of HandBrake was changed to the infected version[1]. Still, package managers definitely make it harder for the attacker in most cases.

[1]: https://github.com/caskroom/homebrew-cask/pull/33354


Wow, that's a strangely aggressive reply from one of the contributors on that thread. And then he said:

> 99% of the time these hash changes are innocent

That's actually not very good at all and proves they shouldn't just trust hash changes! Very odd


In cases where I need to download and install unsigned software that's not available via a package manager, I run hashes (MD5, SHA1, SHA256, etc.) on the downloaded file and then run Google searches on those hashes. As long as the software has been released for more than a day or two and it has a decent-sized user base, the hashes will show up in various places such as fossies.org and will be cached by Google. That would have protected against this particular attack.

EDIT: But in this case, the software in question is signed, so the (fallback) technique described above is not necessary. The download page [0] contains a GPG signature along with a link to the author's GPG public key. Checking the signature would have prevented the attack.

[0] https://handbrake.fr/rotation.php?file=HandBrake-1.0.7.dmg
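For anyone unfamiliar, the check itself is roughly this (filenames are placeholders for whatever the site actually serves):

    gpg --import handbrake-team-public.key
    gpg --verify HandBrake-1.0.7.dmg.sig HandBrake-1.0.7.dmg
    shasum -a 256 HandBrake-1.0.7.dmg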


If you really want to be pedantic about it, use an egress firewall policy, whether on your machine or your router. For macOS, Little Snitch or Radio Silence. For Linux/BSD, set up your firewall of choice to do some filtering.

Yes, it will take a lot of effort to set up, and some effort to maintain, but it helps.
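On the Linux side, a minimal default-deny egress sketch might look like this (iptables assumed; adjust to whatever you actually need):

    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT                    # DNS
    iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT  # web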


Would a passphrase on the SSH key help in this case? Attacker would have the SSH key but need the passphrase to be able to use it. That's how I have my SSH keys.
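(For anyone who hasn't done it, adding a passphrase to an existing key is just:)

    ssh-keygen -p -f ~/.ssh/id_rsa    # prompts for the old and new passphrase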


I believe this malware included a keylogger. Retrieving the correct passphrase would be another step for the attacker but wouldn't stop them if they're determined.


> I also likely bypassed the Gatekeeper warning without even thinking about it, because I run a handful of apps that are still not signed by their developers.

Apple really needs to fix this. In particular open source applications don't sign for whatever reason and it's clear that barring some change they aren't going to start now.


Fix what? Remove the option to bypass? Remove the warning? Lock it all down to just app store apps?


No; if enough of them were signed, then people wouldn't be in the habit of bypassing the warnings. There's no need to force it to be all locked down.


Users click-through any and every kind of dialog box without reading. It's one of the principles of UI design. Users don't read. Requiring the user to type in "Install dangerous program" would work.


> In particular open source applications don't sign for whatever reason

Most open source applications are signed, just not by Apple's Appstore. Instead, most OSS downloads provide a GPG signature. You should not execute downloaded code before checking signatures - either by the Appstore or package manager, or manually.


A big problem I find with signatures is that I'm not sure what extra security they provide in cases such as this. If the binary can be changed, how can I be sure that the attacker hasn't also been able to change the sha1 file, or to re-sign with the developer's private key?


Leaked keys must be revoked, whether it's an App Store developer key or a key used to sign packages on a website.

The main difference is that Apple provides a management tool for revoking a developer key, whereas OSS projects must run their own trusted server where they publish their public keys (and issue a revocation certificate in the GPG case).
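In the GPG case, issuing the revocation itself is only a few commands (KEYID and the keyserver below are placeholders):

    gpg --gen-revoke KEYID > revoke.asc    # ideally generated in advance and kept offline
    gpg --import revoke.asc
    gpg --keyserver hkps://keyserver.example.org --send-keys KEYID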

I realize that this often is not the /easy/ way. But IMO /no/ software is worth running without verifying its integrity. The good news: the more you verify your downloads, the easier it'll get :)


You can't be sure. There is no silver bullet anywhere in security.

The better approach is to turn that question around and ask what extra security can you gain by not checking the signatures? None? Then it's best to check. Worst case, it doesn't help you at all. Best case, it saves your ass. A win on average, I'd say.


What do you suggest?


Not having to pay $99/year to sign builds of your open-source application would help.


Which means it is cheap to sign your malware.


Mandatory two-factor-confirmation-by-email in order to push software updates on the app store would have been helpful in this case.


And have your cheap malware signature revoked by Apple.


And expensive for Apple to process and vet the now orders of magnitude larger numbers of developers asking for a (hypothetically) free signature.

There's no such thing as a free lunch.


After it has run its course...


Yeah, I think if it were like many SaaS products, where open source projects can trivially qualify for a free license, that might be a start.


Slightly OT: I'm a reasonably competent Mac user, I use them all day and depend on them to control my house as I'm disabled. In the event I were to be compromised, can anyone suggest a logging tool/tools that I might be able to use on my network such that I could work out what the problem was and correct anything that needs correcting please?

We are looking at four or five Macs of differing types but all running the latest OS, a number of iPhones, iPads, more Raspberry Pi's than I'm going to admit to and a number of other IoT devices.

TIA!

Also, I really wish more companies would be this forthcoming when they get pwned. I think it's really good when a large company comes out with this type of mea culpa, mea maxima culpa. If professionals can get totally pwned, I really do think it tends to make ordinary users think about their security a little more. Or maybe I'm just hopelessly optimistic!


Network syslog and a Raspberry Pi with an external drive should be more than enough.
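A sketch of the forwarding side, assuming rsyslog and a made-up address for the Pi:

    # on each machine (/etc/syslog.conf or /etc/rsyslog.d/50-forward.conf)
    # a single @ means UDP forwarding; use @@ for TCP
    *.*    @192.168.1.50:514
    # on the Pi, tell rsyslog to accept the incoming stream:
    module(load="imudp")
    input(type="imudp" port="514")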


I've been doing some googling and it looks like syslog is something that I run on every machine, and then it passes the results of its logging to the Raspberry Pi for collation and possible inspection later on. Have I got the basic gist of it?

Thanks for the answer, greatly appreciated. :-)


One way to protect against this is to not have SSH keys on your laptop. I've been using Kryptonite https://krypt.co/ lately, which is sort of like two-factor for SSH keys.


That doesn't seem especially more secure. You're just trading away trusting yourself to another company.

You can get a similar ssh 2fa setup with Google Authenticator's PAM module (https://github.com/google/google-authenticator-libpam), and maintain full control over your infrastructure.
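A rough sketch of that setup on a Linux server (these are the usual file locations; double-check for your distro):

    google-authenticator                      # run once per user to generate the TOTP secret
    # /etc/pam.d/sshd
    auth required pam_google_authenticator.so
    # /etc/ssh/sshd_config
    ChallengeResponseAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive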


Kryptonite is more like a GPG smartcard (or YubiKey) of sorts (with some security trade-offs for the arguably better UX). The key never leaves your mobile device. Backdoors and bugs are still a possibility of course (when aren't they?), and you probably wouldn't want to run this on an outdated mobile device.

Of course, all you're doing with any of this is preventing your key from leaking. A sufficiently motivated attacker could just backdoor your ssh/git binary and access things through that instead, but it's still a good defense-in-depth mechanism, IMO.


>Kryptonite is more like a GPG smartcard (or YubiKey)

Can it do full disk encryption? Works with GPG agent? PIV? PKCS11? OTP? U2F?

>arguably better UX

Requires battery. Not waterproof. Not crushproof.

>The key never leaves your mobile device.

Can't backup the key? I need to buy two iPhones to have a backup in case one is lost?


> Can it do full disk encryption? Works with GPG agent? PIV? PKCS11? OTP? U2F?

My understanding is that the only use-case is SSH right now, but that could change, I guess.

> Requires battery. Not waterproof. Not crushproof.

Those are all valid. OTOH, it doesn't require yet another device that can be forgotten or lost, it's far easier to set up (compared to YubiKeys or other smartcards), and it's free.

> Can't backup the key? I need to buy two iPhones to have a backup in case one is lost?

That's correct. They plan to add paper backups and syncing via QR, IIRC. A second (regular) key that's kept offline would do as well.

(FWIW, I use YubiKeys for GPG/SSH/OTP/U2F as well, but I'd definitely recommend this to anyone looking for a cheaper or more usable alternative.)


>it doesn't require yet another device that can be forgotten or lost

iPhones can be lost too.

>it's far easier to set up

Yubikey can do U2F and OTP out of the package with no setup required.

While it can be easier (I wrote a nice shell script for myself), I don't consider setting up a YubiKey for SSH hard for the type of person who uses SSH:

https://developers.yubico.com/PIV/Guides/

It's literally copy/pasting a few lines out of Yubico's docs into a shell.
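Condensed from that guide, it's roughly this (slot 9a and the PKCS#11 library path may differ on your system):

    yubico-piv-tool -s 9a -a generate -o public.pem
    yubico-piv-tool -a verify-pin -a selfsign-certificate -s 9a \
        -S "/CN=SSH key/" -i public.pem -o cert.pem
    yubico-piv-tool -a import-certificate -s 9a -i cert.pem
    # then point ssh at the PKCS#11 module:
    ssh -I /usr/local/lib/opensc-pkcs11.so user@host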

>it's free

So is the Yubikey software.

>They plan to add paper backups and syncing via QR, IIRC.

I would consider that a fatal flaw. Once the private key is on the device, it should not be easy to recover it from the device.


> iPhones can be lost too.

emphasis on yet another. One more thing to keep track of. Not everyone likes to have their (door) keys hanging off their notebooks all day.

> Yubikey can do U2F and OTP out of the package with no setup required.

Yes, but we're talking about git/ssh here.

> While it can be easier (I wrote a nice shell script for myself), I don't consider setting up a Yubikey for SSH hard for the type of person who uses SSH,

That's not my experience, but I suppose it might depend on who you know/work with, or in what area you work.

> So is the Yubikey software.

I was talking about the free beer kind of free, but it's not accurate to say YubiKey is all free (libre) either, it depends on the product. There was quite a controversy a while back[1]. Personally, I'm fine with closed-source security products. Ideological reasons aside, I don't think making decisions based on whether the code is open source makes sense.

> I would consider that a fatal flaw. Once the private key is on the device, it should not be easy to recover it from the device.

I wouldn't say it's a fatal flaw. You rely on your phone's security to manage access to signing operations anyway, so if an attacker has access to the app, you're pretty much screwed either way. Again, there are trade-offs, but it's a step-up from keeping keys on-device.

[1]: https://www.yubico.com/2016/05/secure-hardware-vs-open-sourc...


>Not everyone likes to have their (door) keys hanging off their notebooks all day.

As if a whole phone is better... That's not a requirement with a yubikey anyway. Mine hangs on a lanyard on a dust plug. Many people have yubikey nanos and basically leave them plugged in their laptop all the time.

>I was talking about the free beer kind of free, but it's not accurate to say YubiKey is all free (libre)

Kryptonite is not libre. It's not even free. It's all rights reserved.

https://github.com/kryptco/kryptonite-ios/blob/master/LICENS...

Yubico piv tool is 2 clause BSD, which is GPL compatible.

>Personally, I'm fine with closed-source security products.

So is RMS, assuming device software is fixed. See his comments on microwave ovens. Yubikey fits this description. Firmware is not updatable, for better security. Anyone who is trying to make a controversy out of what yubico is doing is either more extreme than RMS, dumb, or a competitor.


The parent comment literally said arguably, as in, can be argued and is based on personal choice.

And, although I'm not the commenter, it doesn't read anywhere near like it's trying to imply Kryptonite is a replacement for a YubiKey. They were replying to someone comparing it to the 2FA PAM module, and explaining it's more like a smartcard or YubiKey in that it stores the key, rather than adding a second verification factor outside the key.


I asked the questions because I'd like to know the answer. I'm not arguing passive aggressively.

I looked at the source repo, and at a glance, it looks like the app stores the key pair in the iOS keychain. My guess is that means the key can be removed from the device if the user chooses to do so, or if the user gives another application access to the keychain. Perhaps I'm wrong about that. I keep hearing "The key never leaves the device" repeated, and I'd mainly like to know how that guarantee is made.


The app uses kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly[1]. Quoting the relevant iOS documentation[2]:

> After the first unlock, the data remains accessible until the next restart. This is recommended for items that need to be accessed by background applications. Items with this attribute do not migrate to a new device. Thus, after restoring from a backup of a different device, these items will not be present.

[1]: https://github.com/kryptco/kryptonite-ios/blob/42c71d6381def...

[2]: https://developer.apple.com/reference/security/ksecattracces...


That doesn't fix the problem either. The attacker could instead wait for a legitimate request, piggyback off that, and you'd be none the wiser.


You can encrypt SSH keys with passwords too.


Yes, but passwords can be keylogged and SSH keys on a hardware token (like yubikey) can't. Also if you've got touch-to-use enabled on the token you can't even use the key without physically touching the token.


Any user-level process can actually obtain your decrypted private-key: https://blog.krypt.co/why-store-an-ssh-key-with-kryptonite-9...


AddKeysToAgent defaults to no. Ptrace might also be disabled, depending on system. I would be more concerned about keyloggers, or any tricks that result in me running something else than the real ssh client (e.g. custom program somewhere in PATH).


This relies on the security of my phone OS, which I trust much less than my desktop's.


I wonder why that is? Apps on the phone are sandboxed by default and there is no way to get root (short of running an exploit).


Lack of updates from OEMs, and a general lack of attention to security on the part of the same, at least on Android. Many devices still haven't received the patch for last month's Broadcom vulnerability, for example.


I have a Nexus 6P, and I'd trust the security on it a million times more than on my Arch-Linux desktop.


If you store things in the secure enclave of the device on iOS, it is likely much more secure.


Great writeup! I think a lot of developers would do well to understand both the 'right' way to respond to this sort of event, and the tools you need in order to do so. Most important being detailed logging and processes for re-keying everything.

I've participated in, and run, exercises where such damage is inflicted on purpose to surface gaps in the response processes and to fix them. I was inspired by the Google DiRT (disaster recovery) and Netflix Chaos Monkey exercises. Both of these create not simply review processes but simulation by action, or actually doing the damage to see the process work. Setting up your systems so that you can do that is a really powerful tool.


That actually goes a step further than Chaos Monkey. I wonder how many organizations would survive that approach if it were intense enough from day #1. Better to ramp that up carefully and give people room to breathe and fix things.


heh, new idea: GitHub Monkey — randomly makes your private repos public


And this is why ssh keys need to be encrypted - it's a good 2nd factor that will prevent access to all your important stuff if your laptop is stolen/compromised.

ssh-keygen -p -f keyfile


- Do not install unsigned software

- Do not install personal software on your work computer


It's not particularly hard to add malware to an already compiled binary, without access to the source code, is it?


You are correct, but if you actually have the source and can compile a binary from that, it is much easier to evade detection. As you might imagine, the gnarly things you have to do to add malware to existing software often trigger detection mechanisms.


"There’s no indication any customer information was obtained by the attacker. Furthermore, there’s no indication Panic Sync data was accessed."

Read: the attacker could have accessed all that data but didn't send us an e-mail telling us that he did.


It wasn't their production environment that was compromised; it was their source code repository.


> without stopping to wonder why HandBrake would need admin privileges, or why it would suddenly need them when it hadn’t before

Seems like it's completely random whether an app needs admin or not. Blender3d? No admin. Unity3d? Admin. Etc.


Arbitrary, rather than random. Most of the time, it's entirely up to the developer. I'm sure the percentage of applications that actually require administrative privileges to perform tasks is in the single digits or lower.


> I'm sure the percentage of applications that actually require administrative privileges to perform tasks is in the single digits or lower.

This is probably true. I'm surprised we don't get after companies for unnecessarily requiring admin with their apps.


No one has time to examine every line of source code in the 3rd party applications that we use. That being said it irks me when people don't at least isolate their sensitive material. There are many solutions available including virtualization and jails to run 3rd party applications with less risk involved.
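For instance, on Linux, something like firejail can give an untrusted app a throwaway home directory and no network access (the app name here is a placeholder):

    firejail --private --net=none some-untrusted-app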


A version control system that allowed separate, safe versioning of the IP-critical core code, with a merge into the build system, would be nice.


Meh, should all be on github anyway. Like.. Handbrake!


Handbrake is on GitHub; people put lots of hours into it, and it can be downloaded, checked, and compiled. Handbrake is used to transcode video, often between proprietary formats, and people often put a lot of hours into the videos it transcodes. But this was a binary Handbrake on a Mac; Macs are based on Unix, and people put a lot of hours into Unix... People put a lot of hours into the company's source code too, and yet it was stolen. The world is a cynical place somehow. Maybe if it all were on GitHub the world would not seem such a cynical place, and people would realize that the value is in what they themselves bring and not in the thing on GitHub.


> And more importantly, the right people at Apple are now standing by to quickly shut down any stolen/malware-infested versions of our apps that we may discover.

The "stolen" part bugs me — even though it would be incredibly shitty to distribute cracked-from-source versions of Panic apps, I hope that Apple wouldn't prevent users from running them. I appreciate the malware protection built into macOS, but this might be an abuse of it.


I read that as more Apple being on hand to remove any unauthorized clones from the App Store. Software piracy has been around for years and they've never used their antimalware system to prevent a non-malicious app from launching just because it's pirated.


Can you expand on your comment? I don't follow your logic. Isn't Apple legally culpable if they knowingly act as a marketplace for stolen goods?


Sorry, I wasn't thinking of the App Store. I read the original text as meaning that they might block cracked versions of Panic apps from running on Macs entirely.


[deleted]


If someone broke into the KFC vault and wrote down the spice recipe used for the chicken, we'd still call that a "stolen recipe". If part of the value of the source code is its secrecy, then its value decreases when it's made public.

Look at an example of one way the word "steal" is used in speech. If I say "Good artists copy; great artists steal", am I saying that great artists break into a building and illegally remove a physical artifact, or am I saying that they copy something for their own benefit? If one can "steal" an idea, then isn't that a "stolen idea"? And if that stolen idea is directly used to create some salable product, then isn't that a "stolen product", in that sense?

edit: The comment I responded to made the claim that source code couldn't be stolen, only copied (similar to the standard argument "it's copyright infringement, not theft", often applied to copied media). There was more, but I don't remember the wording, and I don't want to misrepresent the position.


If part of the value of the source code is its secrecy, then its value decreases when it's made public.

It's not necessarily true that part of the value of source code is its secrecy, though. We'd like to believe that, but it's difficult to come up with evidence to support it. Most instances where source code is leaked result in no damage to the owner, for example.


I concede the point. I don't think I can prove that hidden source is always beneficial for the company, and maybe not even that it ever is (although that weaker version of the claim just needs one counter-example ;-) ).


>It's not necessarily true that part of the value of source code is its secrecy, though.

Pretty sure the same could be said of KFC's secret spices recipe.


I hate to be the person to post this comment, but anyway: We have pretty good evidence that KFC's recipe has been reverse engineered and/or leaked anyway. Doesn't seem to have affected its sales much.

https://en.wikipedia.org/wiki/KFC_Original_Recipe#Recipe


Indeed, never underestimate the power of the brand itself.


"It would be incredibly shitty to try to harm their business like that, but that's no reason to try to prevent it"?


I read it more as "I hope this isn't used as an excuse to lock down execution of arbitrary unsigned binaries".


I don't believe that the role of an OS maintainer includes blocking users from running any software. (It's OK to make malware difficult to run.)


> Within 24 hours of the hack, we were on the phone with two important teams: Apple and the FBI.

FBI, seriously? Calling the cops, over malware, as a cool independent software company?! I mean, sure, fuck malware, but what happened to "fuck the police"? :D


Lesson learned: None.

You use the same machine for development of commercial, closed source software and for video transcoding that is most probably for private use.

Your postmortem can be summarized as "[advertisement]".

I get that real security is too hard for most people. But even a few precautions can make a big difference. In order of effectiveness (least effective first):

* Don't have sensitive data mounted automatically (yes, ubuntu, your encrypted home directory is a joke).

* Don't have sensitive data on the OS drive. Even if you are limited by archaic USB 2, RAM is cheap and so is a memory-backed disk; pushing your closed source onto it won't take more than 30s (rough sketch below).

* Work hard and party hard. But keep that separated. One computer for fun, one for work. The one for work should not even think about talking to external devices until it's sure the environment is friendly.
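For the RAM-disk point above, a macOS sketch (the size is just an example; ram:// counts 512-byte sectors, so 4194304 is ~2 GB):

    # the volume vanishes on eject or reboot
    diskutil erasevolume HFS+ "Scratch" $(hdiutil attach -nomount ram://4194304)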

PS: I do drink my own kool-aid - I always carry 2 laptops that run 4 operating systems. My development and sysop environment is not even capable of playing a movie.

Edit: "too hard for most people" may sound harsh, but it is not meant like this. I teach OPSEC to activists in developing countries, work for a non-profit with real privacy concerns in a first world country and make real money doing audits for rather large companies. When I say "for most people" it should probably have been "in most circumstances".


I think it's not too hard, it's just not convenient. We need to make security the path of least resistance.


Why was he installing Handbrake on a work computer? Maybe he had a business need to transcode videos, in which case no problem, but was he installing Handbrake on a work computer in order to rip DVDs personally? Worse, was he perhaps doing work on a personal computer?

Folks, don't mix your business & personal lives. The benefit is not worth the cost!


>> ...don't mix your business & personal lives...

Hard advice for the co-founder of the company to follow, I'd expect.


> Hard advice for the co-founder of the company to follow, I'd expect.

If my co-founder used his personal machine for work, or his work machine for personal use, we'd have words.

I honestly can't believe this got downvoted to -4. Is it really so insane to say, 'don't mix your personal and business computing'? Is it really so crazy to impose a small amount of discipline which prevents a personal breach from endangering your entire business and all your customers?

edit: seriously, will one of the 8 people who've downvoted these comments post a substantive comment? I honestly don't understand the anger.


I didn't vote you down, but you are victim blaming.

What if it was another piece of software? Who says he used it only for private use?

And please don't ask why you got voted down; it derails the discussion (and is pretty much always yet another downvote magnet).


"Victim blaming" is a phrase designed to shut down discussion. Why are victim's actions always beyond criticism? In this case, the victim fully documented his opsec mistakes, which is often very difficult to admit to, and likely learned from them. And I think we're all better off having read his account of what he did right and what went wrong!


Probably because there are practical reasons for many people to prefer carrying around one laptop instead of two.


Yes, it is a chore to carry around two laptops — I'm really not a fan. But then, I don't carry around my personal laptop much: I do most of my personal computing on my desktop, and my work computing on my work laptop. When I travel, I'll chuck my work laptop in my checked baggage (carrying the battery with me, of course) and carry my personal laptop on with me.

It's a chore, but it prevents the sort of cross-domain spillage we see in this article. I think it's worth some minor inconvenience to me in order to prevent major damage to my company and my customers.


Why do you think that Handbrake is only for ripping DVDs?


I mentioned right there its use for transcoding videos!


Okay, then was your comment at all necessary? Why assume he was using it for personal DVD ripping when you literally provided a legitimate business use in your own comment?


> Why assume he was using it for personal DVD ripping when you literally provided a legitimate business use in your own comment?

Because IMHO that's the most likely reason for a developer to have Handbrake installed. It's not the only reason, as I noted, but I believe it's the most likely one.


No, that's the most likely reason for you to have Handbrake installed. Don't project yourself onto others.

As I already said, you provided a valid business case, at which point the rest of your holier-than-thou comment goes out the window, because it could very well have been that and the OP has no incentive to claim otherwise.


Because that's what OP uses it for...


I don't think malware actually cares why you installed the software it came in...



