How the modern containerization trend is exploited by attackers (kromtech.com)
200 points by dsr12 on June 13, 2018 | 99 comments



If you didn't build the container, then you are putting all your trust, and your company's private bits, in the hands of Joe Random.

Please do not blame the technology. This problem existed long before containers and will exist long after they are gone. This is people trusting unknown anonymous third parties to build things that will run in their datacenter.


Isn't that always true though? This is just one additional layer of trust. Sure, there are reasonable layers we should care about, but you're rarely, if ever, going to be doing everything and trusting everything.

I.e.:

> If you didn't build the container..

> If you didn't build the package on Debian..

> If you didn't verify the source code when compiling by hand..

etc.


I think it is about legal culpability. If I am running CentOS in my datacenter, there is some degree of confidence that the packages were rebuilt by the CentOS team, a few members of which are Red Hat employees. Red Hat has an obligation to make a reasonable effort to keep malicious code out of their supported software.

If there is a commercially supported version of Debian, then the same would apply.

If I pull in RPMs, containers, or VM images built by Joe Random, I am legally on my own. My customers will be furious when they find out I have done this, and the court will say that I did not make a reasonable effort to protect my customers.


> Red Hat has an obligation to make a reasonable effort to keep malicious code out of their supported software.

No. Read the license terms. For every Linux distro, there is a clear statement that the software is provided as is and that they are in no way responsible for whatever happens with it. Very standard. So there is absolutely no legal standing and therefore no obligation.


There’s a social and economic understanding that Red Hat works hard to keep malicious code out of their distributions.

That doesn’t exist with containers pulled from joevandyk/postgresql.


That is specific to the Linux code itself, which is taken from upstream. Linux distro vendors provide a contractual relationship with their customer base that provides SLAs around patching security defects and bugs. They also enforce policies around the uptake of new third-party code. They also do extensive patching of all of their packages to mitigate the vulnerabilities that upstream providers do not patch. There is much more to this; it would take a blog entry to explain.


> That is specific to the Linux code itself, which is taken from upstream.

No, I don't believe that's the case.

> Linux distro vendors provide a contractual relationship with their customer base that provides SLAs around patching security defects and bugs.

I don't think many - if any - GNU/Linux distro vendors provide anything like that.

RHEL may - it's been a while since I've read a RH contract - but most distributions, as the parent noted, make it quite clear in the licence agreement that everything is provided as is and is sold without any warranty or assurance of suitability, etc.

> They also enforce policies around the uptake of new third-party code.

Is 'third party code' here the same as 'upstream' in the first quote? 99% of most distributions' code is 'third party' or 'upstream' in the sense that it comes from people other than the distribution maintainers.

> They also do extensive patching of all of their packages to mitigate the vulnerabilities that upstream providers do not patch.

I know Debian does this, and I trust them with the process. I'm not a big fan of RedHat, but I also know they have an excellent reputation on this front.

It doesn't change the fact that licences clearly put responsibility on the user not the distributor.


For commercial software, there may be some level of legal liability, but it would depend entirely on your contract, and I'd imagine if you look at most standard contracts, they disclaim all such liability.

For CentOS (or any other open source software) you may have that confidence but you have no contract :)

Now, do Red Hat/Debian package maintainers do detailed security reviews on all the software they distribute? I don't know, but the odds say it's not likely, as they don't employ (to the best of my knowledge) the number of code review professionals that would be required to do that.

And of course as soon as you venture off into other repos (npm, rubygems, CPAN, nuget, etc.) you're entirely on your own.


I agree, I am riding on the backs of people using RHEL. There is a direct contractual relationship between those companies and Red Hat. In my case, I am relying on the other companies having that relationship, and I can still say some effort is being made to validate the supported packages. While I cannot sue anyone, I can say that I am using an OS that has some degree of code validation and feature-set stability.

For sure, things like npm, gems, cpan, pear, pip, etc. are basically back to square one with Joe Random. Each of those things can be pulled into a code repo, built internally, and turned into RPM packages. I agree that the effort to code-diff review these things is quite large. It is likely still a smaller effort than rewriting all of this code from scratch.


As to code review effort being lower than writing: sure, in most cases (although finding well-hidden backdoors is likely harder than writing software).

That said, even at less effort, it seems extremely unlikely that anyone is doing actual code reviews on the software being packaged up into all the Linux repos out there. Even automated static analysis over that volume of code (as error-ridden as that would be) just isn't practical.

That's not to say they're not more trusted than npm et al., as the developer can't push directly to the repo, so an attacker's life is more complex.

Although, that said, it does introduce a new possibility: that of the malicious/compromised package maintainer...


> although finding well-hidden backdoors is likely harder than writing software

Very likely:

https://en.wikipedia.org/wiki/Underhanded_C_Contest


Are you basing your assertions on a discussion with an attorney, or better yet, a written legal opinion, or is this your interpretation as a lay person?

To date, I have yet to see a software contract that absorbs any legal culpability. Not even high 3-comma annual renewals. The way culpability is usually handled is a clause demanding information security and/or data privacy insurance in client-vendor Master Services Agreements. If your experience with reading business contracts is different, and you've seen actual absorption of legal risk, then please tell some war stories of the contexts, as I'm always up for learning how others do business.


I am not a lawyer and this is not legal advice.

I am referring to what happens after you have been breached: your data has been lost, and your CEO and CFO are standing before the judge. The judge will make some punitive decisions based on what effort you can show you made to protect your customers.

If your devs are grabbing every shiny gidget widget from Joe Random and you did not make attempts, as a company, to protect your investors and uphold your fiduciary responsibilities, then the hammer will come down much harder.


> ...your CEO and CFO are standing before the judge.

This doesn't happen often; you more commonly see lower-level line staff or managers standing in court, because the high-level executives simply point to the written company policies their legal team advised be put in place that forbid such wanton behavior. Now indictment, to say nothing of prosecution, has to clear the far higher bar of showing that such high-level executives deliberately, consciously structured incentives such that meeting those policies was outright impossible.

Issuing a policy that demands any such conflicts be raised immediately to management neatly short-circuits such prosecution strategies. Unless the executives are particularly careless or brazen, it is worth more to the prosecution to go after lower-level staff.

I believe that it helps if legal precedent can be set such that management is held more accountable for behavioral structuring through incentives and selective policy enforcement.


> to be doing everything and trusting everything

Also, it's sort of weird how often people conflate these two things. There's this idea that home-rolling is naturally safer, and it's simply not true.

Everyone doing anything with software is relying on layers someone else built, and we should keep doing that. Layers I handle myself are layers that I know aren't malicious, but that doesn't mean they're secure. The risk of malice doesn't just trade against convenience, but against the risk of error. Using somebody else's programming language, compiler, crypto, and so on doesn't just save time, it avoids the inevitable disasters of having amateurs do those things.

We live in a world where top secret documents are regularly leaked by people accidentally making S3 buckets public. I'm not at all convinced that vulnerable containers are a bigger risk than what the same people would have put up without containers.


There's this idea that as long as everything is not rigorously proven secure, we might as well grab binaries off file-sharing sites and run them in production.

This argument tires me. Every time some smug developer asks me if I have personally vetted all of gcc, with the implicit understanding that if I haven't, we might as well run some pseudonymous binaries off of Docker Hub, I extend the same offer to them: get a piece of malware inside gcc and I will gladly donate a month's pay to a charity of their choice.

Sometimes I have to follow the argument through with the question of whether they will do the same if I get malware onto Docker Hub (or npm or whatever), but the discussion is mostly over by then. Suffice it to say, so far nobody has taken me up on it.

The point is that there's a world of difference between some random guy on GitHub and institutions such as Red Hat, Debian, or the Linux kernel itself. Popular packages with well-functioning maintainers on Debian will be absolutely fine, but you probably shouldn't run some really obscure package just because some "helpful" guy on Stack Overflow pointed to it, and you certainly shouldn't base your production on some unheard-of distribution just because the new hire absolutely swears by it.


Right. All-or-nothing thinking is the bane of analysis, and philosophy in general.


The difference is that Docker has centered their momentum on the transclusion of untrusted/unverified images. It's true that executing random untrusted code has been a major problem since people got internet connections (although most HN denizens like to fancy themselves as too smart for that, so this story is undoubtedly uncomfortable for them), but when Docker makes it a core part of the value proposition, it's worth calling out.


Very true, but doesn't that make this basically a cost-benefit calculation, with risk-of-malicious-code vs. risk-of-reinventing-the-wheel (badly)? I assume the critics would say that container tooling makes it easier for reckless amateurs to put things up when they otherwise might not have managed to deploy at all...


> basically a cost-benefit calculation

Absolutely. There are some famously settled issues - don't write your own crypto, you'll screw it up, do write your own user tracking, third parties will inevitably farm data - but generally there's a decision to be made. And it's not the same answer for everybody; there's a reason Google and Facebook do in-house authentication services, which everyone else borrows.

I've seen the "containers let clueless people go live" claim before, but I'm not really convinced. Containerization offers most of its returns when we talk about scaling, rapid deployment, and multiple applications. If you just want to put up an insecure REST service with no authentication, it seems at least as easy to coax some .NET or Java servlet horror online without any containerization at all.

The examples in the article of containerized learning environments are a bit of a different case, granted. A space specifically designed to let strangers spin up new instances and run untrusted code would usually take a fair bit of thought, but containers have made it much easier to do without any foresight.


I don’t think either offers many assurances unless there’s good test coverage, mocking, stubbing, fuzzing, property testing, and so on to ensure the code is solid. Trust but verify (automatically).


Reproducible / Deterministic builds are a more realistic solution to this trust question.

https://www.gnu.org/software/guix/blog/2018/tarballs-the-ult...


It is. One procedural solution is increased rigor, i.e., formal methods (a la seL4) and unit/integration testing to prove isolation properties. I still don’t understand how Linux or Docker can get a free pass and be so popular and complex while lacking basic assurances of automated, repeatable, demonstrable quality.


It comes down to history, long term track record of reliability, and responsibility. The number of times that actual malicious software has made it into an official Debian apt repository is very, very low. The people who build the .deb packages and make them available (with appropriate GPG keys and hashes) keep things secure and are generally very trustworthy individuals.

https://wiki.debian.org/SecureApt

At a certain point it does come down to trust. From the position of a potential attacker, you can't just upload a randomly built thing to the official CentOS or Debian repositories and expect it to be made available to the rubes.

Very different than people downloading and trusting random Docker images.


> Very different than people downloading and trusting random Docker images.

I'd say there is a difference between using official Docker images (from the software vendor) and images from a random person.

Official images exist for most popular packages, under a separate namespace, and usually have published checksums, etc.
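
One thing this enables, for instance, is pinning an image to its content-addressed digest instead of a mutable tag; a minimal sketch with the stock CLI (the digest below is a placeholder, not a real one):

  # Pull by immutable digest rather than by tag (digest is a placeholder):
  docker pull postgres@sha256:<digest-from-the-official-listing>
  # Show the digests of the images you actually have locally:
  docker images --digests postgres

A tag can be silently repointed to new content; a digest cannot.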


It's true that this problem is not unique to container tech -- it's a problem that every packaging ecosystem faces. Who polices what packages are available? And how many eyes are on these packages, to make sure that they are safe?

It would be pretty difficult to sneak a covert Monero miner into an officially approved mainline Debian package.

However there is a sense in which this is a problem with container tech, in that there is no container equivalent of `deb http://deb.debian.org/debian stretch main` (yet!).

This is a statement about the maturity of the ecosystem, rather than a criticism of the technology itself, as you say. But I think that it's meaningful to say that this is a problem that containers currently have, that Debian (or other Linux distro) packages don't face to the same extent.


> However there is a sense in which this is a problem with container tech, in that there is no container equivalent of `deb http://deb.debian.org/debian stretch main` (yet!).

That's what the Docker standard library is



> This is people trusting unknown anonymous third parties to build things that will run in their datacenter.

Red Hat, Canonical, and Pivotal (I work for Pivotal) all provide this kind of assurance, and it's a lot of our bread-and-butter income to do so.

In particular, Cloud Foundry uses buildpacks, providing runtime binaries with a chain of custody from source code to the buildpack uploaded to the installation.

Buildpacks make this overall problem a lot easier, actually. You don't need to track the underlying base OS image or manage the runtime binaries. The platform does it for you. But you will still be responsible for tracking more immediate dependencies (NPM, Maven, etc.), which is a poorly-solved problem currently.


Exactly - a similar issue exists with nearly all package manager tools: you’re putting a lot of trust in a lot of people you don’t know.


docker run some/container is basically equivalent to curling a shell script and piping it straight into bash, isn't it?


Not really. Docker (and similar containerization technologies) provides a restricted environment for the downloaded code to execute in (by default; it is possible for users to remove the restrictions).

Assuming a default Docker engine install, and no options passed as part of the run, an attacker could most likely DoS the box, and may be able to intercept traffic on the Docker bridge network (although that's not a trivial thing to pull off), but they're unlikely (absent an exploitable Linux kernel flaw) to be able to easily compromise the underlying host.
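
A rough sketch of where that default line sits (these are standard Docker flags; alpine is just an arbitrary small image):

  # Default run: seccomp filter, capability whitelist, private namespaces
  docker run --rm alpine id
  # Each of these removes part of that confinement - avoid with untrusted images:
  docker run --rm --privileged alpine id                       # all capabilities and devices
  docker run --rm --cap-add=SYS_ADMIN alpine id                # extra kernel capability
  docker run --rm --security-opt seccomp=unconfined alpine id  # no syscall filter
  docker run --rm -v /var/run/docker.sock:/var/run/docker.sock alpine id  # daemon access = root-equivalent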


Couldn't have said it better myself.


The blame with npm, DockerHub, etc. basically boils down to "it makes it too easy to share and run software".


There used to be this thing called software engineering. You are presented with a problem, and you or a team come up with possible solutions, carefully choosing the right tools and components for the job.

Now we are entering the everything-as-a-service era, which includes software engineering. Instead of designing a solution, you download someone else's, duct-tape on a few packages, tweak variables, and publish it.

You can also blame the breakneck pace demanded by today's tech sector, where everything needed to be deployed yesterday in order to beat the other guy to the Silicon Valley millionaire finish line.

You can say it's a mix of impatience and laziness.


This effect isn't new, even in ye olde days of waterfall. Back then it was called rapid prototyping and first system syndrome.

The rapid just got rapider today with easy packages, frameworks and containers. The prototypes just became "web scale" and are run way beyond their intended lifespan, just like any first system is.


> There used to be this thing called software engineering.

And computers were once accommodated with the space they deserved, in large rooms.

Now we are entering the everything-as-miniature era. Want more RAM? Throw out the whole thing, because it can't be fiddled with.

You could say it's a mixture of impatience and abject cheapness.


> Now we are entering the everything-as-miniature era. Want more RAM? Throw out the whole thing, because it can't be fiddled with.

Development of some technologies seems to happen on some weird curve like this:

  end-user control / flexibility / repairability
  ^
  |                 ........
  |               ..        ..
  |             ..            ..
  |          ...                ...
  |      ....                      .......
  | .....                                 ....
  +-------------------------------------------->
                 power / money-making capability
(Not really sure about the Y axis label; I'm having trouble expressing the quality I'm thinking of.)

Things start as prototypes - hard to sell, hard to use, hard to tweak on user end. Over time, they become more flexible - think of e.g. PCs with interchangeable components. This is the golden era for end-users. They can fix stuff themselves, they can replace or upgrade components. The technology reaches maturity, and the only way from there is downhill - as businesses find new ways to fuck users over, both purposefully and incidentally. Make things smaller. More integrated. Sealed. Protected. The end result is the ultimate money-maker - a black box you lease and can only use as long as you're paying for the subscription.


Hardware may be cheap or expensive; it does not make a lot of difference.

The key is whether you see your data, or your customers' data, as a precious thing that needs care and protection.

If you do, you make the best effort to select the right software, understand how it works, deploy it correctly and securely, etc. If you don't, you just slap together something that sort of works from the easiest-to-obtain parts, and concentrate on other things.

A lot of people don't care too much about their own personal data; some of them are developers, product managers, even CEOs.


I'm not sure what distinction you're trying to make? Build it yourself vs. reuse? Old-fashioned file-sharing reuse vs. modern network-based package management? It sounds like you're mocking anything that isn't homegrown or acquired via floppy disk, but I want to give you the benefit of the doubt...


I would suggest that depends on how they present the community artifacts. If they provide a good deal of obvious disclaimers that the artifacts are built by Joe Random, "use at your own peril", etc., then they are probably covered. If not, then they are guilty of conditioning people into bad behavior.

For an example of how this existed long before Linux containers:

There are third-party RPM and APT package repositories that have existed for a very long time. The packages are not vetted by a company, and there is no legal culpability for anything nefarious contained within. People use these packages at their own peril, and it is assumed they have mitigating controls to reduce risk.

GitHub is community-contributed code, and there is no enforceable legal contract between the developer and the consumer. The same thing applies: use at your own peril, and have mitigating controls (code diff reviews, static analysis, legal review, etc.). This is especially true for all those projects under the MIT license.


The headline is misleading. It's not "the modern containerization trend" that is the root cause of this. I expected to read something about container breakout or the difference between container confinement and VM confinement.

Instead, it turns out that it's the "store model" (Docker Hub in this case) and malware injection into that store that the article is really talking about.

The article also seems to talk about misconfigured systems permitting some level of admin access to everyone. That's not really a new "container" class of vulnerability though; it's the equivalent of leaving root ssh open with a weak password or similar.


Even that isn't quite it - this is not a case of people accidentally downloading and running malicious containers.

People are leaving kubernetes/docker/whatever open to the world, and attackers are instructing their servers to download and run these containers.

The complaint is that Docker Hub is hosting the attack code for the attackers. They could have hosted it on their own custom registry server if they wanted (but why bother if you can just host it on Docker Hub?). In the same vein, they could use GitHub to host their attack code. Or they could put it in an S3 bucket...


Whilst this article has some decent points, I feel it overblows/misunderstands others.

It's fair to say that downloading and running images from Docker hub without establishing trust is a dangerous practice.

Similar in danger to using npm, rubygems, nuget, Maven Central, etc., in that there is only limited curation of content.

That attackers have "malicious" images on Docker Hub isn't that relevant unless they can get people to execute them. If they were typo-squatting or otherwise trying to trick users into running those images, that would be more relevant. Instead, what seems to be described is the use of Docker images as part of attacks on other systems (e.g. Kubernetes installs with poor security).

The bit about running a malicious container instantly leading to root on the host is just wrong. With a standard Docker install and no customization there are some risks, but unless you do something like run --privileged, or mount the Docker socket inside the container, you're not guaranteed to be able to get root on the host.

(BTW anyone who reckons this is trivial should give contained.af a look)


disclaimer: Security Engineer at Docker

It is VERY hard to do runtime detection of mining apps, for two reasons:

1) It's mostly CPU-intensive work, and only if you know upfront the average amount of compute power needed by your application will you be able to make a policy decision on which image to stop and how to adjust cgroups resources. If you don't, you'll have to build a reference profile of a trusted image anyway to be sure of the expected behavior.

2) There is no other "malicious" activity that might be reported by runtime security tools (mining generally doesn't trigger anything blocked by your seccomp/LSM/filesystem-integrity profiles).

------- How to protect against this -------

The best protection is at the build-chain level. There are tools out there to "bless" and/or verify an image's content/creator. Notary and Docker Trust (a higher-level abstraction based on Notary inside Docker) are two tools that allow you to do:

  - key management

  - signer management

  - trust management
over Docker images.

It is crucial for people out there to make sure they only deploy trusted images and to make decisions on what to run (CI or prod) based on the signature integrity of trusted images.
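
A minimal sketch of what that looks like from the client side (DOCKER_CONTENT_TRUST is the standard switch; myorg/myimage is a hypothetical repository):

  # With content trust enabled, the client only accepts signed tags:
  export DOCKER_CONTENT_TRUST=1
  docker pull alpine:3.7          # succeeds only if the tag has valid signatures
  docker push myorg/myimage:1.0   # hypothetical repo; signs the tag on push,
                                  # prompting for Notary key passphrases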

For a quick tutorial on Docker Trust and Notary, check this out: https://github.com/dockersamples/dcus-2018-hol/tree/master/s...

Stay safe and do not run unsigned/untrusted images!


Isn’t the real problem mentioned in this article that people are running their Docker daemon unauthenticated on public endpoints? That’s not the default behavior, right? So people have actually gone out of their way to make themselves insecure.

Look at the names of the containers in the article. Nobody is pulling these themselves. The problem is attackers compromising docker hosts and pulling arbitrary containers.

What safeguards does docker provide against exposing the daemon publicly, accidentally or otherwise?


The daemon listens by default on a non-networked unix socket, so if you're listening on the network, you're already outside the default behavior (which is totally normal, but it means you've presumably read the instructions/docs on how to do so, and our doc page on this matter also includes security guidelines to enforce TLS verification/whitelisting daemon-side).

There is currently no "superduper-safe-mode" that enforces `--tlsverify` at the daemon level to prevent a lack of client verification/whitelisting. This can be discussed, the issue obviously being the UX (that means getting proper certs, specifying them in the config, etc.).
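
For reference, a sketch of that opt-in setup (the cert paths are assumptions; generating the CA, server, and client certs is the UX pain being referred to):

  dockerd \
    --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem \
    -H tcp://0.0.0.0:2376 \
    -H unix:///var/run/docker.sock

With --tlsverify, the daemon rejects any client whose certificate is not signed by the given CA.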


Containers were cool. No config, single deploy, etc.

But now I have to configure many YAML files, launch configs, and build scripts that do god knows what to databases and other config scripts hidden inside the container.

Some projects are really cool and easy to install, while others are just a pile of hacks.


Stateless services are all nice and dandy, and all the marketing you'll see is about stateless services. The trouble starts when you:

- run a stateful service that's not explicitly designed to work well in a distributed fashion.

- have services that need to be started in a precise order.


> By default, docker containers run as root which causes a breakout risk. If your container becomes compromised as root it has root access to the host.

Is this really true, unless you start the container with `--privileged`? Incidentally, I just read a plan for better security defaults to avoid `--privileged` (which is not the default, AFAIK) on LWN: https://lwn.net/Articles/755238/


Yeah, I believe the quote is incorrect (or at least out of context). If an attacker has access to control the Docker daemon, as in the attack the article is talking about, then yes, that is root [0]. But if only a container is exploited, then I believe you need one of [1]:

1) an exploit in the kernel,

2) optimistic configuration that allows host access, or

3) a volume mount that exposes something vulnerable like the host root or docker socket.

The quoted article was talking about running within the container as a different user, so I think, with context, what the article was saying is that _if_ there is a container breakout, it's much worse when running as root within the container.

[0] https://fosterelli.co/privilege-escalation-via-docker [1] https://security.stackexchange.com/questions/152978/is-it-po...


It's about UID mappings between namespaces. When you are UID=0 in namespace X and manage to get out of namespace X, then you are still UID=0 outside X, so you're root.

It's possible to remap UIDs such that root in namespace X has UID=12340, and when root gets out of X, then he's nobody.
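
This is what Docker's user-namespace remapping does; a minimal sketch using the stock daemon flag (also settable in daemon.json):

  # Remap container UIDs/GIDs into a subordinate range (daemon-wide setting):
  dockerd --userns-remap=default

With "default", the daemon creates a dockremap user and maps container UIDs into the subordinate ranges from /etc/subuid and /etc/subgid, so UID 0 inside the container becomes an unprivileged UID on the host.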


It's not quite as straightforward as just UID mapping. Assuming a standard install of Docker, the container processes only have a limited set of capabilities, have an AppArmor/SELinux profile applied, and have a seccomp filter applied as well, which makes it harder to break out to the underlying host.


Maybe this isn’t clear, but something I read a while back on this topic:

Running a container from dockerhub is basically the same as curl piping into bash.


What? No.

Curl piping into bash will trivially steal all of your data at once.

Running a container from dockerhub is much safer, provided you do not give it privileges using --privileged or by bind-mounting system files like the Docker control socket.

If your system is up to date and there are no Docker 0-days active, the worst "docker run --rm -it RANDOM-CONTAINER" can do is use too many resources -- your local secrets would be safe.
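
Even the resource angle can be capped with standard flags; a sketch (keeping RANDOM-CONTAINER as the placeholder):

  # Cap CPU, memory and process count; drop the network; immutable root filesystem:
  docker run --rm -it \
    --cpus=1 --memory=512m --pids-limit=256 \
    --network=none --read-only \
    RANDOM-CONTAINER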


...unless said docker container is running an app server that has direct access to your database.


It is kind of disturbing that apparently a huge number of people installed these Docker containers and did not care to notice that they were using 100% CPU on all available cores, 24x7.
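
For what it's worth, a one-liner with the stock CLI makes such a miner stand out immediately:

  # Snapshot of per-container CPU/memory; a miner pegs ~100% per core:
  docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"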


The fact that so many people are nitpicking the analogy instead of the argument is indicative of how true this is.


Even worse: a simple script can be stored, reviewed, and then executed. Reviewing a whole image is practically impossible.


Reviewing images is relatively straightforward. For anyone using automated builds, you can just review the Dockerfile either on Docker Hub or GitHub.

For non-automated builds, just pull to a local machine and use something like Portainer to have a look around.
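
A sketch of that kind of review with the stock CLI alone (someorg/someimage is a stand-in name):

  docker pull someorg/someimage
  docker history --no-trunc someorg/someimage   # layer-by-layer build commands
  docker inspect someorg/someimage              # env, entrypoint, ports, volumes
  # Browse the filesystem without executing the image's entrypoint:
  docker create --name review someorg/someimage
  docker export review | tar -tvf - | less
  docker rm review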


> For anyone using automated builds, you can just review the Dockerfile either on Docker Hub or GitHub.

And then review what it `FROM`s. And then review the core OS build that relies on.

It's a lot of work. It is doable, but it is a lot of work.


Indeed, there's a hierarchy to follow up, so it can be painful, but then it's no different from a shell script that goes and pulls more code as part of its run.

I just wanted to make the point that I don't think it's impossible :)


No, it isn't. It's like curl bashing in a chroot jail.

(Unless you explicitly expose ports or mount volumes or grant elevated kernel permissions.)

I can't think of a safer way of running someone else's code, can you?


qemu


Yep. Even full virtualization isn't truly sandboxed, but the sandbox is much tighter.

FreeBSD has jails and Solaris has zones, both of which were designed to be safe sandboxes for OS-level virtualization or "containerization" as it's called today. The consensus, as far as I can tell, is that these are pretty safe/strict, at least as far as "provide a safe environment to execute untrusted code" goes.

On Linux, resource control mechanisms like cgroups and namespaces have been co-opted to simulate secure sandboxes, but it's not the same as actually providing them.


FWIW, AWS Fargate -- which uses Docker containers as the unit of virtualization -- is now HIPAA compliant.

I can't speak with authority on Docker security, but that's a data point, from the largest cloud provider in the world.


Sure, and there is nothing wrong with either one in most cases. Salesmen, bloggers, security people, and others like to disagree, but they do it out of bias, and not because they want you to get things done.

Edit: I'd like to be wrong about this. Maybe some brave downvoter could help out here?


Security people certainly "do it out of bias". Most are, rather understandably, biased against having systems they're tasked with managing get pwned from under them.


Piping curl to bash is equivalent to running a remote code execution exploit against yourself. Even if you implicitly trust the endpoint, do you trust that it will remain uncompromised forever? Also, it's especially silly because it's never the best or only way of accomplishing a given task, so it serves only to shoot yourself in the foot.


If a container system were formally-proven to provide all of these:

- hard limits, prioritization, and accounting metrics for all resources, incl.: IO, storage, compute, mem, net, kernel structures

- provable isolation / no side-channel leaks

- SELinux

- Live migration of processes and storage to different hosts, suspend/resume

- Type 4 hypervisor containers for different kernels, OSes, etc. configured and managed seamlessly with the same API

Then and only then can the jumble and complexity of containers running on hypervisors go away and be more like SmartOS, with the ability to run bare-metal without losing the flexible devops capabilities of running T4 hypervisors under everything.


Okay, quick search failed me: what is a type 4 hypervisor?


One interesting point here, that Matt Levine has pointed out, is that previously, upon penetrating a company's network, hackers would do malicious user-harming things like stealing credit card or personal information.

Now, (some) criminals are just running mining software.

As a company, your bottom line might care more about the latter than the former, but as a consumer, this is great news; if hackers lose interest in our data and start stealing and re-selling compute cycles, then the chances of catastrophic identity theft could go down dramatically. Sure, prices for web services might go up a little, but they'll do so across the board.


I feel like the guys in Office Space here, but I'll buy some subscriptions to Vibe if somebody can explain, theoretically, how you can convert a large sum of ill-gotten cryptocurrency into usable money without getting immediately caught.

From the article it sounds like the attackers' wallet ID is hard-coded into the malware. I'm not familiar with monero, but aren't all transactions in cryptocurrency public and permanent?

Wouldn't it be obvious, from following that trail, who is ultimately benefiting from this?


Monero is a privacy coin, i.e., all transactions are obfuscated by default. This makes it infeasible to follow the money trail if the attacker is trying at all.

https://www.monero.how/how-does-monero-work-details-in-plain...


Other commenters are mentioning altcoins specifically designed for privacy, but there are ways to solve this problem with bitcoin too: https://en.wikipedia.org/wiki/Cryptocurrency_tumbler


Monero has the "laundry" part built in.

I also don't think these criminals necessarily convert to fiat to acquire what they need/want.


When you're talking about millions of dollars of bitcoin or other altcoin stolen...well, it's hard to smoke that much crack.


But it is possible to buy some crack and sell it for cash.


You can buy gold with crypto.


Millions of dollars' worth? Or is it some penny-ante exchange like most Bitcoin-to-fiat services? Or will it have strong know-your-customer rules that make it problematic for these actual criminals?


I don't see why not. Transactions worth tens of thousands of dollars are relatively common as I understand it.


> I'm not familiar with monero, but aren't all transactions in cryptocurrency public and permanent?

Not with Monero (and possibly other cryptocurrencies).


That’s what they mean by it not being fungible.


> Wouldn't it be obvious, from following that trail, who is ultimately benefiting from this?

It would not, unless you could trace that address to a person.

There are coin tumbler services that supposedly "clean" the origin of whatever crypto you are using by exchanging it.

There's also Monero, which essentially has this feature baked in.

Cashing out anonymously can be difficult - localbitcoins allows you to basically conduct ad hoc exchanges of crypto with other people.


You can only get caught if there is some link to your identity. A bitcoin ATM is anonymous. Wear a hat and sunglasses if you're worried about local cameras.


Google "cryptocurrency tumbling". There are grey- and black-market services that will take an incoming bitcoin transfer, keep some percentage, commingle it with a large number of other BTC, and attempt to obfuscate the output.


They use Monero because you don't need a GPU to mine it.


I think I'm missing something about this attack: how does one actually get attacked? From the article, it sounds like a combination of bad firewall configuration and an exposed Docker daemon allows the attacker to install any image they want onto your host?

Why are so many commenters here saying things about running untrusted Docker containers, then? From my reading, the people affected didn't go to dockerhub.com/docker123321/maliciousCopyCatImage and run it. Instead, it was injected into their otherwise-safe systems, and they had no idea this ever happened until things started acting weird. Am I missing something?


I also had trouble following this article.

The Tesla vulnerability at the top seems to be simply a misconfigured kubernetes cluster, through which the attackers were able to get at AWS credentials. But then all of the subsequent examples are malicious images hosted on dockerhub. Were malicious images involved in the Tesla exploit in some way? What's the connection?


Here is one thing I have trouble understanding, despite working in cybersecurity.

If I run a VM, I have to harden that VM. If I run a Docker container on top of that, I now have to harden the Docker container as well. This is more work, and a greater “surface area” of attack.

Forgive my ignorance, but do most people run Docker through managed PaaS services now, so that they don't have to worry about the double work of hardening the VM? That's the only way I see it making sense long-term, where the cloud providers manage the physical infrastructure like they do now, as well as the EC2 / IaaS layer.


Most people just assume that Docker is a magic box that solves all of their problems. They build Dockerfiles that depend on a grotesque cascade of hardly-vetted parent images, because then their Dockerfile is "only three lines! Ha! Take that, old guys!"

If people had been asking any of the basic system engineering and security questions, we wouldn't be talking about this, or to be frank, most of the stuff that passes for "DevOps" these days.


Our team uses Docker containers more as a deployment strategy than as a sandbox. In conjunction with an orchestrator like Kubernetes, we get a standard way of deployment, horizontal scaling, update handling, TLS termination, etc. that works the same for every service under our reign. And it's very malleable: You have a certain amount of isolation that makes each individual service easier to handle, but you can still get through to the kernel by giving the container the appropriate privileges.


Take WordPress, for example. If you've never set up a LAMP stack to run WordPress before, you'll probably be running a WordPress container. And if you don't regularly check the log files and monitor system resources, then you're asking for these problems.

I wouldn't run a server with a pirated version of Windows, and I wouldn't run a dodgy container from an untrustworthy source. Yet people do. I wouldn't blame MS for a pirated Windows that came with malware, just like I wouldn't blame containers for this either. It's all about Trust.


Although I understand that reading the sources of all the software we're running would take far too much time to be reasonable, running a quick docker inspect / docker history on all the images we use is, in addition to being interesting, probably a big first layer of protection.

Having a tool do this for us - i.e. a sort of Docker anti-malware that would inspect images and containers for us, without necessarily going through all the security checks the official tooling performs - would also be very handy.
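
Even without a dedicated tool, a crude first pass can be scripted; a sketch (the grep patterns are an arbitrary heuristic, not a real signature set):

  # Flag image layers whose build commands fetch or decode remote payloads:
  for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
    echo "== $img =="
    docker history --no-trunc "$img" | grep -iE 'curl|wget|base64' || true
  done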


There is an entire genre of such tools now. Some names to look up are BlackDuck Hub [0] (commercial) and CoreOS Clair [1] (open source).

At Pivotal we use both -- BlackDuck is built into a number of our pipelines, and Clair is shipped as part of PKS (in the Harbor registry [2] contributed by VMware).

A lot of our customers also use other security scanning tools that have expanded to include container scanning.

[0] https://www.blackducksoftware.com/products/hub

[1] https://github.com/coreos/clair

[2] https://github.com/vmware/harbor


I didn't know them. Thanks for bringing them up!


You can easily avoid this by only using Official Images and Certified Content from the Docker Store: https://store.docker.com/


I would like to get more research of this sort.

Would someone kindly point me towards it?

Thanks.



