Tell HN: CloudFront Is Having an Outage
53 points by klinskyc on July 18, 2023 | 24 comments
Sites hosted via CloudFront (e.g. https://app.circleci.com/) are having an outage and returning 421s and 500s currently.




AWS Status page doesn't show anything yet: https://health.aws.amazon.com/health/status
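
If you want to poll AWS Health programmatically instead of refreshing that page, here's a rough sketch (assumptions on my part: boto3 installed with credentials configured, and as far as I know the Health API needs a Business or Enterprise support plan and is served out of us-east-1):

    # Hedged sketch: ask the AWS Health API for open CloudFront events.
    # Assumes a Business/Enterprise support plan; the Health API endpoint lives in us-east-1.
    import boto3

    health = boto3.client("health", region_name="us-east-1")

    resp = health.describe_events(
        filter={
            "services": ["CLOUDFRONT"],
            "eventStatusCodes": ["open", "upcoming"],
        }
    )

    for event in resp.get("events", []):
        print(event["arn"], event.get("statusCode"), event.get("startTime"))

If that prints nothing while your distributions are throwing errors, it tells you about as much as the green dashboard does.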


Short of something within a few hundred feet of Bezos's desk actually being on fire, AWS's health page never seems to report issues.


Bezos is no longer the CEO. It's Andy Jassy.

I agree. AWS (and Amazon Alexa + Ring) seem to be skirting and/or choosing active silence on that "Lambda" outage that took out us-east-1. Alexa has had at least two full-scale outages this year, and Ring was successfully hacked as well.

Amazon's response: "looks like a Lambda... more later" then nothing, nothing, and "it must be a 3rd party skill."

AWS health page: GREEN.

AWS Post-event summaries: non-existent after 2021.

This is quite troubling when you consider what AWS holds in its coffers, along with the number of customers AWS currently has. Having no transparency, not even strategic transparency, is a definite red flag.


> Ring was successfully hacked as well.

Wait, do you have more details about this? I see a Vice story[1] about a ransomware crew's claim that it hacked Ring, but it includes this:

> It is not clear what specific data ALPHV may have access to. In a statement, Ring told Motherboard "We currently have no indications that Ring has experienced a ransomware event." But the company added that it is aware of a third-party vendor that has experienced a ransomware event, and that Ring is working with that company to learn more. Ring said this vendor does not have access to customer records.

There are similar stories with similar invitations for Ring employees to leak details, but the lack of updates usually indicates that the event isn't as juicy as the gang claimed.

[1]: https://www.vice.com/en/article/qjvd9q/ransomware-group-clai...


Yea. In the spirit of NDA, I bring you OSINT. There were people on Reddit confirming ongoing issues related to decryption following the initial report by Vice (13 March 2023) [1][2].

Also important to highlight is the Twitter comment "It's believed a third-party supplier without access to customer info was affected. Ring is investigating"[3].

And, why not. While we're at it, I'll go ahead and highlight the concerns called out in an adjacent Reddit post (29 March 2023)[4], along with @Cloudguy's post on Mastodon (24 April 2023)[5][6].

It's not that people don't report the issues. It's that a big giant company uses its power to make those issues "disappear," without ever informing the public or fixing them. Scary stuff.

[1] https://www.reddit.com/r/Ring/comments/11vrrmy/why_am_i_gett...

[2] https://www.reddit.com/r/Ring/comments/11vqnie/ring_app_star...

CarbonFreeFuture · 4 mo. ago

I'm having the same problem. Videos from 3/16 and more recent give the error UnsuccessfulDecryptionException (except Live View and Live View recordings). Videos from 3/12 and before are fine. (So the problem occurred sometime between 3/12 and 3/16.) Just started working with Ring Support by telephone. I escalated to Advanced Technical Support. They will email me when they have further information. They did credit me $10 towards my next $39.99 annual bill.

ngrigoriev OP · 4 mo. ago

https://community.ring.com/t/decryption-error-with-end-to-en...

Also, Ring refuses to comment on the status of this issue, which makes me believe it was/is more serious than we think. And since the issue is "not resolved," according to them, they refuse to discuss compensation. OK, if they insist, the issue is still ongoing!

CarbonFreeFuture · 4 mo. ago

Seems like I now have the same results as everyone else.

I received an email from Ring support this morning (Tuesday, March 28th) saying that the problem was resolved for new videos.

I can confirm that new videos starting 3/21 can now be viewed in the Event History. But videos between 3/16 and 3/20 still show the same error ("UnsuccessfulDecryptionException").

[3] https://twitter.com/TheRegister/status/1635506291232894976

[4] https://www.reddit.com/r/alexa/comments/125oiud/comment/je80...

Muted_Sorts · 4 mo. ago

> "I think there is a fair bit of security via obscurity/low profile here"

I don't think so, unfortunately. There's a huge amount of posturing, but it costs money to maintain security standards. This is why Amazon took away basic features for Ring owners who don't pay a premium. Even then, it appears there are problems that keep happening (e.g., https://www.reddit.com/r/Ring/comments/11vqnie/ring_app_star...).

Please also keep in mind that Amazon lies about issues with ease (e.g., https://www.theverge.com/2021/3/25/22350337/amazon-peeing-in...). This poses a serious risk to anyone using anything made and/or maintained by Amazon.

When we can't trust Amazon, it becomes a very serious issue. And when we're talking about tech like home security devices and voice assistants, it becomes crucial to critically evaluate the company overseeing them. Given Amazon's proven track record of lies, deceit, manipulation, and gaslighting of its customers, please second-guess your decision to link your home security to Alexa.

[5] https://web.archive.org/web/20230518073720/https://sackheads...

[6] https://practical-tech.com/2023/06/13/how-an-amazon-fire-kid...


Between the companies I've worked with and the incidents I've been through, I'd say AWS is well-known to be "very slow" at updating their status reports when something goes down.

Even had AWS liaisons tell us that things were definitely down even though the status page was completely green.


GMs/directors of AWS services are judged on how many "dashboard posts" they have (i.e., how many times their service posts to the status page). Incidentally, these same people are the ones in charge of approving posts to the status page for their service.

Guess why this is an issue.

Everyone on the ground at AWS knows this, and most AWS employees hate it.

Source: Former AWS engineer for >5 years.


Got posted a minute ago.

Operational issue - Amazon CloudFront (Global)

Service: Amazon CloudFront
Severity: Informational

Elevated Error Rates

Jul 18 10:26 AM PDT: Between 9:37 AM and 10:13 AM PDT, we experienced elevated error rates for requests serviced by the CloudFront Origin Shield and Regional Edge Cache in the US-EAST-1 region. The issue has been resolved and the service is operating normally.


"elevated error rates" is now my favourite euphemism for a downtime


FYI, it takes some pretty high-level approval at AWS for someone to make a change to that "status" page (it signals SLA breaches and money stuff). It's not an actual live status page.


All our HTTPS (ACM) sites are returning:

> 421 ERROR

> The request could not be satisfied.

> The distribution does not match the certificate for which the HTTPS connection was established with. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
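
Since the error text points at a mismatch between the distribution and the certificate the TLS connection was established with, you can at least check which certificate the edge is presenting for your hostname. A minimal sketch with the Python stdlib (app.circleci.com is just the example hostname from this thread; swap in your own domain):

    # Hedged sketch: show the certificate a CloudFront edge presents for a given
    # SNI hostname, so you can compare it with the ACM cert on the distribution.
    import socket
    import ssl

    host = "app.circleci.com"  # placeholder: use one of your failing hostnames

    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("subject:", cert.get("subject"))
                print("SANs:", cert.get("subjectAltName"))
    except ssl.SSLCertVerificationError as exc:
        # An outright hostname/cert mismatch surfaces here rather than as a 421.
        print("certificate verification failed:", exc)

Comparing the SANs against the cert attached to the distribution at least tells you whether the edge is serving what you expect.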


Lots of users seem to be impacted for a short time period: https://isdown.app/integrations/aws/amazon-cloudfront


Well, that explains wtf is happening. You guys seeing this on us-east-1 or across?


As the saying goes: Friends don't let friends rely on us-east-1. (It was the very first region, and it shows)


Having worked at AWS: us-east-1 is probably the easiest region to deploy in (it's old, it's popular, every AWS service is available there), but might just be the worst region to rely on (it's old, it's popular, and every AWS service is available there—but only most of the time).

I vividly remember when AWS Fault Injection Simulator first launched, posting a Drake meme in the #aws-memes Slack channel with the first panel being "Using AWS Fault Injection Simulator" and the second "Deploying in us-east-1".


I will never understand why anyone on our infrastructure team thought for a moment that putting our backup datacenter in us-east-1 was a good idea. Or why nobody else tried to get him fired.

That's like buying a backup generator from a guy that wants to meet you in the Walmart parking lot.


Whoa them fightin words. I bought a generator at a Walmart parking lot and it was fine. I only got stabbed twice.

Generator didn’t actually work though


Seeing it across 3 regions at minimum


saw it across us-east-1 mainly


us-east-1 only for us


My employer’s sites are hosted via CloudFront and seem to be fine. The underlying buckets are in Dublin, in case that makes any difference.


Seeing 421 on our AWS Amplify splash page as well


It looks like they are back up now


Seems stable now



