I agree. AWS (and Amazon Alexa + Ring) seem to be either dodging or maintaining active silence on that "Lambda" outage that took out us-east-1. Alexa has had at least two full-scale outages this year, and Ring was successfully hacked as well.
Amazon's response: "looks like a Lambda... more later" then nothing, nothing, and "it must be a 3rd party skill."
AWS health page: GREEN.
AWS Post-event summaries: non-existent after 2021.
This is quite troubling when you consider what AWS holds in its coffers, along with the number of customers AWS currently has. No transparency, not even strategic transparency, is a definite red flag.
Wait, do you have more details about this? I see a Vice story[1] about a ransomware crew's claim that it hacked Ring, but it includes this:
> It is not clear what specific data ALPHV may have access to. In a statement, Ring told Motherboard "We currently have no indications that Ring has experienced a ransomware event." But the company added that it is aware of third-party vendor that has experienced a ransomware event, and that Ring is working with that company to learn more. Ring said this vendor does not have access to customer records.
There are similar stories with similar invitations for Ring employees to leak details, but the lack of updates usually indicates that the event isn't as juicy as the gang claimed.
Yea. In the spirit of NDA, I bring you OSINT. There were people on Reddit confirming ongoing issues related to decryption following the initial report by Vice (13 March 2023) [1][2].
Also important to highlight is the Twitter comment "It's believed a third-party supplier without access to customer info was affected. Ring is investigating"[3].
And, why not. While we're at it, I'll go ahead and highlight the concerns called out in an adjacent Reddit post (29 March 2023)[4], along with @Cloudguy's post on Mastodon (24 April 2023)[5][6].
It's not that people don't report the issues. It's that a big giant company uses its power to make those issues "disappear," without ever informing the public or fixing them. Scary stuff.
I'm having the same problem. Videos from 3/16 and more recent give the error UnsuccessfulDecryptionException (except Live View and Live View recordings). Videos from 3/12 and before are fine. (So the problem occurred sometime between 3/12 and 3/16.) Just started working with Ring Support by telephone. I escalated to Advanced Technical Support. They will email me when they have further information. They did credit me $10 towards my next $39.99 annual bill.
Also, Ring refuses to comment on the status of this issue, which makes me believe it was/is more serious than we think. And since the issue is "not resolved," according to them, they refuse to answer questions about compensation. Fine, if they insist: the issue is still ongoing!
CarbonFreeFuture ·4 mo. ago
Seems like I now have the same results as everyone else.
I received an email from Ring support this morning (Tuesday, March 28th), saying that the problem was resolved, for new videos.
I can confirm that new videos starting 3/21 can now be viewed in the Event History. But videos between 3/16 and 3/20 still show the same error ("UnsuccessfulDecryptionException").
Muted_Sorts ·4 mo. ago
> "I think there is a fair bit of security via obscurity/low profile here"
I don't think so, unfortunately. There's a huge amount of posturing, but it costs money to maintain security standards. That's why Amazon took away basic features from Ring owners who don't pay a premium. Even then, it appears there are problems that keep happening (e.g., https://www.reddit.com/r/Ring/comments/11vqnie/ring_app_star...).
When we can't trust Amazon, it becomes a very serious issue. And when we're talking about tech like home security devices and voice assistants, it becomes crucial to critically evaluate the company overseeing them. Given Amazon's proven track record of pushing lies, deceit, manipulation, and gaslighting onto its customers, please second-guess your decision to link your home security to Alexa.
Between the companies I've worked with and the incidents I've been through, I'd say AWS is well-known to be "very slow" at updating their status reports when something goes down.
We even had AWS liaisons tell us that things were definitely down even though the status page was completely green.
GMs/directors of AWS services are judged on how many "dashboard posts" they have (i.e., how many times their service posts to the status page). Incidentally, these same people are the ones in charge of approving posts to the status page for their service.
Guess why this is an issue.
Everyone on the ground at AWS knows this, and most AWS employees hate it.
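If you're stuck in that gap between "everything is down" and a green dashboard, one workaround is to query the AWS Health API directly instead of watching the public status page. A minimal sketch below, assuming configured boto3 credentials and a Business/Enterprise support plan (the Health API requires one); the filter values are just examples.

```python
# Sketch: pull open/upcoming events for us-east-1 straight from the AWS Health
# API instead of waiting on the public status page. Requires a Business or
# Enterprise support plan, and credentials with health:DescribeEvents access.
import boto3

# The Health API is a global endpoint served out of us-east-1.
health = boto3.client("health", region_name="us-east-1")

resp = health.describe_events(
    filter={
        "regions": ["us-east-1"],
        "eventStatusCodes": ["open", "upcoming"],
    }
)

for event in resp.get("events", []):
    print(event["service"], event["eventTypeCode"], event["startTime"])
```

Yes, the Health API itself is hosted in us-east-1. Make of that what you will.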
Operational issue - Amazon CloudFront (Global)
Service: Amazon CloudFront
Severity: Informational
Elevated Error Rates
Jul 18 10:26 AM PDT: Between 9:37 AM and 10:13 AM PDT, we experienced elevated error rates for requests serviced by the CloudFront Origin Shield and Regional Edge Cache in the US-EAST-1 region. The issue has been resolved and the service is operating normally.
FYI, it takes some pretty high-level approval at AWS for someone to make a change to their "status" page, because a post there signals SLA breaches and money on the line. It is not an actual live status page.
> The distribution does not match the certificate for which the HTTPS connection was established with. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
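For what it's worth, when I've hit that error the quickest way to see what's going on is to check which certificate the distribution is actually presenting for your hostname. A minimal sketch using only Python's standard library; "example.yourdomain.com" is a placeholder, not a name from this thread, so swap in the CNAME you're serving.

```python
# Sketch: inspect the certificate a CloudFront-backed hostname actually
# presents, to see whether your CNAME is covered by the CN/SANs.
import socket
import ssl

host = "example.yourdomain.com"  # placeholder hostname

ctx = ssl.create_default_context()
ctx.check_hostname = False  # we want to see the mismatched cert, not fail on it

with socket.create_connection((host, 443), timeout=5) as sock:
    # server_hostname still sends SNI, so CloudFront picks the same cert a browser would get
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

subject = dict(field[0] for field in cert["subject"])
sans = [value for key, value in cert.get("subjectAltName", []) if key == "DNS"]
print("CN:  ", subject.get("commonName"))
print("SANs:", sans)
```

If your hostname isn't in that SAN list, the error message is at least telling the truth about the mismatch.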
Having worked at AWS: us-east-1 is probably the easiest region to deploy in (it's old, it's popular, every AWS service is available there), but might just be the worst region to rely on (it's old, it's popular, and every AWS service is available there—but only most of the time).
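You can actually eyeball the "every service is available there" gap yourself from the endpoint data that ships with botocore. A quick sketch; the counts depend on your installed botocore version, and global services without regional endpoints will skew things a bit.

```python
# Sketch: compare which services botocore's bundled endpoint data lists as
# available in us-east-1 vs. another region. Reads local data only; no API calls.
import boto3

session = boto3.Session()

def services_available_in(region: str) -> set[str]:
    return {
        svc
        for svc in session.get_available_services()
        if region in session.get_available_regions(svc)
    }

use1 = services_available_in("us-east-1")
usw2 = services_available_in("us-west-2")

print(f"us-east-1: {len(use1)} services, us-west-2: {len(usw2)} services")
print("only in us-east-1:", sorted(use1 - usw2))
```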
I vividly remember when AWS Fault Injection Simulator first launched, posting a Drake meme in the #aws-memes Slack channel: top panel "Using AWS Fault Injection Simulator", bottom panel "Deploying in us-east-1".
I will never understand why anyone on our infrastructure team thought for a moment that putting our backup datacenter in us-east-1 was a good idea. And why nobody else tried to get him fired.
That's like buying a backup generator from a guy that wants to meet you in the Walmart parking lot.