Uber investigating breach of its computer systems (nytimes.com)
591 points by arkadiyt on Sept 16, 2022 | 313 comments



I think it's worth repeating: at this point, MFA that is not based on Webauthn (https://webauthn.guide/#about-webauthn) should be considered dangerously insecure. Uber almost certainly enforces MFA for remote access; I strongly suspect we'll end up hearing that it was successfully provided during the authentication step (update: screenshots on Twitter appear to confirm this). As we saw in the case of the 0ktapus campaign, a sufficiently-skilled attacker will simply proxy the MFA calls to the real identity provider in real-time, the user none the wiser.

Webauthn, however, binds the authenticator to the domain and port, and requires https as the scheme. If a user gets phished, they cannot be compromised: the phisher's domain will not match and any Webauthn authentication challenge would fail.
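To make the domain binding concrete, here is a rough sketch of the relying-party check that defeats a real-time phishing proxy. (Illustrative only: `EXPECTED_ORIGIN` and the payloads are made up, and a real deployment should use a vetted WebAuthn library rather than hand-rolling verification.)

```python
import json

# Hypothetical relying-party origin; in practice this comes from server config.
EXPECTED_ORIGIN = "https://sso.corp.example"

def verify_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Server-side checks on the clientDataJSON the browser produced.
    The browser, not the user, fills in the origin field, so an assertion
    relayed through a phishing proxy carries the phisher's origin and
    fails here no matter how convincing the fake login page was."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == EXPECTED_ORIGIN      # phishing fails here
            and data.get("challenge") == expected_challenge)  # replay fails here

# An attacker proxying the login in real time still loses: the victim's
# browser stamps the phishing site's origin into clientDataJSON.
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://sso-corp.evil.example",
                      "challenge": "rAnd0mChallenge"}).encode()
legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://sso.corp.example",
                    "challenge": "rAnd0mChallenge"}).encode()
```

That origin field is what TOTP/push MFA has no equivalent of, which is why the proxy attack works against them.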

So if your workplace is letting you authenticate with SMS codes, push notifications to an app, or 6-digit codes generated by an authenticator app/hardware device, you need to start banging on pots and pans up your reporting chain to get your security team the support they need to make Webauthn + FIDO2 hardware tokens or Webauthn + Mac Touch ID happen.


> you need to start banging on pots and pans up your reporting chain to get your security team

Not sure how Webauthn works, but as long as it can be stored in the cloud, I'm fine.

The industry is moving toward physical 2FA schemes that assume living conditions matching whoever designed them, with no alternatives for people who don't fit that ideal way of doing things. Physical stuff gets lost or stolen. I'm optimizing for the case where I'm traveling around the other side of the world, 6 time zones from home, and my phone / stuff gets lost.

2FA is already unmanageable at this point: "just use your recovery keys" is what people tell you, but that's NOT a viable solution to the problem. Sorry, but my recovery codes are in a safe 10,000 km away from me, I just lost the purse with my phone, or my device broke, or got stolen, or whatever, and I need the damn TOTP code to telework _right now_.


Your bus factor is the problem here, not your 2FA system.

"I absolutely need full access to systems while working on a random device using only my password" means that phishing gives all the keys to the kingdom and that you don't have systems in place to make sure things work when people go on vacation.


> ...or whatever, and need the damn TOTP code to telework _right now_.

Do you? Really?

"Your lack of planning is not my emergency"

Unless you're the founder+owner, I'd expect that tech support at your company wouldn't expedite your access request just because you feel entitled to it.

Will a million dollar sales call fail and/or have to be rescheduled because you didn't have 2FA access? You should accept the responsibility, apologise to whoever it is that you let down, and move on.

Of course, companies should give us all the tools we need to succeed, not cheap out on their budget and then shift the blame onto us. This also means giving you multiple devices and tokens to ensure you have redundancy (if a device fails you have a backup; if you lose a token you have a backup).

Even then, it can happen that right after a trip you realize you left your security token at home and/or misplaced it. The professional thing to do is to communicate it to the company right away once you discover it (so they can arrange to verify your identity and/or ship something to you), not ignore the problem until you're back at work and in urgent need to attend some work meeting.

Travelling across the world, 6 time zones away, is not something you can do with every job. If your company allows it, that's a perk, but you should also treat it with the due care it requires.

Missing a day of work is small stuff compared to the risk the whole company runs by allowing their employees' auth to be phished. If you have to skip a day (or more!) of work on extremely short notice, it might be an unpleasant conversation with your manager, but it's a conversation that you should have nonetheless. (btw, do you have your manager's phone number on your personal phone? If you lose your work devices, it's important to still have a way to reach them.)

Security tokens are cheap, just make sure that you have N+1 (one for each device you need, plus one)


> Will a million dollar sales call fail and/or have to be rescheduled because you didn't have 2FA access? You should accept the responsibility, apologise to whoever it is that you let down, and move on.

Spoken as someone who has clearly never had any tech duties in the financial sector.

You don't understand what time critical means until a dealer's access stops working / computer freezes 10 minutes before market close. ;-)

That could easily cost millions. That could easily lose the company the entire account.

And god help you if the market moves overnight and you were unable to get the trade on ...

A grovelling apology to the client might help avoid a complaint to the regulator, but you're unlikely to keep their business.

So yes, there will always be a genuine need for IT to be able to bypass a user's 2FA, because it's certain that user won't be able to wait until you send them a new Yubikey in the post.

And yes, financial companies are also well aware of phishing / SE and take appropriate steps to ID the user.


The answer there, clearly, is to not have an individual be a potential SPOF. If failure of that kind of support costs millions of dollars, you absolutely need to have the ‘walked in front of a bus’ scenarios worked out.


> The answer there, clearly, is to not have an individual be a potential SPOF. If failure of that kind of support costs millions of dollars, you absolutely need to have the ‘walked in front of a bus’ scenarios worked out.

I'm not going to post details in public, but suffice to say, you are oversimplifying and don't understand the context.

Sticking with my example of dealers, let's just say people like dealers are not employed in great numbers in all but the largest financial organisations. Let's also say that there are certain events and certain times of day when the entire dealing desk is, shall we say, "busy and stressed out". There is little scope for a colleague to step in at those times, because everyone is frantically busy on the phones with their own workload.

In terms of 2FA, therefore, the "walked in front of a bus" scenario is to temporarily bypass 2FA for that dealer (after correct security protocol, which includes, but is not limited to, senior board-level management and compliance being told and approving). Telling the dealer to pass his work to a colleague is just not going to work.

Of course financial organisations have "walked in front of a bus" plans. But they equally have levels of escalation within those plans. Sometimes doing stuff at a lower level with the help of the IT department is more than sufficient.

I'm not going to elaborate further.


> Sticking with my example of dealers, let's just say people like dealers are not employed in great numbers in all but the largest financial organisations. Let's also say that there are certain events and certain times of day when the entire dealing desk is, shall we say, "busy and stressed out". There is little scope for a colleague to step in at those times, because everyone is frantically busy on the phones with their own workload.

That just sounds like optimizing for efficiency over redundancy, which is a trade off you can make, but not one that is required. Financial organizations could hire more dealers so you don’t have “little scope” for others to help out. Or they could staff an IT group that is open 24/7 ready to help these traders instantly.


The options you are weighing against bypassing MFA are:

- hire more dealers ($$$$$$$$$)
- staff an IT group that is open 24/7 ($$$$$$$$$)
- bypass MFA ($)

Not sure if you are seriously suggesting the first two are comparable to the third for a business


That’s what I mean by optimizing for efficiency. They’d rather not spend the money to operate in a way that allows for them to be secure or redundant.

Honestly, if they are going to just skip MFA every time it's a bother, they might as well not use it at all


I see what you mean, appreciate the clarification


Unfortunately, some places' idea of having this problem "worked out" is to react by making the SPOF's life miserable with punishment or firing. And the bus scenario is "covered" by having the scapegoat be dead. Not a good strategy for the business, of course, but it's definitely the reality at some places. Actually having the SPOF scenarios prevented would be a much more mature approach.


This does not even apply to Uber in any way.

There is no job at Uber that could not be done by a colleague who has access. There is no situation where a 10-minute delay at Uber loses you 100 million dollars


> "Your lack of planning is not my emergency"

> Unless you're the founder+owner, I'd expect that tech support at your company wouldn't expedite your access request just because you feel entitled to it.

You think corporate IT support doesn't help out users who've forgotten their credentials?

I can assure you, resetting forgotten passwords is probably one of the most frequent things first-tier IT support does. And sorting it out synchronously while they're on the phone is normal - it's not like you can do it asynchronously when they're locked out of all the async messaging systems.

(Of course, the bypass might take an inconvenient form - like calling you back on the phone number HR has on file for you, or a three-way video call where your boss vouches for you)


I'm sure it can be frustrating: employees who don't seem to be sufficiently diligent, and who then expect IT to drop everything to fix problems that they themselves caused.

Ideally, the IT department is empowered to work proactively on effective infosec, for all of the company's real-world situations.

Then the standard for responsibility of each non-IT employee is only good faith compliance with what IT dept. told them -- not to be an IT expert who can reason about infosec tactics and strategy.


I think the tradeoff between "the entire company is breached" vs "I lost my device while on vacation and I have a tight deadline" is probably best geared to help prevent the former than the latter.

(Webauthn by design requires physical hardware tokens, not cloud storage.)


WebAuthn does not mandate any kind of form factor[1]: external tokens use CTAP over USB/Bluetooth/NFC, while Apple FaceID/TouchID and Windows Hello use proprietary interfaces with the built-in hardware. Blink-based browsers ship with a virtual authenticator for debugging[2], and there are a few more[3].

Apple and Google already announced cloud syncing earlier this year, using "passkey" as a friendlier term for end-users. QR codes already allow for cross-ecosystem non-synced use cases, like using my personal Android phone to log in an account with my work Macbook. https://securitycryptographywhatever.buzzsprout.com/1822302/... is a good listen to catch up on the latest developments.

[1]: https://www.w3.org/TR/webauthn-2/#authenticator-model [2]: https://developer.chrome.com/docs/devtools/webauthn/ [3]: https://github.com/herrjemand/awesome-webauthn#software-auth...


You are correct, and I should have said "Webauthn is designed to rely on something you have" rather than saying "physical tokens," since the latter is confusing and could be taken to imply a form factor.

If you lose the things you have while on vacation, though, it will be inconvenient (which is what the OP seemed to be against, and what I meant to be responding to). I think for a corporate environment that inconvenience is a reasonable tradeoff.


> (Webauthn by design requires physical hardware tokens, not cloud storage.)

That's not true. As an obvious reductio ad absurdum, you could just build a fake USB driver that presents as a security key. But more practically, I'm pretty sure that on iOS the Webauthn secrets are synced cross-device via iCloud.


Not true. Apple Passkeys are cloud-based: they sync through iCloud and are WebAuthn.

Nothing about the spec mandates physical hardware tokens. Software tokens work fine too. https://developer.apple.com/passkeys/


To start with, let's say you have your corp laptop and corp phone (both are access devices and both should have device bound certificates), and your yubikey for webauthn, and your corp photo-badge and your government issued photo identity (all three are authentication factors).

Let's say you are traveling and you lost one or both of your access devices. First immediate step is contact your security hotline and notify them that you lost the device(s). They should immediately remote-wipe/disable said devices.

If you are visiting another branch office of your company, local IT should be able to physically verify you with your government issued ID and your corp badge and issue you a temporary laptop/phone and get you basic access privileges.

If you need high-trust access, it should require more verification steps (your manager has to confirm you are not on vacation and that it is really you who is meeting with the IT shop etc), and more elapsed time (this sucks but it is important to slow things down anytime primary access devices are reissued due to loss/theft).

Obviously, if you are a super critical person and have become a single point of failure, that's bad.


> I just lost the purse with my phone, or my device broke, or got stolen, or whatever, and need the damn TOTP code to telework _right now_.

So, wait for a new authenticator device to be shipped to you. Like, if the work laptop broke, you'd presumably have to wait on a replacement for that, too...


The way I view these types of objections - I know one or two people who have lost their wallet more than a couple of times in their lives ("I need a new ID!"), and the rest of the people I know have NEVER lost their wallet. Even people who have their wallet stolen often recover it later (thieves remove the valuable part and throw the rest away so they aren't caught with it). We probably shouldn't design our security procedures around the very, very small number of people who have lost their wallet a few times as an adult.


"User lost their token/forgot their password/lost access to their e-mail/lost their phone/changed their phone number/changed their mailing address" may only be 0.1% of users - but it's 100% of social engineering attacks :)


Yep, and the GP is saying that you should optimize your procedures to deal with the attacks, not with the honest errors - exactly because the honest errors almost never happen (even when there are thousands of people in the organization).

Anyway, if a complaint about procedures makes exactly as much sense when you replace the cause with "gets sick and spends a day at the hospital", then it's not a valid complaint.


I'm curious what problems physical tokens have that don't have easy workarounds. I have a Yubikey on my keychain with home/vehicle keys that supports USB and NFC (so it works with my phone).

For work, I use that as a backup and leave a slim USB-C key plugged into my laptop all the time (if the laptop gets stolen, they still need the first/password factor). For personal use, I just leave a cheap Feitian plugged into my desktop.

In addition, emergency recovery codes just get copied to a note in my password manager. You don't need to lock recovery codes in a safe--they're only 1 of the two factors. You just need to make sure you don't store them in the same place as the other factor (or, if you do, that place is protected with multiple factors)


Just plug your recovery key into a backup server on a network you have physical access to, and expose the ssh port as a Tor hidden service (optionally with onion client auth if you're paranoid)? Either way, as long as you have backup keys that don't involve biometrics, you can set up your own infra for that as you see fit.

But yeah, the face whatevers really must not be the only option.


I just travelled eight time zones away and (apparently) lost my security key there. It caused pretty much zero issues, I just stopped by the office and picked up a new one to bootstrap off a spare key. Then I revoked the old one. If you don't have another key (you should really get one!) you can call in and they'll figure it out.


How did you drop into the office to get a new key? Is your office eight time zones away from where you live?


Nope, I was traveling on business to a place with another office. This is true of a significant portion of business travel, no?


Perhaps, but what does this have to do with the situation you are responding to, where the person WASN'T doing that?

I have literally never travelled to "another office" in my entire career. But then I've always worked for startups ...


> Not sure how Webauthn works, but as long as it can be stored in the cloud, I'm fine.

Apple is solving for this exact request with Apple Passkeys; it is just WebAuthn under the hood with cloud backup, multi-device, and sharing built in.


Truly secure access to computers is simply not possible under those conditions. A hardware root of trust used to verify your identity is mandatory.


It’s too bad the user experience across devices sucks. The best experience by far is a yubikey nano since it is mostly permanently attached to your laptop. It’s always there and you just quickly tap it. Love it.

Of course that doesn't work with my iPhone. So I guess I need a second NFC yubikey that stays on my key chain in my pocket (which I don't have, since I don't carry keys). So then I have to remember to register both yubikeys. Then every time I have to log in to GitHub or whatever on my phone I have to pull out my keychain (which I don't have) and tap it on my phone.

I wonder when I can just get a virtual yubikey built into my phone. No extra device. My phone is my device. It kind of sounds like what Passkey is but I don’t want to pull out my phone to auth my laptop.

I really loved the idea and convenience trade off of SoftU2F. Too bad it’s dead now.


> I wonder when I can just get a virtual yubikey built into my phone. No extra device. My phone is my device.

Last year, Apple shipped the first version of this: you can enroll your phone (or TouchID-equipped Mac) on sites like GitHub.com and it’ll use the Secure Enclave for WebAuthn secrets. I’ve been doing this since 15.4 came out and it’s great. Prior to that, I used a Yubikey 5 with USB and NFC, which is still handy since that’s where I store TOTP seeds for less secure sites.

Passkeys extend that idea further by allowing you to register once and have it synced rather than having to register every device on every site[1].

That last part is important because AWS has a huge barrier: the number of MFA devices you get is one, which means you either need insecure things like synced TOTP seeds or you have to be comfortable never losing your Yubikey. I have been asking our TAM to prioritize fixing that for years so backups can be a real thing.

1. Oversimplified a little - hear Adam Langley at https://securitycryptographywhatever.buzzsprout.com/1822302/... for the right version


> That last part is important because AWS has a huge barrier: the number of MFA devices you get is one, which means you either need insecure things like synced TOTP seeds or you have to be comfortable never losing your Yubikey. I have been asking our TAM to prioritize fixing that for years so backups can be a real thing.

Actually, can we mark AWS as insecure? Seriously, it was a bother when this was rolled out, but now I'd say that Amazon is insecure. If they can't be bothered with this, there are definitely more security holes intentionally hidden from the general public.


No – the problem here is really the reverse: they limit the number of devices which can gate access to your account, on the theory that you'll handle a breach by contacting them. That's theoretically more secure but slower.

(Non-root MFA can be reset by another admin or root so this is most of a concern for the root account)


> That's theoretically more secure

I fail to see how that can be true. How will they validate that it is you on the phone?


If you haven't dealt with this before, it's not calling support and social-engineering someone into hitting the reset button. The last time I knew someone who had to reset an AWS root account password (broken Yubikey), it required multiple phone calls AWS initiated to the billing & technical contacts (which can only be set by root so an attacker can't easily change them) to confirm intention and then they had to sign a form in front of a notary who checked photo ID.

I classed that as more secure since you can't do it remotely and the in-person portion further increases the difficulty.


You do not have to even talk to AWS to remove the MFA from the root account. You simply need access to the phone number on the account (though there are ways around the phone number, see below) and the email address for the root account.

It's been a little over a year since I've done it, but as I recall this is how it goes. You receive an email with a link that takes you to a site that starts a verification process via the phone. You get a number from the site that you are prompted to enter when they call you on the phone. Once that's done, you can log into the account without the MFA device and then even remove the MFA device entirely.

The email address I believe can only be changed by AWS (and at least the last time this was an issue for me can't ever be reused for a new AWS account).

The phone number can be changed by anyone with aws-portal:ModifyAccount, which probably means someone with admin access. It is NOT restricted to being modified by the root account.

So if you have a working access to an account with that permission and access to the email you can change the phone number to one you have access to and go through the whole process. Meaning if you have the above permission you really only need access to the email.

Link to the documentation for this flow: https://aws.amazon.com/blogs/security/reset-your-aws-root-ac...


Ok, that's not trivial to hack, but it's in no way more secure than accepting a few more backup tokens.

Both email and phone numbers have widely known and exploited vulnerabilities that won't ever be fixed (worse if the phone part is only SMS). Requiring both at the same time is OKish, but not any exemplary security.


For what it's worth the phone portion is a voice call where you have to enter a number with touchtone.


It's possible that even though we are not using GovCloud they had additional precautions enabled for us (this was a few years back). My coworker vividly remembers having to wait for the notary to show up.


Slowing things down is the right approach when resetting/reissuing/rebinding auth devices.


When removing MFA, yes. What I'd like would be changing n=1 to n=2 so you could have a backup against a single failure.


Regarding AWS, it's truly insane that they only support 1 device (breaking from the FIDO2 recommendation) but you can also put AWS behind SSO, and then have 2FA on the SSO.


> it's truly insane that they only support 1 device

I recently got a notification to create a separate AWS account (I don’t use it), and thought might as well enable 2FA. Added the first key. Looked for a way to add the backup key. Was confused, removed 2FA. What is that, lockout Russian roulette?


This predates FIDO2 by at least a decade. SSO works great for everything except the root account, where you really do just need to lock it up tightly and make sure you know how to authenticate to support in a disaster.


Recently my phone broke and it took a day to get a new one. I was so glad to have my 2fa codes somewhere else as well. I'd never want to rely solely on one device for access to everything.

Apple's plan is that I need to own several Apple devices for that; this is a non-starter for me.


Your backup plan currently can be hardware devices (a $20 Yubikey works great) or printed codes. When vendors other than Apple ship passkeys, that should extend to include other devices.


The backup option can be a Yubikey rather than another Apple device.


To add, Chrome currently supports webauthn using your phone via BLE. When you try to enroll/sign in, if you click 'add an android device', that QR code will also work in the iOS camera app and allow you to use iCloud to store & log in with that security key. The only real requirements here are browser support and a desktop with Bluetooth, something not super common on gaming / custom-built PCs until a few years ago.


Apple did something similar: you can login using your phone’s WebAuthn credentials if you’re in BLE range. The main problem is that it’s Safari-only but the passkey spec should allow Firefox to implement it.


TouchID unfortunately does not work with Firefox, making it non-viable for a large rollout.

Yubikeys have the advantage of working with all browsers


Corporate environments usually have no problem mandating a specific browser be used.


Right, that's how we got IE6 :)


How do you recover if you lose access to your device?


Currently: use a hardware key or printed backup codes.

Passkeys: same, or one of the other participating devices.


Backup key on your keychain.


Do you mean a physical key?


I can't speak for them but that's exactly what I do with a Yubikey on my personal keys & work badgecard lanyard. My rationale is that the physical key prevents remote attacks since they can't use the token and anyone who finds/steals my keys won't have the password. Scenarios where the attacker has both aren't on my personal radar — that's the kind of situation where you're looking at things like duress codes.


create a second account or use the root account?


If you have a Linux PC with a TPM, you can use https://github.com/psanford/tpm-fido to create and "plug in" a virtual USB WebAuthn key whose secret is irretrievably stored in the machine's TPM. This effectively asserts that your specific machine is being used to enter a given site. However, it's important to remember it doesn't necessarily verify that *you're* present, or even if *anyone* is present at all, since the presence check is done via a software dialog and can be pwned along with the rest of the system.


>The best experience by far is a yubikey nano since it is mostly permanently attached to your laptop.

Unfortunately ports are increasingly at a premium. You basically need to "mostly permanently" block the use of one of your USB ports for this to work. It's OK with my old MacBook Pro which has a USB port I mostly don't need to use for other purposes but thinner/lighter laptops don't have a lot of ports to spare.


Android has a built-in FIDO2/webauthn authenticator these days (well, built-in to Chrome, and by Android I mean Pixel phones). I'm sure Apple will build something similar as they have the hardware for it.


They called them Passkeys. It's FIDO2 with resident keys only AFAIK though https://developer.apple.com/passkeys/


Here’s Apple’s documentation from 2020: https://webkit.org/blog/11312/meet-face-id-and-touch-id-for-...


> Of course that doesn’t work with my iPhone. So I guess I need a second NFC yubikey that stays on my key chain in my pocket (which I don’t have since I don’t carry keys.). So then I have to remember to register both yubikeys. Then every time I have to login to GitHub or whatever on my phone I have to pull out my keychain (which I don’t have) and tap it on my phone.

That's why I use one of these rather than a yubikey: https://www.ftsafe.com/products/FIDO/NFC . I don't know why yubikey doesn't make a dual-method key when it's such an obviously nicer way to do things.


YubiKey does make a dual key. The problem is that these keys suck to have because you can't leave them in your laptop and carry it around like that, which makes them easy to forget.

I personally was always forgetting my key when I worked in the office. Or, more likely, I'd have the key and forget a USB-A to USB-C adapter (work was cheap and wouldn't give out USB-C keys).

My new job gives out USB-C keys and I have yet to forget mine when I needed it. They just don't work as NFC on a phone.

https://www.yubico.com/product/yubikey-5ci/


Regarding the keychain issue, what works for me is using a wallet with a side pocket, that way the yubikey doesn't fall out... (can't remember the name for such an item; clutch wallet?).


And some sites like AWS management console don’t allow you to register more than one key :/


So, where does it end?

Cybersecurity has been playing those silly games of "increasing security" and some 80% of recommendations were frankly BS

"longer passwords" yes. "password has to have symbols, numbers and be rotated every 3 mo" no

2FA yes, but not SMS, but not OTP because people get phished, blah blah blah

Not to forget the "put everything in a password manager" then you lose or forget your "extra safe random password" and are SOL

Meanwhile there are still incompetent people around that think asking for Mother's Maiden Name should be a security question

So where does it end?


> So where does it end?

It doesn't (can't) ever end because security is a process, not something you achieve and are done.

With ~infinite budget, one could achieve perfect security (but only for a clearly scoped threat model) for an instant, but both the infrastructure and the attackers move on, things constantly change, so it's not perfect anymore. And of course infinite budgets don't exist.


I don’t think it ever truly ends, so long as there are secrets and people who want to uncover them. This is the classic arms race, and subsequently people who can’t keep up are just casualties…


It ends when you have an internet that has consequences. To get proper consequences, you need all users on the network to be identifiable and every device linked to that identity. With ipv6, you can give each person a (few?) static ipv6 address(es). This will basically allow you to determine where and who originated every piece of information on the internet. Next to every comment/photo or other upload, your real name/surname and facial photo needs to show. For each country you travel to, you get an IP address and you are bound to all laws for that country. Your IP address basically becomes your online passport.

Anonymity is a source of massive amounts of nastiness online; there are things that people say/do online that they would never do in real life, as there are social consequences. The same rules can be applied (or even stricter ones) to businesses or any server, that way you know who your attackers are. It should also be easier to block out whole countries from connecting to you, if you want that (not enforced).

Obviously, no children should be allowed online even in a clean / locked down version of the internet. Ideally no ads/marketing should ever target children either.

I have the same impulse as most people that locking the internet down is a bad idea, but it seems like that is the only natural outcome eventually. We either stop abusing the internet and keep some form of freedom of speech / freedom (some countries value this more than others), or eventually the internet might become heavily regulated/controlled, which is the path we are currently on. Companies are already testing the waters with this, and in some cases they are already committed to it (see self-hosting email vs big providers blocking small senders).

The problem of security is just an arms race towards the above; it is a side effect of the fact that misuse of the internet is tolerated / has no real consequences in most countries (that includes leaking data, being breached and having data stolen, or just plain ol' phishing - companies face zero consequences for data breaches).

Another way to think of it, you have a scale: on one side you have ultimate control, ultimate safety, no freedom - on the other side you have less control, less safety, but absolute freedom. The security arms race is just the shifting of balance between the two extremes.


Smart cards have long been mostly phishing proof. WebAuthN is essentially a more convenient interface to the same technology.


Then your phishing involves getting them to install some type of remote access on their machines, or to hand over some information you need.


Don’t worry, we’ll cover this in annual mandatory security training.


I sure hope WebAuthn is easier to implement than that site is making it look like. I know nothing about how it works, but the sample code (Under "Example: Parsing the authenticator data") that requires parsing slices of bytes out of the response and then constructing some object with magic numbers looks really hacky. Maybe it is supposed to be exposed at a low level like that so wrappers can be made around it, but if there is any hope of migrating sites over to it, it's going to need to be dead simple to implement without screwing up.
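For what it's worth, the scary part of that example is just a fixed-layout binary parse; here's a rough Python sketch of the authenticatorData prefix with the spec's field offsets given names instead of appearing as bare magic numbers:

```python
import struct

def parse_authenticator_data(auth_data: bytes) -> dict:
    """Parse the fixed 37-byte prefix of WebAuthn authenticatorData.

    Layout per the spec: rpIdHash (32 bytes) | flags (1 byte) |
    signCount (4 bytes, big-endian). Attested credential data may follow.
    """
    rp_id_hash = auth_data[:32]            # SHA-256 of the relying party ID
    flags = auth_data[32]
    (sign_count,) = struct.unpack(">I", auth_data[33:37])
    return {
        "rp_id_hash": rp_id_hash,
        "user_present": bool(flags & 0x01),   # UP flag
        "user_verified": bool(flags & 0x04),  # UV flag
        "sign_count": sign_count,
    }

# Synthetic example: all-zero rpIdHash, UP+UV set, counter = 42
demo = bytes(32) + bytes([0x05]) + (42).to_bytes(4, "big")
print(parse_authenticator_data(demo)["sign_count"])  # 42
```

Still low-level, but it's a handful of fixed offsets rather than anything exotic, and in practice a server-side library does this for you.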


For the most part, none of that matters client side. The web client is mainly just a passthrough to your backend, which does the actual processing of those binary blobs; the low-level parsing in that demo is only there for example code.


I see, if all that is server side then it makes a lot more sense as I assume backend libraries will get created to handle this for various languages. Bookmarking it to check it out later when I have time to read it all, hopefully it isn't as daunting as it looks.



How would one slice a byte array without magic numbers? Or without constants representing said magic numbers?


By having an API that abstracts the details away, like any other browser API. I would expect something like attestationObject.getPublicKey() versus whatever is going on in that demo.

I believe it was some part of oauth or saml (again not an expert here) where developers were making a common mistake by not verifying everything in the spec, leading to an easy bypass if you knew how it worked. Having devs implement a complex spec relating to authentication is a recipe for disaster.


It's good to have, but in the real world it wouldn't have stopped a determined attacker. They could have social engineered someone into running code on their PC.


So an entire class of attacks would have been removed and the attacker would have moved to another class of attacks.

As for running code in the environment there are many, many ways to deal with that. Obviously it's an easier environment to audit, but it's also much easier to control.


Yes, I don't disagree with anything you said. I'm not saying MFA wouldn't have at least slowed down the threat actor, but the focus here should be on how easy lateral movement was. Like you said, there are many ways to get in. Not treating the network share the same as internet-facing stuff sounds like a deeper issue many orgs face, but I'm surprised a fairly young org like Uber wasn't doing that already.


> So if your workplace is letting you authenticate with SMS codes

There's an old saying in photography, "the best camera is the one you have with you".

IMHO its very much the same thing with 2FA.

Any 2FA is better than no 2FA.

Sure some 2FA options are more secure than others, but by the same token, there's also a scary number of websites out there that have zero 2FA options. Others make it inordinately difficult to find (e.g. I'm looking at you Slack ... finding where to turn on 2FA in Slack is a nightmare).

Ironically there's no 2FA option for HN either. ;-)


That's like saying MD5 is fine for hashing passwords, because it's better than plaintext.


> That's like saying MD5 is fine for hashing passwords, because it's better than plaintext.

No, I'm saying we need to come down to planet earth and recognise we live in the real world. Hence SMS or TOTP is preferable to nothing at all.

It's a bit like the hardcore open-source types who can't see the wood for the trees and cannot fathom why anyone would possibly want to use anything other than Linux and fully open-source alternatives to Microsoft Office or Photoshop. Sometimes you have to compromise.


> That's like saying MD5 is fine for hashing passwords, because it's better than plaintext.

If for whatever reason you can't have anything else, MD5 is obviously better than plaintext. Not fine, but better.

With passwords you don't have external dependencies but with MFA, you do. Things are more complicated and real life is messy.


They're saying perfect shouldn't be the enemy of good enough. Completely valid.


> Ironically there's no 2FA option for HN either. ;-)

You can't even delete comments or your account on Hacker News, so it's not like it takes privacy or security seriously.


> Any 2FA is better than no 2FA

That's simply false, because of providers' poor customer service and the many fates that can befall a phone.


How is that false? Name a single example where SMS 2FA is worse than none. And, just because it will always come up: I mean 2FA, not treating the second factor as the only factor.


> Name a single example where SMS 2FA is worse than none.

SMS is terrible because it is so easy to lose account access.

Phone broken/stolen? Completely locked out.

Or, I have this one financial institution that insists on sending SMS 2FA to the phone number on file, which is a 20+ year old landline which obviously can't receive SMS. Completely locked out. Someday I'll have to find out some way to get my money out of there (they have no local branches).

I will always use TOTP if at all possible, because it's not a single point of failure. I store the seed values securely and they are backed up, so can't be lost.
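For the curious, the seed really is the whole secret, which is why a backed-up seed fully reconstructs your codes. A minimal RFC 6238 sketch (assuming a base32-encoded seed and the standard SHA-1/30-second/6-digit parameters):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(seed_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32 seed (SHA-1 variant)."""
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    key = base64.b32decode(seed_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890", time 59
seed = base64.b32encode(b"12345678901234567890").decode()
print(totp(seed, for_time=59, digits=8))  # 94287082
```

Nothing about this requires a phone: as long as the seed survives (encrypted, backed up), the codes do too.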


That is actually a good point. Hadn’t thought of that.

I hate TOTP, can handle SMS 2FA (sim-swapping is super rare here) and love FIDO/U2F/Webauthn (or whatever it’s called today). I have one with NFC on my keychain, and a backup device in the drawer. No off-site backup key, but encrypted backup codes.


when your sim gets hijacked and someone steals your entire bitcoin wallet?

worse than none because it "justifies" being sloppy with the first factor (i.e. account password).


Okay, I guess if you stretch that hard you can reach your goal.

edit: Your first sentence is meaningless because that is just as stolen with no 2FA.


> Any 2FA is better than no 2FA.

False.

SMS 2FA is significantly, categorically, undeniably worse than no 2FA at all.

If your SIM card is hijacked, most websites/companies will quite happily let the impostor click a "Forgot password" link and get a SMS code to verify their identity, which will allow them into the account to take/change whatever other details they want at that time.


That's a poor password reset process, not SMS 2FA. You can do SMS 2FA without having that terrible reset process, you can have a terrible SMS reset process without SMS 2FA. They're two different concepts.


> most websites/companies will quite happily let the impostor click a "Forgot password" link and get a SMS code to verify their identity

That's not 2FA. There is one single factor there, the SMS code.

SMS 2FA does not require you to have a 1FA backdoor, so you can't claim the latter is an inherent fault of the former.

For example, pairing "enter the SMS code" with "click the link we sent to your backup e-mail address" gets you a two-factor password recovery process.

That isn't the best or only method of 2FA password resets, it just comes to mind first because it's the last one I used and it is sufficient to prevent access via SIM hijacking alone.


You’re right and fair enough.

I do feel though that SMS porting is such a lax system that using it as an authentication factor leads you into a lot of (SMS && social-engineering) situations that would be more preventable if SMS was not involved.

I say this fully realising that in this scenario the party allowing these attacks to work, due to poor understanding or lack of proper checks, is the real problem.


100%. The common thread in all of these recent attacks (Uber, Twilio, Okta, etc) is the “phishability” of the authentication methods involved -- as you mention, the unphishability of WebAuthn is what makes it particularly compelling.

What’s head-scratching to me is why tech-forward enterprises haven’t been faster to adopt unphishable forms of authentication like WebAuthn. I’m biased as I run an identity and access management company (stytch.com), but I hope more companies will consider integrating WebAuthn to support unphishable MFA.

WebAuthn introduces some nuances that can discourage a B2C company from supporting it today (e.g. account recovery with lost devices), but it's a clear win for corporate network/workplace authentication and B2B apps. I believe some of the lack of adoption is due to the complexity to build (more complex than traditional MFA) and the cost of off-the-shelf solutions (incumbents like Auth0/Okta require ~$30k annual commitments to let developers use WebAuthn). If developers decide to build with Stytch, WebAuthn is included in our pay-as-you-go plan and can be integrated in an afternoon (https://stytch.com/products/webauthn)


I’ve not read up about webauthn yet. How does it work & what makes it unphishable?


Here's a bit more background on WebAuthn: https://stytch.com/blog/an-introduction-to-webauthn/

What makes it unphishable is that the authentication is not based upon something that a user can be deceived into sharing with an attacker. Passwords and one-time passcodes (OTPs) can both be remotely acquired from users when attackers convince users to share these text-based verifications with them.

Because WebAuthn validates possession of a primary device that was previously enrolled (either the computer/phone the user is leveraging for the biometric check or the user's YubiKey), it's device-bound and cannot be phished.


How do you proxy MFA unless you're using a third-party service for authentication? Plenty of password apps can bind to specific URLs and ports to support TOTP. In what way do you think it's more secure if an authentication provider gets hacked? Then they could just as likely proxy the hardware token handoff. I don't think hardware tokens are all that much better for someone who is already security conscious, but they are certainly great for people who have no clue what they are doing, or as just one step in an MFA process.


Actually, using a password manager for TOTP is quite rare. In my experience, people outside the tech space rarely use a password manager at all, beyond the built-in Chrome one.

Their 2FA is most likely Okta or Duo style, in that the default authentication method is a push notification.

> Then they could just as likely proxy the hardware token handoff.

You can't do that because the security token itself receives the "relying party" in the form of the domain name it's trying to present authentication for. Requesting "uber.com" when on "ubeer.com" won't work.
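A rough sketch of the server-side check that makes the proxy fail (the expected origin below is made up, and a real verifier also checks the challenge and signature; the point is that the browser fills in the origin and the authenticator's signature covers a hash of clientDataJSON, so a proxy on "ubeer.com" can't forge it):

```python
import json

EXPECTED_ORIGIN = "https://sso.example.com"  # hypothetical relying party origin

def client_data_is_valid(client_data_json: bytes) -> bool:
    """Reject assertions whose browser-reported origin doesn't match ours.

    This is only the origin/type portion of verification; challenge and
    signature checks are omitted for brevity.
    """
    data = json.loads(client_data_json)
    return data.get("type") == "webauthn.get" and data.get("origin") == EXPECTED_ORIGIN

good = json.dumps({"type": "webauthn.get", "origin": "https://sso.example.com"}).encode()
phish = json.dumps({"type": "webauthn.get", "origin": "https://sso-example.com"}).encode()
print(client_data_is_valid(good), client_data_is_valid(phish))  # True False
```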


You think that it is worth repeating that multifactor authentication not based on the latest unproven marketing hype technology, Webauthn, is dangerously insecure? You don't know what you're talking about.


> not based on the latest unproven marketing hype technology, Webauthn

WebAuthn is an ongoing project but the history goes back almost a decade to U2F, and the ongoing work has been carefully reviewed by a number of industry heavy-hitters. We know that it’s robust against phishing, too, which is why it’s so relevant to this conversation.

I’d also like to know more about your rationale for describing a system all of the major players have implemented as “unproven marketing hype”.


Maybe "unproven" was a poor choice of words. I'd be willing to go so far as to say that it is "proving" itself as bleeding edge technology. However, if measured by adoption and risk-taking, it is largely unproven.

The history may go back almost a decade, as experimental technologies driven by industry working groups tend to do, but that work does not extend beyond the theoretical. If Facebook and Google implemented WebAuthn, they're still not staking their reputations on it. If they did, we wouldn't be using password-based logins nor MFA. Instead, they're slowly testing the waters in the real world, waiting to see how hackers respond to it. Consequently, WebAuthn remains on the bleeding edge, in the very early part of the adoption curve as it proves itself.


> If Facebook and Google implemented WebAuthn, they're still not staking their reputations on it. If they did, we wouldn't be using password-based logins nor MFA.

I think the naming here is causing confusion. This thread started about MFA usage, which is a chain of functionality going back to U2F:

> at this point, MFA that is not based on Webauthn (https://webauthn.guide/#about-webauthn) should be considered dangerously insecure.

That is broadly adopted: all of the companies you mentioned use it internally and recommend it as the most secure form of MFA, based on both the strong phishing resistance and the ease-of-use improvements. I don't think the position I quoted is especially controversial in the security community, other than that people in enterprise environments acknowledge the challenge of retrofitting older applications and services.

WebAuthn also allows you to setup passwordless login flows, which relies on some newer features which were added such as attestations about how the token was unlocked (i.e. corporate IT probably wants to require biometrics, not just a Yubikey-style tap). That is definitely newer, but again, you're talking about something which Microsoft and Apple have already shipped.


Well, they have recently added a bunch of weirdness to the spec.

At one point U2F was simple - a USB token with a hardwired user presence button, providing a second factor alongside a username and password. Trivial to move between different computers and OSes. Secure even if the host OS can't be trusted. Physically unpluggable.

These days there's a mad variety of options. Options that are only secure if the host OS can be trusted. TPM-based options that are tied to a single laptop. Options like Windows Hello that are locked to a single OS vendor. Passwordless login, turning two factors into one. Copying credentials between devices, through the cloud. Sketchy low-cost biometric scanners.

For a security system, the latest versions sure are embracing a lot of complexity.


When you make a communications protocol open, people create all kinds of crazy endpoints for it.


I will grant you that the parent's opinion is a little strong, but fundamentally, they have a point. The weakness here is the human. Standards like this make social engineering attacks much more difficult.

With that said, MFA in some form is better than none. However, some implementations provide better security than others (of course).


I'm a security professional with a decade and a half of hands-on, real world experience. My most recent position being the product manager for Identity and Access Management for a leading B2B SaaS, dealing with real world attacks from extremely sophisticated threat actors. I assure you I know what I'm talking about, I've lived and breathed it every day for many years and have followed these standards since their initial drafts.

If you have knowledge on superior ways to protect users from MFA passthrough, please share it. I am always happy to learn about better ways of doing things. But contrarian bandwagoning without providing effective alternatives isn't helpful.


I would be shocked if they didn’t issue all employees YubiKeys.


A lot of people still have legacy Yubikeys floating around, and these are replayable. What you need now is something like the Google Titan FIDO2 key or one of the Yubikey FIDO2 keys. Transitioning an entire company to these, getting everyone to self-enroll, and then removing the ability to use all the less safe options across the employee base, contractors, etc is not cheap nor easy, and of course requires a massive amount of retraining. It's not as trivial as just buying them, sadly.


What retraining? You install the yubikey by plugging it in, registering it, and using it by tapping it as needed. What is complex?

There was literally no training involved for this during my time at ElGoog. One wiki page covered it adequately.


For someone who claims to have worked in big corporations and accuses others of not having worked in them, you sure are optimistic about the capabilities of the average user. Not every company is Google.

For starters, how do you register the Yubikey? On Okta, this is a multi-step process with at least one non-obvious step, and one easy way to screw it up.


The people who work at Google are not smarter than the people who work at Uber, or any other tech company. Getting people to understand 2FA might take a little bit of work but it's not hard.


WebAuthn (well, U2F, but that's essentially WebAuthn with a slightly different browser API and WebAuthn is backward compatible with U2F only devices) support has been on the YubiKey since the Neo in 2014 [0]. It's basically impossible to have a YubiKey that does not support WebAuthn. You do not need FIDO2 on-key resident credentials to benefit from WebAuthn.

[0] https://www.yubico.com/blog/neo-u2f-key-apps/


They did for a while but it was too expensive. Uber uses OneLogin, who I'm sure is also investigating. We had apps on our phones that received an "is this you trying to log in?" notification. You had to consciously hit "Yes" in order to continue the login flow. It wasn't offline 2FA like Authy or something. There _was_ a much higher standard of security there.

This was _social engineering_, something that even the finest MFA algorithms don't guard against.


In my experience, prepare to be shocked.

And even if they do, they likely have a “backup” for people who never seem to be able to use the Yubikey right.


[flagged]


This is false, a gross oversimplification. Every organization has complexities, it doesn't reduce to a common idiocy. Even when the net result is idiotic in hindsight.


[flagged]


It sounds like your experience has been at a small firm. At scale, 2FA and Yubikeys are a no-brainer with regard to risk vs. reward/safety.

Do you think all security engineers or whatever you want to call them are total incompetent idiots? If yes, I can't help you. If no, then you don't need further explanation from me.

Security requires a complex balancing act, and in this case they got it wrong, end of story. As stated elsewhere in this thread, there are only those who've been breached and those who don't know they've been. End of story.


Corporate security teams are built on three different traditions:

1. The policy/compliance tradition - the kind of people who looked at PCI-DSS and decided that was what they wanted to devote their life to. The accountants of the tech world. You've got resources and want to roll out U2F? That's not in the policy document, we'd rather spend the resources on this great audit of our suppliers' compliance that I've been planning....

2. The be-lazy-be-popular tradition - for the team that hires a guy to reduce other people's workload, not increase it. Resources to roll out U2F? I won't stop you if you've got the money to spend, but what we've got is probably enough - a lot of companies don't use U2F, you know.

3. The hacker tradition - the kind of people who see every real, exploitable vulnerability they find as proof of their 1337 status. They don't care if a policy document says you should disable paste, that's bad advice, don't do it. Rolling out U2F sounds like a great idea - but a lot of corporate environments will chase these types out, or curb their enthusiasm by ignoring their reports.

Perhaps kirbys-memeteam worked in companies with security teams that tended more towards traditions 1 and 2, while your employer's security team had more of tradition 3?


[flagged]


The bigcorps don't make exceptions for Tiny Tony's. If you work at these sorts of firms, you should probably start an anonymous exposé blog, it would be enlightening for the rest of us. It would also probably help get things fixed so they could avoid further embarrassment before it becomes a real problem (like in this case).

I bet you could make a fair sum from the ad impressions alone, and feel good knowing you were acting as the force multiplier for positive change.

Edit: Your personal jabs aren't in the spirit of a collaborative or curious conversation. You've revealed yourself as just another 007 wannabe. Boring.


Explain why what is false? You're just making vague claims about anecdotal experiences. It seems unlikely, in general, that a security team would not support a Yubikey rollout. Even one that only cares about compliance would likely support it because it will make compliance easier - auditors care a lot about phishing and if you can say "OK, yeah, we had some users fail the phishing test again BUT our 2FA is phish-proof" that's an easier conversation.

I'm sure there are truly lazy and incompetent security teams out there but it makes no sense that they would be the majority or even particularly prevalent. Maybe you're just unlucky and ran into one, or maybe there were real reasons why a Yubikey rollout wouldn't work:

a) Who's going to ship the keys? Yubico provides services for that, will you use those? Pay for them?

b) Who's paying for this? Did your infra team ask the security team to pay for it? Who's paying for replacements and support?

c) Is this a high priority for the team vs other issues?

d) Do all of your vendors support Yubikeys or are you going to have to have a hybrid solution? What will migration from vendors configured for some other MFA solution to Yubikeys look like?

I support a rollout at any company, for the record, but these vague statements with the conclusion of "security people don't care" leave a lot to be desired.


> Security exists just to check boxes at most firms

And then you get the bullshit solutions that the OP was complaining about.

Nobody serious ever claimed that SMS-based MFA was secure. Large companies implemented it anyway, and pushed it onto unknowing developers nonetheless.


What? Security is the one domain I found where you can't just waltz in because you've heard of a computer. You need to do the work upfront with Sec+ or the like, it would take months for a newbie. Past that point, what more guarantee can you have? Even work experience can be meaningless if they weren't in the right team/role.


Security is a cost center, not a profit center. Most companies cut that investment to the bone, which means paying the bare minimum that lets them check boxes.

This is true for basically any non-tech company, and is true for like 75% of the tech companies.

> You need to do the work upfront with Sec+

Sec+ is part of the paper mill parent is referring to. A book of terms to memorize for 3 months and then call it good.


> Security is a cost center, not a profit center. Most companies cut that investment to the bone, which means paying the bare minimum that lets them check boxes.

This. Definitely.

But even at organizations with the budget, the knowledge and the infrastructure to do security right, no matter what the security folks think/want/suggest, UX and low friction matter. If a process is too onerous (and that varies from org to org and person to person), it will be rejected post haste -- as will you if you try to force it.

This is especially true in the finance sector. Joe trader is too busy making bank to worry about all that security bullshit. "Just make it work! I don't have time for this. I banked seven-figure bonuses in three quarters this year and you're just some asshole! Get the fuck out of here, I'm busy!"

And that attitude often extends to management as well.

If you get away from the front-end and its users, the InfoSec guys are all over the back end like a cheap suit. Because their (not seven-figures, but not a kick in the teeth either) bonuses depend on making sure nothing bad happens.

Money (especially in the finance sector) is a powerful motivator, but it sometimes (more often than I'd like) creates incentives that thwart optimal security practices.


+100

Not only is it a cost center it’s also seen as a hindrance to the fast progress. Rarely will you come across an exec who takes security seriously. For them it’s just a checkbox at best and an obstacle at worst. I’m speaking about application security though. It’s possible that IT sec, physical security etc are taken more seriously.


Cuts both ways, of course. There are terrible IT Security departments that don't understand the concept of false positives, create approval flows for critically needed items with 2 week SLA turnarounds, topple the network with poorly designed endpoint security scanners and tons of useless telemetry and so on.


That is why government intervention is needed. Australia is proposing significant changes to its cyber security framework and legislation. https://www.homeaffairs.gov.au/reports-and-pubs/files/streng... Mandatory cyber security obligations backed by penalties and direct government intervention for critical national security companies. https://www.homeaffairs.gov.au/reports-and-pubs/files/exposu...


We kinda sorta already have laws designed to make software systems secure, but they aren't really followed in spirit. I suspect what's needed is an expansion of NIST-like bodies, more concrete specifications for what is and isn't allowed (e.g. like seatbelt regulations) and such.

Ultimately, it's going to be a cat-and-mouse game. A really determined hacker will find a way. But admin users with permissions over everything, passwords/encryption keys stored in plaintext, etc.: these are things that we can probably patch up really well, and force companies that can't afford to do that to (justifiably) go out of business.


Sec+ is laughably insufficient.


And even when you do get it setup you end up having to make all sorts of exceptions for various people who can’t be told “no”.


Painfully true. And once that exception is made, it's easy to poke holes for more requests, until the security system becomes somewhat pointless.


Why are people talking about MFA on this thread? Look, as someone whose day job is responding to such incidents: someone targeting Uber who is persistent will get in, MFA or not. Infostealers for Mac are a thing (Uber is a Mac-heavy shop, I hear) and that's all it takes to steal cookies and tokens post-MFA. Or why even bother with that: if you're running code, just make it a reverse shell.

The big screw-up here is a PowerShell script on a network share. A cheap pentest would have uncovered something like that.

Modern security is not perimeter focused, where you try to keep the bad guys out. Yes, you should do MFA, firewalls, VPNs, the whole schtick, but (!!) your presumption should always be that threat actors already have a foothold in your network. This is very important because it helps you focus on basic things like scripts and GPOs with creds in them, but also you treat internal devices the same as internet-exposed devices. It's sort of what the whole "zero trust" thing is about as well. In other words, host/user compromise is a given, but lateral movement should be at least as difficult as breaching the perimeter.

But my prediction is, just like the top commenters here, they will slap MFA on it, clean up the scripts with creds, and call it fixed until the next compromise.

Oh, and FYI: MFA on VPNs is a PITA and rarely done, for good reason. Instead you use device certificates in addition to passwords, which is what the recommendation should be, not Yubikeys or WebAuthn (VPN != web??), because VPNs need to reconnect and you can't have people insert a Yubikey each time their connection drops. The ideal setup would have mutual-auth certs valid for 7 days (or however long it's reasonable for users to disconnect their PC and remain out of office) plus OCSP, with new certs conditionally reissued to stay connected (conditional on compliance stuff like patching, unapproved software, and security alerts for the device). If you think about it, you typically issue users two Yubikeys, not just one, so if the backup gets stolen you have a problem, depending on how easy it is to social engineer users or reset a password with the stolen Yubikey; a stolen laptop, on the other hand, means certs and password get revoked immediately.


The PowerShell script is a minor part of the screw-up. The real issues are manifold...

1) hardcoding actual production credentials in a script at all. Seriously what the fuck.

2) Thycotic not enforcing MFA for the keys to the kingdom admin account. Even my cellphone provider has better security.

The root cause is likely the assumption that the VPN is sacred. This needs to die ASAP: your internal network should assume the VPN is wide open and secure everything accordingly. Defense in depth, not eggshells.


So, on that: it is surprisingly difficult to get rid of hardcoded credentials, and funnily enough, vendors like Thycotic and HashiCorp are supposed to prevent exactly that by having some API thing that integrates with scripts.

That said, Thycotic, CyberArk and pals that manage credentials almost always need domain admin. I think just moving to full AAD and Azure Key Vault might be better, but realistically this is the nature of the beast. If I had to guess, the "network share" is probably a GPO on a DC used by Thycotic; anything short of that is ridiculously bad. GPOs are shared to all machines so they can be pushed to them, so you sometimes see creds in there when scripts need them, the thinking being "if the bad guys are in the network we have bigger issues" (again with the perimeter-centric, intuitive security mindset).


the cute answer to all this is always: how do people think your service accesses the secret vault? it's another credential.

the real issues are tougher, like why does this one cred have access to all these other creds, and how or if they were auditing usage of that cred from authorized client devices.. but all of these problems take a lot of effort and care to solve. and as history has shown, you only have to mess up once.


I agree but an api key for a PAM service will get you constrained access (ideally) to a specific resource instead of a kerberos ticket you can take with you as part of your ticket collection. It's supposed to be better but granting the resource permission like GCP does is probably better (but messier too).


for sure; the real failure in this setup was again, having a single credential with access to so many other critical secrets. I have yet to see a secret vault that had good analytics for this kind of thing - it assumes you have designed your secret hierarchy and permissions appropriately.

ideally, there would be a warning for identities with access to too many secrets.


I'm curious what's the alternative if the script must have those credentials to do its job.


As another commenter pointed out: you authorize the executor of the script, not the script itself.

Consider how an AWS instance runs code that is able to ... talk back to the rest of the AWS system.

For code that is not being directly run by a tethered meatball, use some form of workload identity [1].

When you are talking to another system that can't understand your workload identity (legacy APIs, etc.), keep those credentials in a tool like Vault[2], Secret Manager[3], etc. That system can/should handle credential rotation wherever possible, but it also ensures that the workload running the script is authorized to access the credentials in question. This is far superior to passing via env vars, but even that is better than hard-coding in the script itself. Oh, and using a memory-backed mount that contains those vars is better than env vars because there's less risk of leaking those when you fork.

Key points:

- externalize all secrets

- prefer workload identity

- prefer a workload identity aware secret store / manager

- fall back to fs mounted secrets and then env vars

[1] https://spiffe.io

[2] https://www.vaultproject.io

[3] https://docs.aws.amazon.com/secretsmanager/latest/userguide/...

edit: formatting now that I'm on a desktop
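The fallback order in the bullets above can be sketched as a small helper (hypothetical names; a real setup would ask a workload-identity-aware store like Vault first, and only then fall back to these):

```python
import os
from pathlib import Path

def load_secret(name: str, mount_dir: str = "/run/secrets") -> str:
    """Resolve a secret using the fallback order above: a (tmpfs) file
    mount first, then an environment variable. Illustrative sketch only;
    a real deployment would query a workload-identity-aware secret store
    before falling back to either of these."""
    path = Path(mount_dir) / name
    if path.is_file():
        # memory-backed mounts don't leak into forked child environments
        return path.read_text().strip()
    value = os.environ.get(name.upper())
    if value is not None:
        return value
    raise RuntimeError(f"secret {name!r} was not provisioned")
```

Either way the script itself never contains the credential, so a copy of it sitting on a network share is inert.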


As sibling points out: Don't authorize the script, authorize whoever is executing the script.


Environment variables are an easy first pass at ensuring a script at rest isn’t dangerous.


Besides what the sibling comments said, it's very unlikely to that any script needs keys to the kingdom. Credentials should be created with limited access, just enough for the script to do its thing.


Not for PAMs like Thycotic; their service accounts are domain admins.


encryption based authentication (keys or certs) that can be revoked.

welcome to 1996.


Stuff like... Thycotic helps prevent that :P


There’s a trend of storing MFAs in password managers like 1Password. If the password manager is compromised then what was the point in having MFA…


So that you're protected from data breaches of the service itself (e.g. revealing a reused password)


That doesn't have anything to do with MFA. If for some reason your 1Password masterpass is compromised, the hacker has access to your passwords and your MFA tokens.

If you use 1Password and, say, Authy (assuming your Authy pass isn't in 1Password) or Google Authenticator, then all services with MFA won't be compromised if the 1Password masterpass is...


Hi there!

Not quite. An attacker would need either your account password AND an already authorized device, OR they would need both your account password AND Secret Key. If you have 2FA enabled for your 1Password account, and the attacker doesn't have one of your authorized devices, they would also need your second factor (TOTP or hardware key).

Additionally our Principal Security Architect, Jeff Goldberg, wrote some thoughts on this subject, here: https://blog.1password.com/totp-for-1password-users/

- Ben, 1Password
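To make the "account password AND Secret Key" point concrete, here's a very rough sketch of two-secret key derivation in the spirit of 1Password's design (explicitly NOT their actual algorithm - the real scheme uses HKDF, SRP, and far more hardening): the unlock key depends on both a memorized password and a high-entropy key held only on authorized devices, so neither secret alone decrypts anything.

```python
import hashlib

def derive_unlock_key(password: bytes, secret_key: bytes, salt: bytes) -> bytes:
    # Slow-hash the low-entropy memorized password...
    slow = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # ...and mix in the high-entropy device-held Secret Key.
    strong = hashlib.sha256(secret_key + salt).digest()
    # XOR the two: an attacker needs BOTH inputs to reproduce the key.
    return bytes(a ^ b for a, b in zip(slow, strong))
```

A phished master password alone yields nothing without the Secret Key from an authorized device, which is the property Ben is describing.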


So you're banking on the idea that in order to login to 1Password you need an authorized device as your layer of security.


I used to think that way but then I had a phone die and lost my Google Authenticator - I lost access to many services and had to go through the pain of resetting so many MFAs.

Then I was like "1Password will sync these to my new phone so I never lose access to everything again? Fine."

I also started to get questions from my wife around how she could access things in the event of my death and it seemed having a 1Password printout in a safe deposit box she could access - and having that include the MFA too - was a good idea.

My master password in 1Password is quite secure (A long obscure sentence and with special characters etc.), I have it auto-lock pretty quickly (thanks TouchID!) and I guess that will have to be enough unless I shift to something like a Yubikey on my keychain for things down the road...


The benefit over SMS or Authenticator apps is that it doesn't pre-fill codes (and passwords) if the URL doesn't match. But yeah, I also have mixed feelings about it. Just slightly better than SMS maybe.


The point is to have the same amount of security of a strong password, with the same amount of hassle as a strong password.

Not every little SaaS needs MFA.


Passwords are not "real" secrets. Don't put real secrets into password managers.


Because auth to the VPN should have required a device cert and/ or unphishable 2FA. Also because the SMS phish was one of the first details leaked. Obviously access to the VPN shouldn't also be a full system compromise. There are many things to criticize here, we can point all of them out.


Maybe that was still required with the VPN; "access the VPN" can also mean "compromise a device" - they only said yes to an attack chain someone was asking them about in the screenshot. I agree with you for the most part, but at the time of posting the discussion was entirely around initial access and MFA, which is not in line with how security is done these days.


I'm curious what your take is on the incident detection side of things. Would Grapl have helped Uber detect anomalies?


It is irrelevant though. If your whole multi-billion company folds like a wet towel when one user is compromised, the question isn't how that user is going to get pwned, it's when.

Relevant xkcd: https://xkcd.com/538/

If you really really want one user to control everything, maybe they should work on a desktop station with a guard at the door.


> Infostealers for Mac are a thing (Uber is a mac heavy shop I hear)

Block unknown executables on company machines. Google developed Santa to protect themselves: https://github.com/google/santa

> and that's all it takes to steal cookies and tokens post-mfa,

Make post-MFA cookies and tokens short-lived. Require MFA re-authentication at least daily.

> or why even bother with that, if you're running code just make it a reverse shell.

All outbound connections should be strictly monitored, especially from production servers, which should have no ability to connect to the Internet at all. With modern dependency management, that's harder for build servers, but still doable.
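A minimal sketch of the short-lived post-MFA token idea: an HMAC-signed token that dies after a day regardless of whether it was stolen (hypothetical `user:issued_at:sig` format for illustration; real shops would use their session layer's built-in max-age rather than rolling their own):

```python
import base64
import hashlib
import hmac
import time

MAX_AGE = 24 * 60 * 60  # force MFA re-auth at least daily

def mint_token(user: str, key: bytes, now=None) -> str:
    """Issue a signed token recording when the user last passed MFA."""
    issued = int(now if now is not None else time.time())
    payload = f"{user}:{issued}".encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload + b":" + sig.encode()).decode()

def verify_token(token: str, key: bytes, now=None):
    """Return the user only if the signature is valid AND the token is fresh."""
    try:
        user, issued, sig = base64.urlsafe_b64decode(token).decode().rsplit(":", 2)
    except Exception:
        return None
    payload = f"{user}:{issued}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    age = (now if now is not None else time.time()) - int(issued)
    # an infostealer's exfiltrated cookie goes stale within a day
    return user if 0 <= age <= MAX_AGE else None
```

It doesn't stop the initial theft, but it shrinks the window in which stolen cookies and tokens are worth anything.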


Unconfirmed method of breach: https://twitter.com/hacker_/status/1570582547415068672

- Socially engineer an employee to get on their VPN (could have been prevented with webauthn / hardware 2fa)

- Once on VPN, scan their intranet and find a network share

- Network share has powershell scripts with admin credentials for their PAM vendor, Thycotic

- From there can get full access to all systems


> - Socially engineer an employee to get on their VPN (could have been prevented with webauthn / hardware 2fa)

Zero-trust may be a security meme at this point, but the whole point of zero-trust is to make it so that once someone is on your VPN, all of your stuff isn't immediately pwned. You're supposed to have authentication at all layers, not just the corporate VPN edge. An insecure network share is a ticking time bomb, even if it's behind a VPN, because you now have to hope and pray all of your VPN users aren't bribable, aren't morally bankrupt, aren't disgruntled, etc. The same thing could have happened via an insider from the lowest level of employee with VPN access, if that report is true.


People who call ZTN a meme are usually just ignorant. It's a very simple and effective solution. Mutual authentication, explicit authorization, attestation, and auditing. Not exactly buzz word soup.


I was recently on a call with someone who said their company was "totally zero-trust" because they disabled copy & paste on all workstations. I believe in the concept, but it's definitely turning into a parroted term that's applied to anything.


> People who call ZTN a meme are usually just ignorant.

It's become a "meme" because it is used in endless irrelevant ways. So many companies are pushing zero-trust solutions that don't have anything to do with the basic idea of requiring access control at every endpoint.

Also, while it is a very sound practice, there's also the hyped idea that it replaces everything else. Which is not very wise. You still want defense in depth over and above requiring access control everywhere.


It shouldn't be buzzword soup, that does not mean that companies and vendors don't use it that way.

Pretty much any time a new security concept comes up vendors race to say that they implement it, whether they do or not.

With Zero trust, it's hardly a new idea (e.g the Jericho forum de-perimeterization) but it's hard to implement well across all systems.


If you're stupid enough to fall for vendor buzzword garbage that's kind of on you though, right? It says nothing about ZTN.


By that definition of stupid, pretty much every large company in the world is stupid in someway or another :)


Ding ding ding. This whole incident is symptomatic of eggshell security - crunchy exterior with a delicious gooey inside once you break through it.

Hedgehogs don't have this problem.


> - Network share has powershell scripts with admin credentials for their PAM vendor, Thycotic

This is gross incompetence.


Thycotic and Centrify have been rolled up into Delinea fwiw.

https://delinea.com/


It's just multiuser keepass with integrations for active directory and such. It's not surprising that three startups building the same thing would realize they could merge and charge 10x the price they could get if they were competing. Since Delinea's rep just took a hit (due to customer error), there is easily room for two or three new players in this space.


>scan their intranet and find a network share

Did their IDS/IPS not go off on this? I wonder if this was a sophisticated scan designed to go slow and evade detection or if it was just nmap lol

I can't wait for the post-mortem, hopefully lots of good lessons to learn.


>scan their intranet and find a network share

Assuming screenshot is real[0], they have over 1PB in their Google Drive, so chances are everyone just uses Google Drive with shared drives, and employees use Drive for Desktop (previously drive file stream)[1]. Shared drives are pretty powerful and access to them can be gated at the same level as you can regular Drive files.

My theory is that some high-level IT person either got phished and didn't have hardware 2fa, or that high-level IT person downloaded malware / got RAT'd and the Google Drive scanning was done in the background on their machine. Depending on the hierarchy, it might not have even been a scan, could've been the attackers sating their curiosity by browsing through all their internal files and happening to find some PAM credentials.

0: https://twitter.com/praise_terryd/status/1570583105123258369...

1: https://support.google.com/a/answer/7491144?hl=en#zippy=%2Cw...


Maybe just clicking around until they found something. That's what many employees do on a daily basis looking for files on network drives, so nothing that would be noticed easily.


Someone forgot the “and then watch that basket” part about putting all your eggs in one.


>Socially engineer an employee to get on their VPN (could have been prevented with webauthn / hardware 2fa)

Even using certificate based authentication for VPN along with their existing MFA would have prevented this. Unless the attacker compromised the employee's laptop in which case nothing would stop that.


Thoroughly basic and preventable attack.


Assume breach. I just hate it when you can access anything on networks just cause you are logged on to it.


Former Uber employee. I'm not a fan of the company. But don't shit on the efforts of the security team please. They were actually quite thorough.

We used online MFA (you had to respond to MFA requests on your phone). Not even sure why this is a discussion as the hacker confirmed it was a case of social engineering. No MFA protects against social engineering (no, not even ____ - don't try to convince me).

And yes, at least when I was there, there was pretty good training on SE deterrence.

Further, OneLogin was used, Yubikeys were phased out early on. I'd be surprised if they had brought them back, as I remember the security team being somewhat averse to them. I'm sure OneLogin is also investigating.

The security team at Uber was quite good. Constantly under stress. Constantly overworked. The last thing they need are knowitalls speculating about how stupid they are on HN. Cut them some slack - this could happen to any company (yes, it could, even yours - don't try to convince me otherwise).


>No MFA protects against social engineering (no, not even ____ - don't try to convince me).

Certain MFAs can protect against more types of attacks than others. Burying your head in the sand when people point that out doesn't change that fact; it merely indicates you prefer feeling right to being right.

>as I remember the security team being somewhat averse to them

So you're saying that the security team was averse to the thing that would have prevented this hack? And that means we shouldn't put blame on them?


Oh, cloud-based MFA. Dream stuff where your SaaS can reauthenticate at any time, and it just sends a request to the users, without having to rely on them to initiate anything. No idea what could go wrong with that. /s


> this could happen to any company (yes, it could, even yours - don't try to convince me otherwise)

There's a lot of cognitive dissonance in discussion around this story IMO. Nowadays I assume everyone has been or will be pwned, because no breach surprises me anymore. Any small gap can and will be exploited, and as organisations grow the surface area only gets larger and larger. The only way to truly secure data is to not put it on the internet from the jump. For every breach that's published, there are likely a dozen that we never find out about.


OneLogin is fine and all, but why not protect your OneLogin with a hardware key?


> No MFA protects against social engineering

That's true - some kinds of social engineering cannot be prevented by technical means. BUT hardware keys prevent an entire class of extremely common attacks that every other form of MFA is vulnerable to. It would have prevented the method of compromise used here.

Any company not using FIDO/WebAuthn in 2022 is behind on best practices.


> Yubikeys were phased out early on

What security team on earth would be against these?


Wasn't their decision. Was finance's. Blame them.


Blame greyballing on finance too? It seems unlikely for finance to really own the final call here.


> The last thing they need are knowitalls speculating about how stupid they are on HN

Don't you know that sh*tting on everyone else and saying how you could do it with only 5 people are the traditions of HN...


[flagged]


You clearly don't understand how SE can be used even if Yubikey or WebAuthn are used here.

Perhaps you'd like to explain instead of insult someone you know nothing about (which violates HN guidelines).


I mean, "social engineering" is pretty broad; saying MFA can't stop social engineering is like saying password managers can't stop hacking, or HTTPS can't stop spying. I mean, sure... but Webauthn would have in fact stopped this type of social engineering attack (which was a fake login page). And scanning internal networks for hardcoded secrets would have stopped this type of privilege escalation afterwards.

Security is never absolute, but we're not talking about a nation-state/APT attack here; current reports seem to indicate this was a bored 18 year old acting alone.
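The origin binding that defeats this kind of phishing is visible in the relying-party side of WebAuthn. A heavily simplified sketch of the clientDataJSON checks (a real verifier, per the spec, also validates the RP ID hash, signature, and sign counter in the authenticator data):

```python
import base64
import json

def check_client_data(client_data_json: bytes, expected_origin: str,
                      expected_challenge: bytes) -> bool:
    """The browser, not the user, reports the origin in clientDataJSON,
    so an assertion minted on a lookalike phishing domain can never
    verify against the real site's expected origin."""
    data = json.loads(client_data_json)
    if data.get("type") != "webauthn.get":
        return False
    if data.get("origin") != expected_origin:
        return False  # phishing proxy: origin mismatch, login fails
    sent = data.get("challenge", "")
    sent += "=" * (-len(sent) % 4)  # re-pad base64url before decoding
    return base64.urlsafe_b64decode(sent) == expected_challenge
```

A realtime MFA-relay proxy like 0ktapus never gets past the origin comparison, which is exactly why a fake login page works against TOTP/push and not against WebAuthn.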


I get what you are saying now. Agreed - if the right actors are on it, none of that matters. Sorry about that.


They could be fake, of course, but this thread[1] of screenshots is pretty bad... internal tools, Slack Admin, Google Workspace admin, an AWS account showing admin permissions.

[1] https://twitter.com/Savitar0x01/status/1570580235716014081


And as other people have already written, that's the main issue. Not that someone got compromised, but that passwords for admin accounts to all those services were stored on a network share.


Security is like an onion and in this case every layer was rotten.

* Social engineering successfully got someone

* MFA approach did not protect from a simple fake webpage tunneling hack

* VPN was based on a password rather than a certificate

* Network scan was not detected and stopped

* High level credentials were stored in a public file and not detected

* Abnormal credential usage was not detected and stopped

I probably missed a few, but the point is there were many ways to stop this hack and all of them were broken. This wasn't some highly funded government operation that bypassed layers through clever approaches and expensive zero-day exploits.


What happens is that as a company scales to this sort of size, all kinds of shortcuts are made, all kinds of compromise decisions are made because there is still little non-financial cost for getting it wrong.

Imagine deciding to add some extra floors to your office made from cardboard because, "we are scaling too quickly to build them out of concrete". Wouldn't happen. Why? The legal and marketing fallout would be fatal.

In the corporate world, especially in ones at the extreme levels of inefficiency/size/etc. it is easy just to blame the previous job holder, the previous culture, "we have made improvements since then" etc. as if that makes it alright.

tbf, we also see this in Healthcare and Government so I think it is Human Nature rather than corporate greed.


I'm still confused. Why did people have username/password logins to the AWS console? Either require SSO login, or require HW tokens to get in as an AWS user. Then it doesn't matter if someone finds the password file, it's useless.


From information floating around on Twitter it looks like they had the password to the SSO account of an employee and then social engineered their way to get the employee to accept the push MFA prompt to add a new device.

At this point it appears that they found more credentials on the internal network and owned SSO, MFA and AD giving admin access to everything.


> found more credentials on the internal network ... giving admin access to everything

That's my hangup. The fact that admin/root level accounts can be accessed with "credentials" alone, rather than only via SSO/MFA/Yubikey. Were these service accounts, what happened to least privilege?


It depends on the employee you target. If it is someone working on internal IT systems, chances are high that you gain pretty wide access after owning their SSO.

SSO can go down or get owned, so having break-glass credentials isn't unheard of. The last place I worked at had them on paper in a safe in their headquarters. The Twitter threads show that they were stored in a password manager, but the hacker was able to find credentials to access it, which could have been one of the responsibilities of the employee who was targeted.

If you have your password manager on SSO it will be even easier.


Those screenshots look pretty convincing to me.


This is where all the secondary hackers will come into play - I expect to get “new uber signup” random texts later tonight.


Pour one out for our fellow sys admins & security teams that are going to be working late into the night. They took down their Slack so I wonder what out-of-band comma they’re using.

EDIT: Comms not comma. I'll leave the typo because I LOVE bombcar's comment about the semicolon. Confused at first, but I smiled


Probably a semicolon!

(I’d assume going to text messaging to set something up, which is why you should know enough about your immediate coworkers to verify their identity via non-public information.)


A reasonably sophisticated attacker could arrange for the entire team to get SIM-swapped and suspended from Facebook when they launch an attack. If only there were some way to have a central rallying point for everyone to meet at. Perhaps some sort of a structure, with the company's name on it, and it would have places to sit inside, with computers connected to the company's infrastructure to use.


In the old days you’d have physical restrictions on access to the datacenter - in a major breach you’d get there physically and shut it down and disconnect it.

With everything cloud now, how do you recover your cloud account of the master got compromised?


For someone Uber’s size, you call your AWS rep and co-ordinate with AWS security.


Could the CTO or some other employee go to Amazon's office to do something like this? I genuinely wonder.


How do they verify your identity?


That's the fun part, you don't!


Who controls the database of RFID cards allowed to open the doors?


Usually the datacenter was manned (sometimes by guys with guns) and they had various mechanisms for ID verifications.

And they had an entirely separate IT setup that wasn’t related to yours.


My not super-secure datacenter requires either a palm-print plus RFID card, or showing ID to a human.


At Dropbox we had an explicit fallback on the DFIR team for the situation where our normal communication methods were considered compromised (not going to give details, obviously). I would hope that most security teams at a company of similar size have discussed this situation, it's not at all uncommon for attackers to sit in on calls, read messages between security people, or even access the SIEM to see what alerts are going off.


Yup, gonna be a rough night for all involved.


Generally speaking infra and sec folk tend to have OOB comms setup because of frequency of Slack outages


> "Feel free to share but please don’t credit me:

> at Uber, we got an “URGENT” email from IT security

> saying to stop using Slack. Now anytime I request a

> website, I am taken to a REDACTED page with a

> pornographic image and the message “F** you wankers.”

From: https://twitter.com/samwcyo/status/1570583182726266883


Ah, an old fashioned troll/artist rather than someone who just wants to make money with ransomware. How refreshing.


We need more LulzSec


> at Uber, we got an “URGENT” email from IT security

> saying to stop using Slack. Now anytime I request a

How does an employee know if that message is legitimate or not? If you break into a secure system, mass-emailing all employees saying "URGENT: WE HAVE BEEN HACKED. PLEASE EMAIL YOUR PASSWORD AND SSN TO THIS ADDRESS IMMEDIATELY." is sure to get some percentage of success.


It does make me wonder whether we’re headed towards some kind of “breach via chaos” scenario. Clearly the attackers have the cell phone numbers of employees. Suppose they started mass texting conflicting information? It’d be noisy as hell, but take 1000s of employees getting a never ending stream of texts, purporting to be from their employer, saying “don’t use Slack,” “don’t use email,” “here’s a Zoom bridge for incident response,” “oh and here’s an MFA notification you should accept.”

This could lead to a scenario where no one knows what to believe, internal systems are down, attackers are setting up fake IR channels to get even more info, etc. There’s no way most companies could weather an onslaught like that.


If you wait for an emergency to set up a continuity of operations plan and train your employees for it, then you won't get great results for that particular emergency.


In general you can trust a “stop doing the thing” email blast that appears legitimate but should be highly suspicious of the same asking you to do the thing.


I see somebody else got hit with a cancellation fee and took it personally.


The common thread in all of these recent attacks (Uber, Twilio, Okta, etc) is the “phishability” of the authentication methods involved. Remote attacks like this only work when you can socially engineer an employee and phish sensitive credentials like a password from them (or a SMS one-time passcode during the MFA step).

What’s head-scratching to me is why tech-forward enterprises haven’t been faster to adopt unphishable forms of authentication like WebAuthn. WebAuthn supports both built-in device biometrics like FaceID/TouchID and external hardware keys (eg YubiKey), which are inherently incapable of being phished — there’s no text-based secret for an attacker to deceive a user into sharing with them.

I’m biased as I run an identity and access management company (stytch.com), but I hope more companies will consider integrating WebAuthn to support unphishable MFA.

Today, WebAuthn introduces some nuances that can discourage a B2C company from supporting it (e.g. account recovery with lost devices), but it's a clear win for corporate network/workplace authentication and B2B apps. I believe some of the lack of adoption is due to complexity to build (more complex than traditional MFA) and cost for off-the-shelf solutions (incumbents like Auth0/Okta require ~$30k annual commitments to let developers use WebAuthn). If you decide to build with Stytch, WebAuthn is included in our pay-as-you-go plan (https://stytch.com/products/webauthn)


Seeing these huge companies with practically infinite resources get owned one after another sure makes me wonder if we even have any chance at all to do this correctly in our small business.

Perhaps they just don't care about security?


You know, the longer I'm at this, I see more and more effort thrown at developing security and one thing remains the same - you've got a user sitting at a machine with network access and the ability to execute code, and sometimes you can trick that user into executing code. I guess the bigger the company, the more users which means more targets/chances.

For decades I've been told that security through obscurity is no security at all, but in the back of my mind, I think it might be the best thing I've got going for me working at a small place. Though I should say, that's far from being our only security - we do work at it too.


Obscurity, by itself, may be an ineffective security strategy, but it does provide an additional layer on top of other layers of security to improve things, overall. There's a Spafford quote on this, but I'm failing to find it. Let's just pretend it's like what I said, but more eloquent.


The best approach is to assume there's a renegade employee constantly trying to screw the company over. Granularity of permissions should be set to minimize the blast radius to the absolute minimum they need to do their job.


Hell, you should offer an internal bounty to any employee who reports “I got access to something I shouldn’t need”.


Part of what I do first at any new employer is ask myself the question, "if I wanted to burn all of this to the ground, how would I do it?" I generally don't share the fact that I'm going through this little thought experiment with my management, but it helps triage what's currently "broken", and gives me a clearer focus on what needs to be fixed.

If I'm thinking about it, I can be assured that someone with differing motivations likely already has, or soon will be thinking about the same.


This approach is possible but increases the complexity of your problem by enormous amounts. I know of only a very tiny number of companies that have an active goal of preventing rogue insider threats in a serious way. And the solutions do meaningfully inhibit developer productivity.


The thing about security is that it's perfectly fine to not have it most days. Suddenly, all at once, it's not fine at all. A very large company has an exceptionally bad time and a lot of people are affected.

Small startups and businesses can absolutely get it right. It's usually much easier, with a small number of people and systems involved. You just have to approach it knowing it will take work every day. Some things will be harder than you want them to be. It will be worth it to avoid this kind of stuff.


It's the weakest link problem. Uber can have near perfect security but all it takes is a single one out of 20K+ employees to click on the wrong link, install the wrong app or trust the wrong person and suddenly the entire system is compromised. So in that sense your small business is more secure since there are way fewer possible targets.


>Uber can have near perfect security but all it takes is a single one out of 20K+ employees to click on the wrong link, install the wrong app or trust the wrong person and suddenly the entire system is compromised.

In a well run organization it takes a lot more than that. There were a dozen steps in this exploit chain where it could have been detected and blocked. Likely Uber didn't care about security and their security team lacked both political power and resources.


In this case it took both the one employee out of 20k+ getting tricked and the entire (supposedly world-class) engineering org that allowed admin authentication credentials to get hardcoded into a globally accessible PowerShell script exposed on the intranet.


You'll always have one extra layer of security that those companies can't buy... obscurity.

Just don't rely on only that layer, and watch out for oddly quiet individuals named Sam Sepiol.


Yes, you're correct. They don't. And it's pervasive - it's not just Uber, it's the developers of the software Uber writes. Shake out the tree of any Uber service and you'll find that maybe 0.1% of the code is written by someone who cares about security, and maybe 10% of that code was written by someone who knows about security.

Developers do not give a shit. Security is not something they're trained in, interested in, or competent in (though they often think they are).

Security is a couple of people trying to bucket out the water as fast as they can from every sinking ship while developers are taking a piss on the floor and poking holes in the hull.

I think the bar for devs is extraordinarily low and we'll keep seeing this sort of thing until we collectively raise it. Thankfully it seems like, very recently, this is starting to maybe happen. Packages requiring 2FA is the first thing I've seen that seems to indicate that developers are going to have to do the bare minimum for security in order to participate.


Being big means more money and resources, yes, but this also means having more employees, and more assets, ie. a much bigger attack surface.

Believe it or not, it's much easier to successfully phish a big company, where you have an unlimited pool of emails to tap into.


It may be easier for a smaller company to be secure. Usually people are the weakest link.


Wife has been locked out since this afternoon! This is a bad one, she tried accessing an internal doc…got redirected to a dick pic


That's not bad. Bad is if my credit card that's in Uber starts showing interesting charges...


Think about all that information you trusted uber with because now you're trusting organised crime.

You /have/ to treat Uber and the like as though they are organised crime, even if you think they are, and always will be, in league with rainbows, fairies and unicorns and will never put your interests behind theirs.

edit: wave to uber's PR flunkies.


If they really did get in, I hope they grabbed the internal company communications etc. so we can finally criminally prosecute the people who have systematically and knowingly broken the law.

Additionally, even if the information might not be admissible in US courts, it is in other countries, making travel for those individuals very uncomfortable.


Given that Uber routinely tracked politicians and journos and shared it around the company, and had stood up toolsets to track and evade police so as to facilitate drivers dodging law enforcement, they always were organised crime.


> evade police so as to facilitate drivers dodging law enforcement

Isn't this illegal as fuck?


It's only illegal if you get caught! :D

This was part of Uber's early strategy when cities responded with citing drivers of rideshare services. At this point, I've seen a rideshare stop in the center of an intersection with traffic to let people out, so maybe the cities were right in the end...


Most of the high-profile "tech startups" are mired in "doing illegal shit quicker than you get caught, with computers".


The law doesn't apply if you're a big company.


Eh, based on the screenshots and claim that they're doing some trolling, it might be a teen or small group of hackers that happened to gain such a wealth of data and their first instinct is to "download all the data, we'll figure out how to sell it later". If they had shopped around for a nation state buyer this certainly would've been more covert and a bigger hack.


Our security just send around an email that they're disabling Google Chrome's password manager.

While I understand not wanting those passwords in Google's hands, the reality is that they do have the $$$ for security.

But instead of being able to leverage that functionality, and the Generate Password functionality we now have to resort back to name_of_application_my_name or something like that.

Do not ban things just because it has an issue. Provide a better alternative.


Disabling a password manager that's built into a browser? That's simply madness. What do they expect people to use? Their brain to remember all their passwords rather than a password manager? And rely on their brain to know they are on a site whose domain doesn't match the domain in the password manager despite looking very similar?

Also this has nothing to do with passwords in Google's hands. They could turn off syncing in Chrome and have a completely local password manager. I personally do exactly that.

Your org will suffer many data breaches due to this policy.


When browser-managed credentials are synchronized across devices, an attacker may be able to move laterally into an enterprise by compromising the personally managed device or personally managed account (since it may be without 2FA, or may use a shared/guessable/weak password that's shared across dozens of compromised websites, or be far behind on app/OS patches, etc.).


"Uber reels from 'security incident’ in which cloud systems seemingly hijacked" - https://www.theregister.com/2022/09/16/uber_security_inciden...

"We're told that an employee was socially engineered by the attacker to gain access to Uber's VPN, through which the intruder scanned the network, found a PowerShell script containing the hardcoded credentials for an administrator user in Thycotic, which were then used to unlock access to all of Uber's internal cloud and software-as-a-service resources, among other things.

After that, everything was at the intruder's fingertips, allegedly.

The New York Times reported that Uber staff were told to stop using the corporate Slack, and that the call to quit the chat app came after the intruder sent a message declaring: “I announce I am a hacker and Uber has suffered a data breach.”

The Times stated the Slack message listed “several internal databases that the hacker claimed had been compromised.” Various corporate systems have now been shut down by Uber."

""Instead of doing anything, a good portion of the staff was interacting and mocking the hacker thinking someone was playing a joke," Curry said. "After being told to stop going on slack, people kept going on for the jokes."

Evidence of that misunderstanding has surfaced on Twitter in the form of a screenshot of Uber's private Slack workspace."

The message: https://nitter.net/vxunderground/status/1570626503947485188



Working archive link: https://archive.ph/rBRmn


I heard from a security friend that their SentinelOne endpoint detection got popped and the hacker posted screenshots of thousands of unaddressed security alerts in the dashboard. Can anyone confirm? I'm still looking for the proof.


Maybe referring to this image? https://twitter.com/vxunderground/status/1570597582417821703... (3rd photo if it doesn't open)


Interesting timing, and good comms, at a time when their former CISO is on trial for not disclosing a previous incident.


Perimeter-based security must die for people to stop doing insecure data sharing. The concept of a VPN-based LAN in this kind of remote-work setup is highly insecure and open to all kinds of abuse.

In fact it was always insecure, I would say. All you need is the Wi-Fi password and you have tonnes of super-sensitive information on Windows shares.


That hardcoded secret in the powershell script really was the key to the Uber ride-hailing kingdom – https://blog.gitguardian.com/uber-breach-2022.
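The anti-pattern is easy to scan for mechanically. Below is a minimal, illustrative secret scanner in Python; the regex and the PowerShell fragment (variable names included) are invented for demonstration, and real tools like GitGuardian or truffleHog use far richer rule sets plus entropy analysis.

```python
import re

# A crude pattern, purely illustrative; production scanners use
# hundreds of rules plus entropy checks to catch random-looking strings.
SECRET_RE = re.compile(
    r'(?i)(password|passwd|pwd|secret|token|api[_-]?key)\s*=\s*["\'][^"\']+["\']'
)

def find_hardcoded_secrets(text: str):
    """Return (line number, line) pairs that appear to embed a credential."""
    return [
        (n, line.strip())
        for n, line in enumerate(text.splitlines(), 1)
        if SECRET_RE.search(line)
    ]

# A made-up PowerShell fragment resembling the reported anti-pattern;
# the variable names and values here are invented, not from the actual script.
script = '''
$VaultUrl = "https://pam.corp.example"
$AdminUser = "pam_admin"
$AdminPassword = "Sup3rS3cret!"
'''

for n, line in find_hardcoded_secrets(script):
    print(n, line)  # flags only the $AdminPassword line
```

Running a check like this in CI (or just using an off-the-shelf scanner) would have caught the script before it ever sat on a network share.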


> sent a text message to an Uber worker claiming to be a corporate information technology person

I'm confused. Was the hacker claiming to be an IT person or was it the Uber worker?


The hacker


This is bad :-(


(Edited and removed) Let's start with the basics, many applications do not support webauthn, full stop. Even shops who roll it out are forced to keep holes open for business critical applications that don't support it. Security is not easy, and the entire field is not negligent - the problem is massively asymmetrically stacked against security practitioners, enhanced by poisonous attitudes like the ones expressed here.


An underlying issue is that Microsoft Active Directory does not support MFA of any kind at all. That's why third-party PAM vendors exist, but they don't really change the fact that the most widely deployed authentication service in business is effectively run by password hashes. That's the whole reason we so commonly see the "mimikatz scraped a hash from RAM and it was all over" write-up in incidents.

Smartcards exist, but their use is clunky in practice (I'm aware Yubikeys may now function this way). For everything else, Microsoft's current MFA solution is "just use the cloud".


Worse: un-salted hashes.


Worst: The hashes are actually the passwords (for all services that allow NTLM authentication).
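A tiny sketch of why the unsalted part matters. The NT hash is actually unsalted MD4 over the UTF-16LE password; SHA-256 stands in below because MD4 is absent from many modern OpenSSL builds, but the property demonstrated is the same: without a per-user salt, identical passwords always produce identical hashes, so one precomputed table (or one cracked account) breaks every account sharing that password.

```python
import hashlib
import os

def unsalted(pw: str) -> str:
    # Stand-in for an unsalted scheme like NTLM's NT hash
    # (which is really MD4 over the UTF-16LE password).
    return hashlib.sha256(pw.encode()).hexdigest()

def salted(pw: str, salt: bytes) -> str:
    # Per-user salt: the same password hashes differently per account.
    return hashlib.sha256(salt + pw.encode()).hexdigest()

# Two users who happen to pick the same password:
print(unsalted("hunter2") == unsalted("hunter2"))  # True: one lookup cracks both

# With per-user salts, the hashes diverge even for identical passwords:
sa, sb = os.urandom(16), os.urandom(16)
print(salted("hunter2", sa) == salted("hunter2", sb))  # False
```

And even salting doesn't help with the pass-the-hash point above: if the protocol accepts the stored hash directly as proof of identity, the hash *is* the password.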


> many applications do not support webauthn, full stop

You don't need the application to, only your IDP. Everything should be SSO from there.


What if that application doesn't support that setup either? So many services online barely manage to let you set up TOTP, nothing like this...


Stick it behind something like authentik before exposing it


[removed by author]


Fair play.


> Security is not easy, and the entire field is not negligent - the problem is massively asymmetrically stacked against security practitioners, enhanced by poisonous attitudes like the ones expressed here.

Is remaining in a role in which it's not possible to be effective negligent?


What is your alternative? Should we do nothing instead? Should all SWEs quit because they can't stop writing security bugs?


I don't have a specific alternative, but I think that if it's not possible to be effective in a role, one should decline it. It's possible to be an effective SWE while still writing (and hopefully also sometimes fixing) bugs.


One could argue the security industry exists because of the failings of computer science.

Maybe SWEs should start facing legal liability for their failings, like most other engineering disciplines. We would see this problem change overnight.


By productivity going completely down the drain.


That is not happening in commercial engineering though, is it?


Should doctors quit and find another field because people keep on breaking their legs?


If doctors not only had to treat broken legs but also somehow prevent people breaking them in the first place...


Forgive me for being frank, but how do people seriously fall for phishing scams? How do you work at a company like Uber and do something like click on a link in an email to claim a gift card? It’s insane to me.


First, people have fundamental drives that can override logical reason. Gift cards probably aren't your button. Maybe your buttons aren't even ones that are easily poked at by e-mail, I dunno. But EVERYONE has buttons somewhere that make them exploitable, and a lot of them ARE e-mail accessible... maybe as easy as offering free money, which is a pretty common one, and it's why marketers have been obsessed with it for ages.

Another factor is that a lot of people have jobs where they're really busy and deal with a lot of e-mail from people with all kinds of bizarre communication styles. Catch one of them with the right e-mail on the right day, and you'll get a careless click.

Black hats get to try every day across lots of people, and they only need it to work one time against one person to score.


> careless click

A careless click is not enough to compromise someone unless they are also running software that is not up to date. For example, how do you compromise someone if password login is disabled on all of the systems?


> How do you work at a company like Uber and do something like click on a link in an email to claim a gift card?

This is bottom of the barrel phishing. Attacks against big companies get _far_ more sophisticated. Things like complete mocks of internal login sites, realistic internal emails. There's big money in hacking big companies, and plenty of shady characters willing to invest in a potential payoff


Not everyone is paranoid and jaded. It's pointless to judge the stupid ones.

Bottom line is Uber got pwned, and the dirty laundry is now out in the open for all to see and inspect. Tomorrow it'll be for sale on the darkweb.


If the theory posted elsewhere in this thread is true, there was definitely gross negligence elsewhere in the security chain, so it's not the fault of that one person, for sure.


“Paranoid and jaded” is reading as “not stupid” to me here.


https://arstechnica.com/information-technology/2022/08/im-a-... has a good example of how sophisticated phishing attempts can be. Even for less sophisticated messages, they can cast a really wide net and just need to find someone on a busy day with an urgent request from "their boss".


Probably because it was more sophisticated than a gift card. There are some screenshots on Twitter from the hackers showing all kinds of internal Uber tools and admin panels, many on non-Uber domains (like uber.<third-party>.com). With all the internal email lists that employees are on for different departments in these large companies, it's not unimaginable that they click a link to a malicious site disguised as an Uber property, and enter their credentials.


This is one of my biggest fears about companies constantly outsourcing easily deployed internal apps as SaaS and just using mycompany.saasprovider.com

Normal users stand no chance, especially when there are URLs that are sketchy because oops, saasprovider already has a customer with your requested url, so you end up with

mycompany0.saasprovider.com or mycompany-1.saasprovider.com

It's terrible practice all around, and lazy systems and services administration.


Even without SaaS I get weird URLs on login pages. The login page for my personal Chase account is

    https://secure07a.chase.com/web/auth/#/logon/logon/chaseOnline?treatment=chase&lang=en
At least the eTLD+1 makes sense, but most people aren't going to recognize that generally the eTLD+1 is what you need to verify and that you can ignore the rest.
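A rough sketch of the difference: naive substring matching on the URL is exactly how lookalike hosts like chase.com.evil.example sneak through, while comparing the registrable domain (the eTLD+1) catches them. The two-label heuristic below is a simplification for illustration; a real implementation must consult the Public Suffix List (e.g. via a package like publicsuffix2), since suffixes like .co.uk break it.

```python
from urllib.parse import urlsplit

def naive_check(url: str, trusted: str) -> bool:
    # The classic bug: substring matching on the whole URL.
    return trusted in url

def registrable_domain(url: str) -> str:
    # Simplified eTLD+1: last two host labels. Real code needs the
    # Public Suffix List, since this is wrong for suffixes like .co.uk.
    host = urlsplit(url).hostname or ""
    return ".".join(host.split(".")[-2:])

phish = "https://secure07a.chase.com.evil.example/logon"
real = "https://secure07a.chase.com/web/auth/#/logon"

print(naive_check(phish, "chase.com"))           # True -- fooled
print(registrable_domain(phish) == "chase.com")  # False -- caught
print(registrable_domain(real) == "chase.com")   # True
```

Humans do the naive check in their heads under time pressure, which is why domain-bound authenticators (Webauthn) beat "look carefully at the URL" every time.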


It can be much more subtle than that, and I have seen people like you fall for it (or for other social-engineering attacks) more than once...


There is nobody that is totally immune to phishing. No one.


Everyone is susceptible to it... everyone.


We don't run internal honeypots and no one has ever been caught in our company, so I disagree. And yes, a reply may be "That you know of...", but considering that we run weekly audits and nothing has leaked, I can be 100% sure of it.


Do you really know for sure though? That is what keeps me up at night. The irony is that one of the leaked screenshots is of an internal security auditing/monitoring tool.


The other thing of note here is the timing: just yesterday an ex-attorney testified against the ex-security chief over the 2016 breach cover-up, and the very next day there is this breach. Given how damaging that testimony was, and that further discovery could follow from it, a breach the next day seems a bit too convenient. So is it possible this is a fake breach, staged in order to scrub further damaging evidence of others involved in the original 2016 cover-up? Something to think about; the notice also seemed to go up in record speed.


> So is it possible this is a fake breach in order to scrub further damaging evidence of others involved in the original 2016 cover up?

Can you elaborate what scrub means in this context? In what ways would this breach cover up the 2016 breach? Would the prosecutors suddenly lose their memory of the 2016 breach? Would evidence suddenly go missing?


By locking everyone out of the company systems and chat mediums, you can search for and purge evidence or artifacts, in the hope of not exposing further corruption or others involved in the past breach, all while treating it as a real threat to the feds and the public. In the original cover-up, they hired the attackers as employees with modified NDAs, per the testimony given on the 14th. With that info I wouldn't put it past them. There's also the FTC passing gig-worker protections, which would financially impact Uber on top of the lawsuits that will come out of the 2016 breach cover-up. Given those factors and their history, this could be used as a cost-savings measure, a way to limit the money they are going to pay out in both areas. The testimony would, in theory, open discovery for new evidence, which this could prevent if used as a clean-up job, for lack of a better term.


> In the org cover up they hired the attackers as employees with modified NDAs as per the testimony given on the 14th.

Do you have citation for this?


https://www.courthousenews.com/fired-uber-attorney-testifies... Sorry, hired as part of their bug bounty program. That's part of why the case is important: it could potentially lead to the death of bug bounty programs.


I actually really like this theory and it does sound like a possible explanation. I used to know quite a few ex-Uber techs and I wouldn't put it past their ethics to do so.


Looking at the screenshots, those could just be there to help support it, but some of them look like the security techs' own screenshots, judging by the names on the logins. Then again, that too can be viewed as questionable evidence given the context, when using this theory as the POV for analyzing the "evidence".


ah yes the famous "have a bigger breach so people stop worrying about your smaller breach" strategy



