> Here’s how the hack went down: Two attackers accessed a private GitHub coding site used by Uber software engineers and then used login credentials they obtained there to access data stored on an Amazon Web Services account that handled computing tasks for the company. From there, the hackers discovered an archive of rider and driver information. Later, they emailed Uber asking for money, according to the company.
I'm surprised Uber doesn't have their engineers set up 2FA for GitHub. It's super simple to implement and require organization-wide[1], and it would have prevented this. Then again, not storing credentials in GitHub would also have prevented this...
GitHub 2FA has been part of the first-day training/laptop setup for a while now (I joined in May), and there's security-related training in place as well. I was told there are also scanners in place now that check repos, gists, etc. for secrets, for exactly this type of mistake.
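For a sense of what such a scanner does (this is a hedged sketch, not Uber's or GitHub's actual implementation, which isn't public): the simplest version is a handful of regexes run over file contents. The patterns below are common heuristics; real scanners use many more, plus entropy checks.

```python
import re

# Illustrative heuristic patterns for common secret formats.
SECRET_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 uppercase/digit chars.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # PEM private key headers committed by mistake.
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    # Literal assignments like password = "..." with a non-trivial value.
    "generic_assignment": re.compile(r"(?i)\b(?:password|secret|api_key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs for every suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Run over every blob in a push (or every historical commit), this catches the obvious cases; it can't catch secrets that don't match any known shape.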
One snippet of the email the article didn't mention was that Sullivan's firing happened pretty much right after Dara learned of the breach and an investigation was conducted. It definitely inspires more confidence in leadership seeing that the CEO will not tolerate unethical behavior.
I think many people don't realize this, but the majority of the leadership team from like a year or two ago is now gone, including Travis.
Also, Uber has been hiring a lot of new people - the ratio of new people vs old timers is really high. I'm obviously just one anecdata point, but I believe new hires (and a lot of old timers) want Uber to be an ethical company, and many have joined specifically to tackle that challenge. One great example that comes to mind: when one board member made a sexist remark at an all-hands meeting a few months ago, by the end of that same day Liane Hornsey (who had just joined as the new head of HR) had him give up his seat.
There's a big push towards trying to make things right, with the Holden report, the 180 Days of Change campaign, the implementation of new training courses, an anonymous complaint hotline for employees, etc. And the unspoken message right now is pretty clear: inappropriate conduct _will_ get you fired, even if you are the head of your org.
Obviously there's still a lot of work to be done, but I think we're at least on the right track now.
I really like how your description gets at these policies creating a tipping point in the culture. Hearing about any one individually always sounded like a bandaid, but hearing about them together and then how you and other employees react to them is very encouraging. Good luck to you and the rest of the company.
This is good for Uber and their employees in the short term, but I can't help but think it's bad for their ideals in the long run. There are a lot of scenarios that look very bad for Uber economically and it would be a shame for a culture shift to coincide with the realization of one of them.
Honestly, I find that a lot of the economic discussion in the media is highly speculative (and dissonant with what I've seen circulated internally), and things get downright sensationalist on some topics, so I've been taking news about Uber with a large grain of salt.
> it would be a shame for a culture shift to coincide with the realization of one of them
I think everyone at Uber has at least some idea about the P&L situation, but there's no doubt in people's minds that we need to drop the go-fast-and-dubiously culture and embrace a do-things-properly culture. If anything, I think it's more likely that a major crisis would continue to drive home that idea.
The downvotes are likely because you're taking an Uber thread veering it off to GM's management and your children, neither of which have any relevance here.
Except that at GM the CEO was changed and the toxic corporate culture didn't change: it kept producing the same deadly car across CEOs after promising change, and did nothing different -- including not stopping production of a deadly vehicle.
I probably should have spoonfed the readers more. They grew up in a world that doesn't need critical thinking anymore so it's probably too much to ask for their brains to activate while reading on a website and have them put distinct ideas together to form a grander one.
Must. Downvote. Comments full of facts but from people I dislike. Must.... errooorrrrroooorrrrrr. 505.
It's okay. Every time I see downvotes here, I know I said something great but I just pissed someone in power off. I'm used to being a minority oppressed by a majority in power. It's no big deal. The system just builds people like that these days.
Still not making the connection from Uber to GM that you are trying to make. Because the GM CEO could not prevent teens across America from joyriding, the new Uber CEO is going to be unable to rein in the behavior of his own reports?
Either make a valid point or let your comments stand. Leave the /r/iamverysmart tantrums at the door.
We primarily use private phabricator and gitolite instances for internal stuff, but we also have OSS things in regular public GH repos. We do have a few private GH repos, but AFAIK, you're not supposed to version control internal stuff on GH, and there's no real reason to use a private GH repo, except for legal review prior to open sourcing.
I don't have any context on why someone would have put production secrets in a GH repo. If it had happened in my team, I would definitely have sounded the alarm at code review.
Well, I am one, but the things I say here are my individual opinions and observations. I just think that as an insider I get some insights that you'd normally not get from the media, and I figured I'd share them.
I'll believe it when they stop having stories like this every few months. They had over a year to report the breach, and they paid hush money instead. Typical Uber.
Yep, but think of all of the private keys and tokens used in automation servers (think CI) for pulling down source. Those don't have 2FA - because they don't login - but they have full access to most source.
In an organization of about 200 engineers across various products, with 1000+ GitHub repos and 10 or so different CI systems, we enforce 2FA at GitHub. I can still easily see how someone could gain access to source code with secrets in it.
> In an organization of about 200 engineers across various products, 1000+ github repos
Wait, what? That's 5+ repos per engineer. What on earth would warrant that level of granularity? I've only worked once in my career in a place that used more than 2-3 repositories total, and that was a "MegaTechGiant" with thousands of engineers.
Depends on the company you work at, but most tech companies I've been at have gone the "micro" services approach.
Example:
- 1 repo for the frontend
- 1 for each api
- 1 for the infrastructure terraform scripts
It's good for CI / CD and general code base organization. Also easier to track changes and handle security. You give devs access only to the repos they need to do their job.
Our team has a product with multiple integrations and internal apis, so we easily have 40+ repos.
I know that mentioning downvotes usually invites more downvotes, but...
I'm surprised you're being so heavily downvoted for your question. Engineering teams (and software companies) come in all shapes and sizes. It is absolutely reasonable for even an experienced engineer to have only worked at companies with a handful of repos.
Rather than downvoting, it would have been helpful to explain why your company has opted for such granularity (perhaps engineers or teams have a high level of autonomy, or your software is highly componentised and built from a great many, separately managed, parts).
Some CI setups benefit from a one-repo-per-service approach, as it makes it easier to figure out when an individual app has changed. In orgs where everything is in one giant repo, it can be difficult to establish what subset of your applications needs to be rebuilt when a commit is pushed.
I personally don't have a strong opinion about either way - they both have tradeoffs.
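To make the CI trade-off concrete, the monorepo side usually solves "what changed?" by mapping changed paths (e.g. from `git diff --name-only`) to services. A hedged sketch; the `services/<name>/` layout and the fall-back-to-full-rebuild rule are assumptions for illustration:

```python
from pathlib import PurePosixPath

def affected_services(changed_files, known_services):
    """Map changed file paths to the set of services that need rebuilding.

    Assumes each service lives under services/<name>/; any change outside
    that layout (shared libs, CI config) conservatively rebuilds everything.
    """
    affected = set()
    for path in changed_files:
        parts = PurePosixPath(path).parts
        if len(parts) >= 2 and parts[0] == "services" and parts[1] in known_services:
            affected.add(parts[1])
        else:
            return set(known_services)  # shared code changed: rebuild everything
    return affected
```

In practice you would feed it the output of something like `git diff --name-only origin/main...HEAD` from the CI job.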
There are just three of us in my company, and over 10 years I've worked on close to 80 projects for 30 different clients. Each project has its own repo. So 5+ repos per engineer is really not that much ;)
It's normal and expected. I have a few dozen. git makes it great to create little repos for lots of different things. They don't have to be production apps. They can be libraries, utilities, documentation, scripts, or just random crap I may want to refer to someday.
Could also be a company using a clone-and-pull-request workflow: 10-20 project repos, and then each developer has a bunch of project clones, including a few shared ones - like the common infrastructure stuff, ...
I can see that with a company that has grown day 1 around Github, especially during early startup stages with a variety of contributors but no formalised "organization".
What if Uber stores all its data across those git repositories (1000+)? Perhaps they use git as multi-versioned data storage? Perhaps better than Kafka (event sourcing)? Just a thought :)
You couldn't enforce 2FA on GHE for the longest time. GHE version 2.8.0 lists [0] "Enforce two-factor authentication" as a feature. 2.8.0 was released November 2016. According to the article,
> Kalanick, Uber’s co-founder and former CEO, learned of the hack in November 2016, a month after it took place, the company said.
I don't know if they were using GHE. If they were, at the time it did not come with a good way for them to enforce 2FA for users.
Yeah this was such a PITA several years ago... To solve the problem we ended up building a small proxy in Perl for the express purpose of adding 2FA to Github Enterprise.
> I don't know if they were using GHE. If they were, at the time it did not come with a good way for them to enforce 2FA for users.
Well, sort of - at the application level, that's true, but GHE is typically run behind a VPN. Certainly that should be the case for a company the size of Uber.
Even before GHE added 2FA, it shouldn't have been possible for a leaked set of login credentials to be used to access GHE, without some other sort of compromise (VPN cert, physical compromise of hardware, etc.).
At my company (mostly a Windows and Microsoft shop), my domain credentials are used to log into the VPN, and TFS, and Octopus. Compromising just that one set of credentials could effectively "own" our company. And I'm just a senior-ish developer.
Lateral movement by an attacker is a real thing. And while credential reuse is something most security-focused web companies are trying to mitigate, a push for SSO-like account management is seemingly undoing most of that effort inside the network if not done properly (specifically, with auditing and monitoring of behavior).
> my domain credentials are used to log into the VPN, and TFS, and Octopus. Compromising just that one set of credentials could effectively "own" our company.
This is why 2FA is important! I worked for a company that had a very similar setup: I essentially had a single "LDAP" password. But everything browser-based went through a single sign-on site, and it required 2FA (so you were never entering your password into random internal applications: there was exactly one page where you should log in). Terminal stuff had a similar flow that also required 2FA (e.g., for SSH). As a user, the experience was not painful at all.
It does seem like, however, from an operations standpoint, getting such a setup in the first place is not trivial.
If they are/were using GHE, I would expect (hope?) that they require some sort of VPN to get access to it, so my guess would be this was stored on github.com.
This is so gob-smackingly uncommon I started asking "do you require 2fa for your github accounts" as part of my interview questions when I was looking for jobs (i.e. I'd ask my interviewers).
I don't know how to feel knowing that there is even one software-focused company out there that doesn't enforce 2fa on its github accounts. Like... how?! Why?!
I really don't think using 2FA and the direct hacking of an individual developer's machine are all that comparable here.
Who cares about access to individual dev's machines if the credentials to access code on github are obtained - 2FA at least offers some degree of protection in this scenario. The scope for attack is extremely different.
> A backdoor can sit around and wait for the user to press the button.
There exist 2FA protocols[1] that permit tying the 2FA challenge to a particular context: you can't just take the response from the 2FA hardware and use it anywhere. In this regard, the malware doesn't get anything more than what they already have, and the 2FA still adds protection: if the malware is able to compromise your password (e.g., through keylogging) it doesn't immediately get access to everything you have access to. Now, of course, if you 2FA for some resource, then yes, at that point, you're probably doomed, but I don't believe that gets the malware anything new (e.g., once the auth is complete, if that results in a "user is logged in" cookie, the malware could just read that, and go to town.)
Compromise of a local machine is definitely bad, and not what you want, but 2FA tokens are not useless, even in that situation.
The hackers wanted access to the code to look for Amazon keys. For them it doesn't matter if they get the code from the internal GitHub or from a developer machine.
If you have an ultra-secure door, the thieves will just enter through your regular window.
How do you know they "wanted" access to look for Amazon keys? Do you know it wasn't from a blanket scan of github?
Sure, there are only 13 projects on https://uber.github.io/, but there are 169 on https://github.com/uber, and it only takes a short while to scan for access keys. There are plenty of open tools that will scan github for keys.
This may not have been targeted at Uber but a net for all of github with Uber being just one company that was hit up for cash. Unless you're saying that you know the motivations of the attackers.
You only need the ability to generate TOTP or U2F tokens. This is often done using a smartphone app, but can also be done by a desktop app like 1Password or a hardware device like a Yubikey:
https://github.com/blog/2071-github-supports-universal-2nd-f...
You can also record the TOTP secret in your automated login script, next to your password, and generate the token on the fly right there.
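For context on why that works: a TOTP code is just HMAC-SHA1 over a time-step counter with dynamic truncation (RFC 6238), so anything holding the secret can mint codes. A minimal stdlib sketch (function name and defaults are illustrative, not any particular library's API):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset : offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (`12345678901234567890` encoded in base32) at time 59, this yields `287082`, matching the published test vector. Which is exactly the point being made: if the secret sits next to your password, the "second factor" collapses back into one.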
It's things like that that make me wonder why TOTP tokens are supposed to be conceptually different from passwords. A TOTP scheme involves knowing a master password, and nothing else.
Recording a TOTP secret next to your password would make 2FA worthless, true. That’s why you should use hardware generators whenever possible. However, Github supports Fido/u2f which is conceptually superior to TOTP: The authentication secret is bound to the domain and the token generator verifies this. So even a software u2f implementation protects against phishing for example, while TOTP does not.
> use their personal phones seems like a very bad solution
Why? You're not any less secure by using a personal phone. What are the odds that an employee is going to be phished and have their phone compromised by the same entity?
IANAL, but here is my thinking: the problem with personal phones is that they are hard to audit. When a phone belongs to the corp, the corp owns the phone and can "probably" audit it as it wishes.
In order to install my work Gmail account on my phone, I had to install a program on my personal phone that lets admins wipe it remotely. This is not something that bothers me, because I expect to lose the phone at almost any time, so its contents are backed up continuously on a system I control.
Whereas that bothered me so much I refused to put email on my phone and told my employer they needed to provide me with a phone if they wanted me to always be on email.
I'm already answering emails out of office hours, which is for my employer's benefit, and they want to functionally own my phone because of it?
Unless you're talking about a 3 person start-up, wouldn't the use of github itself be a red flag? If you're a software company, you live and die by your source code. Why on earth would you rely on some other company to hold it for you? This seems as ridiculous as doing your bookkeeping on Google Docs.
I've never once worked in a company that permitted source code to leave the company network.
Because you trust their security better than your own, which at any organisation without a dedicated security team seems like a reasonable decision. I live and die by my money, too, and I give that to a private company to hold rather than protect it myself.
It's not just about who knows more about security. It's a trade-off, and you need to account for other factors like cost, availability/uptime, data integrity, total attack surface area and others. Honestly, I'm surprised this is such a controversial point of view, but judging by the downvotes it appears it is. You learn something new every day, I guess.
The point is that the trade-offs usually come down in favor of using GitHub Enterprise (or whatever other well-regarded, trusted enterprise system). The availability and uptime are your own, because it’s self-hosted, like git. The data integrity is also your own. The security is better than probably any other VCS interface over git, with the possible exception of GitLab, and almost certainly better than what an organization could come up with on their own if it’s not their core competency. Unless you’re literally using straight git, GitHub Enterprise (or again, whatever other competitor) usually enhances team productivity. The attack surface is larger than git, sure, but the rational solution to that would really be to use no interface over git, because GitHub Enterprise is as safe as they come.
I think you’ve misinterpreted people’s reactions. It’s not at all controversial to use other companies’ services for your most sensitive assets; it’s your opinion that appears controversial to them. If you’re in control of your own servers, what remains is to trust GitHub Enterprise not to literally phone home your source code or to enable remote code execution on your own server. There are myriad information security policies and compliance methodologies for compartmentalizing, quantifying, and sharing that risk.
For what it’s worth, having personally performed security assessments for over 50 different companies across the gamut of size/maturity, nearly all of them use a centralized VCS hosted or produced by GitHub or Bitbucket (and nowadays, occasionally GitLab too).
GitHub Enterprise is a different beast, as it's self-hosted. My comment was in response to the parent's mention of companies storing their source code on GitHub, which might imply external hosting. I suppose it was ambiguous.
Right, but none of those things is necessarily a home run for self-hosting your central git repository. Particularly in today's world, where you likely have remote workers and don't necessarily have any other servers you're managing, anything you could call a "local" network or even a VPN.
I've been surprised how many commercial, closed-source projects have opted for Github in recent years. While I would probably prefer to self-host (Gitlab, or similar) in order to reduce dependencies, I do see the benefits. Having recently worked at an organisation hosting exclusively on Github, it made collaboration with remote contractors and third parties very straightforward and helped eliminate much of the maintenance burden on our small team.
You have a full checkout on your laptop, and probably so do a whole bunch of other developers. With git you can also have random backup computers do the same thing! You don't have to rely on GitHub alone for this.
Uber engineer here: we have 2FA set up for everything. Starting my day takes about 5 different 2FA checks (SSH access, AWS, Phabricator, team chat, etc.)
I know Uber has a strong engineering culture, which is why I was so surprised. I think philsnow's assessment that organization-wide required 2FA wasn't available for GitHub Enterprise at the time of the hack is probably correct.
Although more and more applications support SAML for SSO, much of the SaaS world is disparate and siloed. There's definitely something to be said for centralised user management on a homogeneous system. User leaves your organisation? Just retire them in LDAP.
2FA wouldn't necessarily have solved this: if the hackers had access to an engineer's SSH keypair (e.g. a stolen laptop), they could clone repos as they pleased. 2FA isn't a silver bullet.
Maybe it's just me, but could "private GitHub coding site" have meant a private GitHub repo with GitHub Pages turned on?
If that were the case, there would be no authentication whatsoever to access the closed-source site; the hacker would have just needed to guess the right url.
The most I've ever personally seen a company do is require a VPN for their privately-hosted repos. For others using GitHub or Bitbucket? Never anything beyond a standard login.
Two-factor won't protect you from a spear-phishing attack.
The attacker can submit your info to GitHub the moment you submit to the malicious site. You receive the token via SMS as expected, enter it on the second page of the malicious site, granting them access.
I assume it was password reuse from one of their engineers or something similar. If you could compromise GitHub itself there would probably be higher value targets (source code for upcoming AAA games, Coinbase, government organizations, etc.)
AAA games have budgets in the millions. Threatening a full release would likely net you much more than a few hundred thousand, and without requiring any secondary attack.
We use a tool under a Linux Foundation project called anteater https://github.com/opnfv/releng-anteater, which does the same thing (but for a Jenkins / Gerrit workflow). A key difference from talisman is that anteater uses standard regex rather than code to seek out strings, so anyone can add their own strings / file names easily to a simple YAML file. Likewise, they can use regex to provide a waiver, should something be incorrectly reported.
I am thinking now would be a good time to port it to working with webhooks as well.
There are a few SaaS offerings that will let you do that. LastPass and 1Password are two commonly used ones.
Or you can use something like KeePass to store a database in a shared location if you don't trust the SaaS offerings.
If you are looking at storing credentials for automation purposes, and don't have a secret store built in, you could look at something like Hashicorp Vault to help provide this for you
We're using Keepass / MacPass password protected vault shared with the team using Dropbox. It's really good and essentially free to use if you use a free Dropbox account.
The key part is "Warning: Once you have pushed a commit to GitHub, you should consider any data it contains to be compromised. If you committed a password, change it! If you committed a key, generate a new one."
Removing the secrets from the repository is nice to have, but not that necessary - what is mandatory is to ensure that the compromised secrets are no longer useful, since they aren't secret any more and won't be ever again.
I am rather disappointed in github for publishing this guide. The portion at the top stating
> Warning: Once you have pushed a commit to GitHub, you should consider any data it contains to be compromised. If you committed a password, change it! If you committed a key, generate a new one.
is a good argument for why letting users erase this data from history doesn't really help: it's already out there, so no matter how painful or convoluted your process for regenerating auth credentials is, you need to do it if you've published them into your SCM. If the process is painful, you might want to simplify it, because you'll probably need to do it again sometime in the future... yes, even you, large corporate workers who have no control over credential regeneration; an arduous process leads to credential sharing between projects, which is another horrible thing.
Yeah, but hopefully they can't do much if they just have your code base. If the secrecy of your code is the only thing stopping hackers from exploiting you, you have some gaping holes in your infrastructure. That said, there's nothing wrong with using secrecy as an additional barrier, but it shouldn't be the only one - and if it's not the only one, you're not "so owned at that point".
“Just” leaking full source could be enough to destroy a lot of IP-based companies. A lot of companies stay wealthy because their IP is so huge that nobody can afford to develop competitive alternatives anymore (Adobe, Microsoft Office, Salesforce etc). Some of them have actual “secret sauce” that they cannot afford to share (suggestion engines, biotech processes etc). Even a service like Github, which relies on others entrusting their work to them, would take a humongous reputation hit from a leak like that.
I don't think either of those companies would cease to exist if their code bases leaked online today. Sure, someone might get something to build, but there are surely a LOT of things around the code bases needed to support all of this, which means the leaked code would mostly serve as a study of the software in general (and for finding holes, obviously).
GitHub is a bit of an unfair comparison, as their business is literally to keep your code private, so if it leaks then of course it would be a hard hit. For the general company, I think leaking access credentials is a much bigger (but easier to fix) problem than leaking the source code itself.
> I don't think either of those companies would cease to exist if their code bases leaked online today.
A serious Photoshop clone that can match PS feature for feature would wipe out Adobe; people cannot wait to get rid of them. 25% of MS revenues come directly from Office and another 25% from Windows or other commercial offerings that are basically driven by Office, so yeah, MS would survive a working Office clone, but they would be deeply wounded. They pulled all the dirty tricks in the book to keep competitors from integrating seamlessly; having the real code responsible for their formats available in the open would hurt them massively.
These companies are as big as they are because they made the right moves at the right time, and now they have spent so many man-decades on their codebases that nobody can realistically hope to catch up starting from scratch; but having a good look at their codebases would likely kickstart scores of competitors with very good chances of replacing them in a very short time.
> For the general company, I think leaking access credentials is a much bigger (but easier to fix) problem than leaking the source code itself.
Credentials are a means to an end: protecting something. If you are Ashley Madison, your valuable IP is your database of users and their preferences; but if you are Microsoft or Adobe, what credentials protect is your source code. Adobe survived their user credentials being leaked, like so many other companies. It would have hurt much more had the entire PS codebase leaked.
But a competing company can't just give a copy of the leaked source code to their developers and tell them to go to town. Even by employing clean room design, you can't get around all the patents that likely protect many of the features that Photoshop users consider crucial.
"If the secrecy of your code is the only thing stopping hackers from exploiting you"
I hate these types of arguments. No one ever said that.
Losing your code base is terrible. I view it as losing a journal: what your company tries, tests you run, funny comments, funny mistakes. I mean, they can post it on the net, blackmail team members, impersonate team members, forge for leaks, sell it, push to prod from compromised accounts or CI systems -- seems bad to me. Sure, don't keep AWS keys in there.
Glad to be talking with you too! :) I didn't mean to imply you said something you didn't, only that I would consider access keys to various services to be of much more importance than the code base itself. I read your comment as "Doesn't matter about the access keys, if they have your source code, you're screwed no matter what", which in that case would seem a bit strong.
Also "pushes to prod from compromised accounts, CI systems" seems more related to access keys and account security rather than the actual code base.
But hey, in the end I'm no security expert so what do I know.
If they have access to the code inside Github, would they have been able to push their own changes to the code without anyone noticing?
Maybe pushing something that was labeled as a "security patch" but was actually a disguised vulnerability? I could see not even checking into that, and just downloading it. But I'm on a small team. Do big companies have procedures to protect against this?
Depends on how they get access. If they got control of one of the user accounts with push access, they could surely push code (but unsure about "without anyone noticing", depends on their own development processes I guess). However, if they got access to the code by reading some part of the memory/storage holding the code, without actually gaining access through authentication, they wouldn't be able to change it.
Really surprising to see that sensitive credentials were checked in to VCS. Apart from peer code review, how can a company avoid developers checking in sensitive data to VCS?
I really wish AWS would stop enabling master API keys by default. As soon as you create an AWS account you are given API keys which basically have SUDO permissions to your entire account. That is super dangerous and is probably the same key set that these hackers got ahold of. AWS needs to disable these full access API keys by default and instead should encourage users to generate keys for specific access to limit what they can do.
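Concretely, the alternative to the account-wide keys is an IAM user or role with a policy scoped to the workload. A hedged example (the bucket name is made up): this policy grants read-only access to a single S3 bucket and nothing else, so a leaked key can't touch the rest of the account.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }
  ]
}
```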
Store credential information where it is used. It is not used by the repository, so it is an improper location for it.
If someone gains access to a system that uses the credentials, then there is, in principle, no difference between puppeteering that system versus stealing its credentials.
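One common way to follow the "store it where it's used" advice: keep the secret out of the repo entirely and inject it into the environment of the system that uses it, failing loudly if it's missing. A minimal sketch; the variable name is just an example.

```python
import os

def require_secret(name):
    """Read a secret from the process environment, failing loudly if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Missing required secret {name!r}; set it in the deploy "
            "environment (or a secret store), never in version control."
        )
    return value
```

The repo then only ever contains the variable's name, never its value.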
Don't check secrets into VCS, folks!