
> Here’s how the hack went down: Two attackers accessed a private GitHub coding site used by Uber software engineers and then used login credentials they obtained there to access data stored on an Amazon Web Services account that handled computing tasks for the company. From there, the hackers discovered an archive of rider and driver information. Later, they emailed Uber asking for money, according to the company.

Don't check secrets into VCS, folks!




I'm surprised Uber doesn't have their engineers set up 2FA for GitHub. It's super simple to implement and require organization-wide[1], and it would have prevented this. Then again, not storing credentials in GitHub would also have prevented this . . .

[1] https://help.github.com/articles/requiring-two-factor-authen...


Github 2FA has been part of the first-day training/laptop setup for a while now (I joined in May) and there's security-related training in place as well. I was told there are also scanners in place now that check repos, gists, etc. for secrets, for exactly this type of mistake.

One snippet of the email the article didn't mention was that Sullivan's firing happened pretty much right after Dara learned of the breach and an investigation was conducted. It definitely inspires more confidence in leadership seeing that the CEO will not tolerate unethical behavior.


Uber will not tolerate unethical behavior, you've got to be joking!?!?


I think many people don't realize this, but the majority of the leadership team from like a year or two ago is now gone, including Travis.

Also, Uber has been hiring a lot of new people - the ratio of new people vs old timers is really high. I'm obviously just one anecdata point, but I believe new hires (and a lot of old timers) want Uber to be an ethical company, and many have joined the company specifically to tackle that challenge. One great example that comes to mind was when one board member made a sexist remark at an all-hands meeting a few months ago and by the end of that same day, Liane Hornsey (who had just joined as the new head of HR) had him give up his seat.

There's a big push towards trying to make things right, with the Holden report, the 180 days of change campaign, the implementation of new training courses, an anonymous complaint hotline for employees, etc. And the unspoken message right now is pretty clear: inappropriate conduct _will_ get you fired, even if you are the head of your org.

Obviously there's still a lot of work to be done, but I think we're at least on the right track now.


I really like how your description gets at these policies creating a tipping point in the culture. Hearing about any one individually always sounded like a bandaid, but hearing about them together and then how you and other employees react to them is very encouraging. Good luck to you and the rest of the company.


This is good for Uber and their employees in the short term, but I can't help but think it's bad for their ideals in the long run. There are a lot of scenarios that look very bad for Uber economically and it would be a shame for a culture shift to coincide with the realization of one of them.


Honestly, I find that a lot of economic discussions in the media are highly speculative (and dissonant with what I've seen circulated internally), and things get just downright sensationalist on some topics, so I've been taking news about Uber with a large grain of salt.

> it would be a shame for a culture shift to coincide with the realization of one of them

I think everyone at Uber has at least some idea about the P&L situation, but there's no doubt in people's minds that we need to drop the go-fast-and-dubiously culture and embrace a do-things-properly culture. If anything, I think it's more likely that a major crisis would continue to drive home that idea.


I think the commenter meant the new CEO will not tolerate unethical behavior.


The new CEO will not tolerate new unethical behaviour.

Hopefully he will also slowly eradicate the existing unethical behaviour.


The new CEO will fix everything, just like the last 3 GM CEOs changed their corporate culture and stopped them from making cars that kill teenagers...

... crap. My kids won't be buying a GM car.


The downvotes are likely because you're taking an Uber thread and veering it off to GM's management and your children, neither of which has any relevance here.


Except for the CEO being changed while a toxic corporate culture didn't change and kept producing the same deadly car across CEOs, after promises of change that led to nothing different--including not stopping production of a deadly vehicle.

I probably should have spoonfed the readers more. They grew up in a world that doesn't need critical thinking anymore so it's probably too much to ask for their brains to activate while reading on a website and have them put distinct ideas together to form a grander one.

Must. Downvote. Comments full of facts but from people I dislike. Must.... errooorrrrroooorrrrrr. 505.

It's okay. Every time I see downvotes here, I know I said something great but I just pissed someone in power off. I'm used to being a minority oppressed by a majority in power. It's no big deal. The system just builds people like that these days.


Still not making the connection from Uber to GM that you are trying to make. Because the GM CEO could not prevent teens across America from joyriding, the new CEO is going to be unable to rein in the behavior of his own reports?

Either make a valid point or let your comments stand. Leave the /r/iamverysmart tantrums at the door.


Americano Saviour Complex at work: someone will come, a stranger in our midst, and will make the problems go away. Preferably with a gun and a sword.

It's always a person, it's never an institution or organisation, never a boring measure like bureaucratic oversight or well-made laws.


It's possible and even likely that this happened post-hack.


True, I just wanted to shed some light on the current state of affairs here.


Found the newest marketing hire...


Hah, setting the example himself: I remember him yelling and cursing at an Uber driver in NYC, very ethical.

Good luck and I hope you're doing it for the money, cause nobody should buy the "Uber is an ethical company" bs.


Dara Khosrowshahi is the CEO now, not Travis Kalanick... Maybe catch up on the facts before reaching for the pitchforks? :)


Ha, the trouble is that pitchforks are more fun than facts.


Are you using Github Enterprise? Is it available from outside of the Uber network?


We primarily use private phabricator and gitolite instances for internal stuff, but we also have OSS things in regular public GH repos. We do have a few private GH repos, but AFAIK, you're not supposed to version control internal stuff on GH, and there's no real reason to use a private GH repo, except for legal review prior to open sourcing.

I don't have any context on why someone would have put production secrets in a GH repo. If it had happened in my team, I would definitely have sounded the alarm at code review.


Just curious, are you speaking as an Uber employee?


Well, I am one, but the things I say here are my individual opinions and observations. I just think that as an insider I get some insights that you'd normally not get from the media, and I figured I'd share them.


I'll believe it when they stop having stories like this every few months. They had over a year to report the breach, and they paid hush money instead. Typical Uber


> They had over a year to report the breach, and they paid hush money instead.

Yeah, I'm totally with you there. Not cool :(


Yep, but think of all of the private keys and tokens used in automation servers (think CI) for pulling down source. Those don't have 2FA - because they don't log in - but they have full access to most source.

In an organization of about 200 engineers across various products, 1000+ GitHub repos, and 10 or so different CI systems, we enforce 2FA at GitHub. I can still easily see how someone could gain access to source code with secrets in it.


> In an organization of about 200 engineers across various products, 1000+ github repos

Wait, what? That's 5+ repos per engineer. What on earth would warrant that level of granularity? I've only worked once in my career in a place that used more than 2-3 repositories total, and that was a "MegaTechGiant" with thousands of engineers.


Depends on the company you work at, but most tech companies I've been at have gone the "micro" services approach.

Example:

- 1 repo for the frontend
- 1 for each API
- 1 for the infrastructure Terraform scripts

It's good for CI / CD and general code base organization. Also easier to track changes and handle security. You give devs access only to the repos they need to do their job.

Our team has a product with multiple integrations and internal apis, so we easily have 40+ repos.


Plus one repo for every open source dependency you fork


I know that mentioning downvotes usually invites more downvotes, but...

I'm surprised you're being so heavily downvoted for your question. Engineering teams (and software companies) come in all shapes and sizes. It is absolutely reasonable for even an experienced engineer to have only worked at companies with a handful of repos.

Rather than downvoting, it would have been helpful to explain why your company has opted for such granularity (perhaps engineers or teams have a high level of autonomy, or your software is highly componentised and built from a great many, separately managed, parts).


Some CI setups benefit from a one-repo-per-service approach, as it makes it easier to figure out when an individual app has changed. In orgs where everything is in one giant repo, it can be difficult to establish what subset of your applications needs to be rebuilt when a commit is pushed.

I personally don't have a strong opinion either way - they both have tradeoffs.
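
For what it's worth, on the monorepo side people usually end up scripting something like the following to work out what to rebuild (a rough sketch in Python; the service-to-directory mapping is made up):

    #!/usr/bin/env python
    # Rough sketch: figure out which top-level service dirs changed in a push.
    # Assumes each service lives in its own top-level directory (made-up names below).
    import subprocess

    SERVICES = {"frontend", "billing-api", "driver-api"}  # hypothetical layout

    def changed_services(old_rev, new_rev):
        out = subprocess.check_output(
            ["git", "diff", "--name-only", old_rev, new_rev]).decode()
        touched = {path.split("/")[0] for path in out.splitlines() if path}
        return SERVICES & touched

    if __name__ == "__main__":
        print(changed_services("origin/master", "HEAD"))  # rebuild only these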


There are just three of us in my company and after 10 years I've worked on close to 80 projects for 30 different clients. Each project has its own repo, so 5+ per engineer is really not that much ;)


It's normal and expected. I have a few dozen. git makes it great to create little repos for lots of different things. They don't have to be production apps. They can be libraries, utilities, documentation, scripts, or just random crap I may want to refer to someday.


It depends upon the culture. Some places favour a project repo, others a repo per microservice/job.


Could also be a company using a clone/pull-request workflow: 10-20 project repos, and then each developer has a bunch of project clones, including a few shared ones - like the common infrastructure stuff, ...

I can see that with a company that has grown day 1 around Github, especially during early startup stages with a variety of contributors but no formalised "organization".


Agencies may create multiple repos per client / project.


Good point - I can see the value of having numerous repos if you have multiple unrelated clients’ code bases.


How about this: what if Uber stores all data across those git repositories (1000+)? Perhaps they use git as multi-versioned data storage? Perhaps better than Kafka (event sourcing thing?). Just a thought :)


This is almost certainly what actually happened.


You couldn't enforce 2FA on GHE for the longest time. GHE version 2.8.0 lists [0] "Enforce two-factor authentication" as a feature. 2.8.0 was released November 2016. According to the article,

> Kalanick, Uber’s co-founder and former CEO, learned of the hack in November 2016, a month after it took place, the company said.

I don't know if they were using GHE. If they were, at the time it did not come with a good way for them to enforce 2FA for users.

[0] https://enterprise.github.com/releases/2.8.0


Yeah this was such a PITA several years ago... To solve the problem we ended up building a small proxy in Perl for the express purpose of adding 2FA to Github Enterprise.


> I don't know if they were using GHE. If they were, at the time it did not come with a good way for them to enforce 2FA for users.

Well, sort of - at the application level, that's true, but GHE is typically run behind a VPN. Certainly that should be the case for a company the size of Uber.

Even before GHE added 2FA, it shouldn't have been possible for a leaked set of login credentials to be used to access GHE, without some other sort of compromise (VPN cert, physical compromise of hardware, etc.).


At my company (mostly a Windows and Microsoft shop), my domain credentials are used to log into the VPN, and TFS, and Octopus. Compromising just that one set of credentials could effectively "own" our company. And I'm just a senior-ish developer.

Lateral movement by an attacker is a real thing. And while credential reuse is something most security-focused web companies are trying to mitigate, a push for "SSO"-like account management can undo most of that effort inside the network if it isn't done properly (specifically, with auditing and monitoring of behavior).


> my domain credentials are used to log into the VPN, and TFS, and Octopus. Compromising just that one set of credentials could effectively "own" our company.

This is why 2FA is important! I worked for a company that had a very similar setup: I essentially had a single "LDAP" password. But: everything in the web browser went through a single sign-on site, and it required 2FA (and so, you were never entering your password into even random internal applications: there was exactly one page where you should log in). Terminal stuff had a similar flow that also required 2FA (e.g., for SSH). As a user, the experience was not painful at all.

It does seem like, however, from an operations standpoint, getting such a setup in the first place is not trivial.


If they are/were using GHE, I would expect (hope?) that they require some sort of VPN to get access to it, so my guess would be this was stored on github.com.


> I don't know if they were using GHE.

They don't use GHE, they use Phabricator.


This is so gob-smackingly uncommon I started asking "do you require 2fa for your github accounts" as part of my interview questions when I was looking for jobs (i.e. I'd ask my interviewers).

I don't know how to feel knowing that there is even one software-focused company out there that doesn't enforce 2fa on its github accounts. Like... how?! Why?!


2fa is just another hurdle. Good to have, but by no means a silver bullet.

Just one of the many ways to bypass it in this case: hack a developer machine and look at the local checkout.


I really don't think using 2FA and the direct hacking of an individual developer's machine are all that comparable here.

Who cares about access to individual dev's machines if the credentials to access code on github are obtained - 2FA at least offers some degree of protection in this scenario. The scope for attack is extremely different.


Laptops and desktops are by far the weakest link and a trove of passwords, tokens, code, logs, chats, emails.

They run browsers, communication tools, all sort of product experiments and testbeds, and they even connect to random airport/hotel wifi.

Attack a laptop and all software and hardware 2FA tokens are useless. A backdoor can sit around and wait for the user to press the button.


> A backdoor can sit around and wait for the user to press the button.

There exist 2FA protocols[1] that permit tying the 2FA challenge to a particular context: you can't just take the response from the 2FA hardware and use it anywhere. In this regard, the malware doesn't get anything more than what it already has, and the 2FA still adds protection: if the malware is able to compromise your password (e.g., through keylogging) it doesn't immediately get access to everything you have access to. Now, of course, if you 2FA for some resource, then yes, at that point, you're probably doomed, but I don't believe that gets the malware anything new (e.g., once the auth is complete, if that results in a "user is logged in" cookie, the malware could just read that and go to town).

Compromise of a local machine is definitely bad, and not what you want, but 2FA tokens are not useless, even in that situation.

[1]: https://developers.yubico.com/U2F/Protocol_details/Overview....


The hackers wanted access to the code to look for Amazon keys. For them it doesn't matter if they get the code from the internal GitHub or from a developer machine.

If you have an ultra-secure door, the thieves will just enter through your regular window.


How do you know they "wanted" access to look for Amazon keys? Do you know it wasn't from a blanket scan of github?

Sure, there are only 13 projects on https://uber.github.io/, but there are 169 on https://github.com/uber, and it only takes a short while to scan for access keys. There are plenty of open tools that will scan github for keys.

This may not have been targeted at Uber but a net cast over all of GitHub, with Uber being just one company that was hit up for cash. Unless you're saying that you know the motivations of the attackers.


The ones that care about the security of their code base host it internally anyway.


To use 2fa on github you need a mobile phone.

Do you give every employee a mobile phone, or do you ask your employees to use their own personal phones?

Asking them to use their personal phones seems like a very bad solution. Many software companies do not routinely give developers mobile phones...


> To use 2fa on github you need a mobile phone.

This is incorrect.

You only need the ability to generate TOTP or U2F tokens. This is often done using a smartphone app, but can also be done by a desktop app like 1Password or a hardware device like a Yubikey: https://github.com/blog/2071-github-supports-universal-2nd-f...


You can also record the TOTP secret in your automated login script, next to your password, and generate the token on the fly right there.

It's things like that that make me wonder why TOTP tokens are supposed to be conceptually different from passwords. A TOTP scheme involves knowing a master password, and nothing else.
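
For anyone curious how trivial that is: with a library like pyotp it's a couple of lines (a sketch; the secret below is a fake placeholder):

    import pyotp

    # The same base32 secret GitHub shows during 2FA enrollment (placeholder value here).
    totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")
    print(totp.now())  # current 6-digit code, valid for ~30 seconds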


Recording a TOTP secret next to your password would make 2FA worthless, true. That’s why you should use hardware generators whenever possible. However, GitHub supports FIDO U2F, which is conceptually superior to TOTP: the authentication secret is bound to the domain and the token generator verifies this. So even a software U2F implementation protects against phishing, for example, while TOTP does not.


Do you know of any open source software implementations of U2F?


Firefox includes one IIRC, and there's GitHub's SoftU2F for Mac: https://github.com/github/SoftU2F


> use their personal phones seems like a very bad solution

Why? You're not any less secure by using a personal phone. What are the odds that an employee is going to be phished and have their phone compromised by the same entity?


IANAL, but here is my thinking: the problem with personal phones is they are hard to audit. When a phone belongs to the corp, the corp owns the phone and "probably" can audit it as it wishes.


In order to install my work Gmail account on my phone, I had to install a program on my personal phone that lets admins wipe it remotely. This is not something that bothers me, because I expect to lose the phone almost anytime, so the contents on it are backed up continuously on a system I control.


Whereas that bothered me so much I refused to put email on my phone and told my employer they needed to provide me with a phone if they wanted me to always be on email.

I'm already answering emails out of office hours, which is for my employer's benefit, and they want to functionally own my phone because of it?


Pretty high actually.. I mean it's a lot of money at stake.


It's actually getting more common to give out phones, at least in companies that really care about security.

For companies that don't do that Github also offers the option of FIDO U2F compatible keys.


It works with u2f as well.


Unless you're talking about a 3 person start-up, wouldn't the use of github itself be a red flag? If you're a software company, you live and die by your source code. Why on earth would you rely on some other company to hold it for you? This seems as ridiculous as doing your bookkeeping on Google Docs.

I've never once worked in a company that permitted source code to leave the company network.


Because you trust their security better than your own, which at any organisation without a dedicated security team seems like a reasonable decision. I live and die by my money, too, and I give that to a private company to hold rather than protect it myself.


What makes you think you (or most devs for that matter) know more about security than Github's security team?


It's not just about who knows more about security. It's a trade-off, and you need to account for other factors like cost, availability/uptime, data integrity, total attack surface area and others. Honestly, I'm surprised this is such a controversial point of view, but judging by the downvotes it appears it is. You learn something new every day, I guess.


The point is that the trade-offs usually come down in favor of using GitHub Enterprise (or whatever other well-regarded, trusted enterprise system). The availability and uptime are your own, because it’s self-hosted, like git. The data integrity is also your own. The security is better than probably any other VCS interface over git, with the possible exception of GitLab, and almost certainly better than what an organization could come up with on their own if it’s not their core competency. Unless you’re literally using straight git, GitHub Enterprise (or again, whatever other competitor) usually enhances team productivity. The attack surface is larger than git, sure, but the rational solution to that would really be to use no interface over git, because GitHub Enterprise is as safe as they come.

I think you’ve misinterpreted people’s reactions. It’s not at all controversial to use other companies’ services for your most sensitive assets; it’s your opinion that appears controversial to them. If you’re in control of your own servers, what remains is to trust GitHub Enterprise not to literally phone home your source code or to enable remote code execution on your own server. There are myriad information security policies and compliance methodologies for compartmentalizing, quantifying, and sharing that risk.

For what it’s worth, having personally performed security assessments for over 50 different companies across the gamut of size/maturity, nearly all of them use a centralized VCS hosted or produced by GitHub or Bitbucket (and nowadays, occasionally GitLab too).


GitHub Enterprise is a different beast, as it's self-hosted. My comment was in response to the parent's mention of companies storing their source code on GitHub, which might imply external hosting. I suppose it was ambiguous.


Right, but none of those things is necessarily a home run for self-hosting your central git repository. Particularly in today's world, where you likely have remote workers and don't necessarily have any other servers you're managing, anything you could call a "local" network or even a VPN.


> Honestly, I'm surprised this is such a controversial point of view

HN users tend toward a very pro-SaaS stance.


I've been surprised how many commercial, closed-source projects have opted for Github in recent years. While I would probably prefer to self-host (Gitlab, or similar) in order to reduce dependencies, I do see the benefits. Having recently worked at an organisation hosting exclusively on Github, it made collaboration with remote contractors and third parties very straightforward and helped eliminate much of the maintenance burden on our small team.


You have a full checkout on your laptop and probably on a whole bunch of other developers' laptops. With git you can also have random backup computers do the same thing! You don't have to rely on GitHub alone for this.


uber engineer here, we have 2fa set up for everything. Starting my day takes about 5 different 2fa checks (ssh access, aws, phabricator, team chat, etc)


I know Uber has a strong engineering culture, which is why I was so surprised. I think philsnow's assessment that organization-wide required 2FA wasn't available for GitHub Enterprise at the time of the hack is probably correct.


That sounds really inefficient


That sounds reasonably secure and quite common for a big tech company.


Although more and more applications support SAML for SSO, much of the SaaS world is disparate and siloed. There's definitely something to be said for centralised user management on a homogeneous system. User leaves your organisation? Just retire them in LDAP.


2FA wouldn't necessarily have solved this: if the hackers had access to an engineer's SSH keypair (e.g. from a stolen laptop), they could clone repos as they pleased. 2FA isn't a silver bullet.


Could use a Yubikey (or similar) for SSH access.


Unless 2fa was bypassed with the token you get from GitHub in order to use the git client via https.


Maybe it's just me, but could "private GitHub coding site" have meant a private GitHub repo with GitHub Pages turned on?

If that were the case, there would be no authentication whatsoever to access the closed-source site; the hacker would have just needed to guess the right url.


Working at another large tech company, this does not surprise me.

Edit: I mean it would surprise me if it wasn't recommended practice, but it would also surprise me if it was somehow strictly enforced.


The most I've ever personally seen a company do is require a VPN for their privately-hosted repos. For others using GitHub or Bitbucket? Never anything beyond a standard login.


2FA doesn't help if they used SSH access


It’s also required for SSH access to Uber’s servers.


that doesn't protect you from GitHub employees snooping around.


Couldn't you say the same thing about any commercial web platform? Like AWS?


yes of course


Or anyone who manages to breach GitHub's defenses.


Two-factor won't protect you from a spear-phishing attack.

The attacker can submit your info to GitHub the moment you submit to the malicious site. You receive the token via SMS as expected, enter it on the second page of the malicious site, granting them access.


Do we know how the attackers accessed the github repo? If it was via malware on the employee's machine, or cookie theft then 2fa wouldn't have helped.


No, 2FA would not have prevented disclosure of credentials in GitHub. The fix for that is to not check credentials in to GitHub. Nothing else.


I mean they don't say how they accessed the GitHub repo or whether there was a vulnerability in Github itself that allowed access


I assume it was password reuse from one of their engineers or something similar. If you could compromise GitHub itself there would probably be higher value targets (source code for upcoming AAA games, Coinbase, government organizations, etc.)


> If you could compromise GitHub itself there would probably be higher value targets (source code for upcoming AAA games

I'm intrigued. Why would that be a higher-value target?


AAA games have budgets in the millions. Threatening full release would likely net you much more than a few hundred thousands, and without requiring any secondary attack.


Are many (any?) AAA studios using private Github repos for development?


I mean 100k is a lot of money and there is no saying they didn't hit those guys also


You can use tools like Talisman, which registers a Git hook to check if you are checking in anything that looks like a secret.

https://github.com/thoughtworks/talisman


We use a tool under a Linux Foundation project called anteater https://github.com/opnfv/releng-anteater, which does the same thing (but is for a Jenkins / Gerrit workflow). A key difference, from looking at Talisman, is that anteater uses standard regex rather than code to seek out strings, so anyone can easily add their own strings / file names to a simple YAML file. Likewise they can use regex to provide a waiver, should something be incorrectly reported.

I am thinking now would be a good time to port it to working with webhooks as well.

The tool would have blocked the aws credentials from being checked in: https://github.com/opnfv/releng-anteater/blob/master/master_...


It's not foolproof, but this tool needs to be more widely known - it would've saved me on countless occasions.


Dumb question: What's the best practice to share authentication credentials across the team for services that don't have an IAM feature?


I've never used it in production (my last shop was heavily AWS-based and relied on IAM), but I've always liked the look of HashiCorp's Vault [0]

https://www.vaultproject.io/
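
Very roughly, using it from Python with the hvac client looks something like this (a sketch, assuming a KV v2 secrets engine at the default mount and the address/token provided via environment variables; the path and key names are made up):

    import os
    import hvac

    # Assumes VAULT_ADDR and VAULT_TOKEN are injected at runtime, not checked in.
    client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

    # Write a shared credential once...
    client.secrets.kv.v2.create_or_update_secret(
        path="team/payments-api", secret={"api_key": "not-a-real-key"})

    # ...and read it back wherever it's needed.
    resp = client.secrets.kv.v2.read_secret_version(path="team/payments-api")
    print(resp["data"]["data"]["api_key"])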


When it comes to security, there are no dumb questions.


There are a few SaaS offerings that will let you do that. LastPass and 1Password are two commonly used ones.

Or you can use something like KeePass to store a database in a shared location if you don't trust the SaaS offerings.

If you are looking at storing credentials for automation purposes and don't have a secret store built in, you could look at something like HashiCorp Vault to provide this for you.


LastPass has a terrible track record in security, which was nicely edited out of Wikipedia by a fresh user: https://en.wikipedia.org/w/index.php?title=LastPass&action=h...

The user in question has a specific interest in editing pages about LogMeIn, the parent of LastPass: https://en.wikipedia.org/w/index.php?limit=50&title=Special%...


I think that something like Stack's Blackbox is the best idea. This Ansible-based setup also explains the concepts pretty well: http://ansiblecookbook.com/html/en.html#how-do-i-store-priva...


In person I use a thumb drive. You could encrypt the credentials using PGP and send them to a coworker if they are remote.

Sometimes I just go on google hangouts and share my screen if I'm feeling lazy.


We're using a KeePass / MacPass password-protected vault shared with the team using Dropbox. It's really good and essentially free to use if you use a free Dropbox account.


Then make sure you use 2FA on the Dropbox account. And you should use a key + password to unlock KeePass.


KeePass and a Keybase team repo to sync.


We launched EnvKey[1] a couple months ago to offer an easy-to-integrate solution to this issue.

1 - https://www.envkey.com


We use 1password for teams.


Just piggy-backing on your comment. If you did check something in, here's a guide from GitHub on how to remove it: https://help.github.com/articles/removing-sensitive-data-fro...


The key part is "Warning: Once you have pushed a commit to GitHub, you should consider any data it contains to be compromised. If you committed a password, change it! If you committed a key, generate a new one."

Removing the secrets from the repository is nice to have, but not that necessary - what is mandatory is to ensure that the compromised secrets are no longer useful, since they aren't secret any more and won't be ever again.


I am rather disappointed in github for publishing this guide. The portion at the top stating

> Warning: Once you have pushed a commit to GitHub, you should consider any data it contains to be compromised. If you committed a password, change it! If you committed a key, generate a new one.

Is a good argument as to why you shouldn't let users erase this data from history: it's already out there, so no matter how painful or convoluted your process for regenerating auth credentials is, you need to do it if you've published them into your SCM. If the process is painful you might want to simplify it, because you'll probably need to do it again sometime in the future... yes, even you large corporate workers who have no control over credential regeneration; an arduous process leads to credential sharing between projects, which is another horrible thing.


They are doing the right thing by letting the users control their own data, and at most they can make it more complicated to do but not impossible.

There are cases - such as complying with court orders - where removing the data is appropriate (even if a bit futile in the long run).


There is sensitive data that isn't a password, and can't be changed.


"Don't check secrets into VCS, folks! "

I suppose? But at this point they have your code base. You are so owned at that point.


Yeah, but hopefully they can't do much if they just have your code base. If the secrecy of your code is the only thing stopping hackers from exploiting you, you have some gaping holes in your infrastructure. With that said, there's nothing wrong with using secrecy as an additional barrier, but it shouldn't be the only one, and if it's not the only one, you're not "so owned at that point".


“Just” leaking full source could be enough to destroy a lot of IP-based companies. A lot of companies stay wealthy because their IP is so huge that nobody can afford to develop competitive alternatives anymore (Adobe, Microsoft Office, Salesforce etc). Some of them have actual “secret sauce” that they cannot afford to share (suggestion engines, biotech processes etc). Even a service like Github, which relies on others entrusting their work to them, would take a humongous reputation hit from a leak like that.


> Adobe, Microsoft Office, Salesforce

I don't think any of those companies would cease to exist if their code bases leaked online today. Sure, someone might get something to build, but there is surely A LOT of things around the code bases to support all of this, which means the code bases would mostly serve as a study for software in general (and for finding holes, obviously).

Github is a bit of an unfair comparison, as their business is literally to keep your code private, so if it leaks then of course it would be a hard hit. For the general company, I think leaking access credentials is a much bigger (but easier to fix) problem than leaking the source code itself.


> I don't think either of those companies would cease to exist if their code bases leaked online today.

A serious Photoshop clone that can match PS feature for feature would wipe out Adobe; people cannot wait to get rid of them. 25% of MS revenues come directly from Office and another 25% from Windows or other commercial offerings that are basically driven by Office, so yeah, MS would survive a working Office clone, but they would be deeply wounded; they pulled all the dirty tricks in the book to keep competitors from integrating seamlessly... having the real code responsible for their formats available in the open would hurt them massively.

These companies are as big as they are because they made the right moves at the right time, and now they have spent so many man-decades on their codebases that nobody can realistically hope to catch up starting from scratch; but having a good look at their codebases would likely kickstart oodles of competitors with very good chances of replacing them in a very short time.

> For the general company, I think leaking access credentials is a much bigger (but easier to fix) problem than leaking the source code itself.

Credentials are a means to an end: protecting something. If you are Ashley Madison, your valuable IP is your database of users and their preferences; but if you are Microsoft or Adobe, what credentials are protecting is your source code. Adobe survived their user credentials being leaked, like so many other companies. It would have hurt much more had the entire PS codebase leaked.


But a competing company can't just give a copy of the leaked source code to their developers and tell them to go to town. Even by employing clean room design, you can't get around all the patents that likely protect many of the features that Photoshop users consider crucial.


> you can't get around all the patents

Just open a shop in China and obfuscate a bit. Job done.


"If the secrecy of your code is the only thing stopping hackers from exploiting you"

I hate these types of arguments. Yeah no one said that ever.

Losing your code base is terrible. I view it as losing a journal: what your company tries, tests you run, funny comments, or funny mistakes. I mean, they could post it on the net, blackmail team members, impersonate team members, forage for leaks, sell it, push to prod from compromised accounts or CI systems -- seems bad to me. Sure, don't have AWS keys in there.


Glad to be talking with you too! :) I didn't mean to imply you said something you didn't, only that I would consider access keys to various services to be of much more importance than the code base itself. I read your comment as "Doesn't matter about the access keys, if they have your source code, you're screwed no matter what", which in that case would seem a bit strong.

Also "pushes to prod from compromised accounts, CI systems" seems more related to access keys and account security rather than the actual code base.

But hey, in the end I'm no security expert so what do I know.


If they have access to the code inside Github, would they have been able to push their own changes to the code without anyone noticing?

Maybe pushing something that was labeled as a "security patch" but was actually a disguised vulnerability? I could see not even checking into that, and just downloading it. But I'm on a small team. Do big companies have procedures to protect against this?


Depends on how they get access. If they got control of one of the user accounts with push access, they could surely push code (but unsure about "without anyone noticing", depends on their own development processes I guess). However, if they got access to the code by reading some part of the memory/storage holding the code, without actually gaining access through authentication, they wouldn't be able to change it.


Really surprising to see that sensitive credentials were checked in to VCS. Apart from peer code review, how can a company avoid developers checking in sensitive data to VCS?


You could have a git hook (even remote) that would check for pre-configured patterns and reject the push if it contains them.

A quick Google search yielded this: https://github.com/awslabs/git-secrets
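
Even without a dedicated tool, a bare-bones hook is only a few lines. Something like this as .git/hooks/pre-commit (just a sketch; the patterns are illustrative, not exhaustive):

    #!/usr/bin/env python
    # Minimal pre-commit hook sketch: reject the commit if the staged diff
    # contains anything that looks like an AWS access key or a private key.
    import re
    import subprocess
    import sys

    PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key ID
        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),   # PEM private keys
    ]

    staged = subprocess.check_output(["git", "diff", "--cached"]).decode("utf-8", "replace")
    for pattern in PATTERNS:
        if pattern.search(staged):
            sys.exit("Refusing to commit: staged changes match %s" % pattern.pattern)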



I really wish AWS would stop enabling master API keys by default. As soon as you create an AWS account you are given API keys which basically have SUDO permissions to your entire account. That is super dangerous and is probably the same key set that these hackers got ahold of. AWS needs to disable these full access API keys by default and instead should encourage users to generate keys for specific access to limit what they can do.
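
At a minimum, the keys an app gets should belong to an IAM user/role whose policy is scoped to exactly what it needs. A rough sketch with boto3 (the bucket and policy name are made up for illustration):

    import json
    import boto3

    # Hypothetical policy: read-only access to a single bucket, nothing else.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-app-bucket",
                         "arn:aws:s3:::example-app-bucket/*"],
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(PolicyName="example-app-read-only",
                      PolicyDocument=json.dumps(policy))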


Totally agree. I am moving secrets onto consul/vault - would like to hear what others use for the same.


git-secrets is a pre-commit hook that regexp's out secrets and blocks commits

https://github.com/awslabs/git-secrets


Things like this make me feel much less concerned about the confidence gap.


I'm not even mad, that's a good bug bounty.


From what I hear it's pretty common...


It's very common, but there are lots of ways of addressing it.


But you have to put them somewhere; how is, idk, AWS credential management secured?


Store credential information where it is used. It is not used by the repository, so it is an improper location for it.

If someone gains access to a system that uses the credentials, then there is, in principle, no difference between puppeteering that system and stealing its credentials.
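
The usual minimum is to keep the secret out of the repo entirely and inject it where the code runs, e.g. via the environment (a sketch; the variable name is arbitrary):

    import os

    # The deploy system / CI injects this at runtime; nothing is committed to the repo.
    db_password = os.environ["APP_DB_PASSWORD"]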


> Don't check secrets into VCS, folks!

Ok, how do you handle a bootstrap problem?




