twistedpair's comments

Should API access keys be stored in plain text, such that they can easily be recovered from backups or clones to data warehouses?

Best practice would be to store such keys in an encrypted state, so a breach via non-production data source access, or _even_ direct production database access, doesn't expose them.
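For illustration, a minimal sketch of encrypting keys at rest with Python's cryptography package (Fernet); the env var name is a placeholder, and in production the encryption key would live in a KMS/HSM rather than the environment:

    # Store only ciphertext in the primary DB, so backups and warehouse
    # clones never contain usable API keys. Needs `pip install cryptography`.
    import os
    from cryptography.fernet import Fernet

    fernet = Fernet(os.environ["DB_ENCRYPTION_KEY"])  # placeholder: 32-byte urlsafe-base64 key

    def encrypt_api_key(plaintext: str) -> bytes:
        """Safe to persist; a leaked backup yields only ciphertext."""
        return fernet.encrypt(plaintext.encode())

    def decrypt_api_key(ciphertext: bytes) -> str:
        """Only services holding the encryption key can recover the API key."""
        return fernet.decrypt(ciphertext).decode()

(And if the keys only need to be verified rather than replayed, hashing them like passwords is even better.)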


Better than when Planet's DC actually exploded [1].

Restoration is hard when health and safety are in question. Good luck to these ops folks <3

[1] https://www.datacenterknowledge.com/archives/2008/06/01/expl...


"recession" talk provides cloud cover for many a company to pare back and prune products, processes, and people in a way that would normally cause more out cry. It's not that such changes aren't needed, but they can pile up until a forcing function triggers a hard look and it's acceptable to do them.


I prefer LastPass' feature set over 1Password's, having trialed them both, but I certainly don't prefer LastPass' opsec.


A good reason to give engineers PGP keys and turn on the "required code signing" feature on your org. Alas, security and productivity are perpetually at odds.


Just a friendly reminder that both GH and GL now support using SSH keys for signing commits, and 1Password (and KeePassXC, FWIW) will safely store those SSH creds off-disk:

https://docs.github.com/en/authentication/managing-commit-si...

https://docs.gitlab.com/ee/user/project/repository/ssh_signe...

https://developer.1password.com/docs/ssh/agent/

https://keepassxc.org/docs/KeePassXC_UserGuide.html#_ssh_age...

Although in full transparency, I still use GPG for my needs, since I better understand its workflow.
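For anyone trying the SSH route, the git-side setup is just three config keys (git 2.34+); a rough sketch, wrapped in Python here for repeatability, with the key path as a placeholder:

    # Point git at an SSH key for commit signing instead of GPG.
    # gpg.format, user.signingkey, and commit.gpgsign are real config keys;
    # the path below is a placeholder for wherever your key manager exposes it.
    import subprocess

    def git_config(key: str, value: str) -> None:
        subprocess.run(["git", "config", "--global", key, value], check=True)

    git_config("gpg.format", "ssh")                         # sign with SSH, not GPG
    git_config("user.signingkey", "~/.ssh/id_ed25519.pub")  # placeholder public key
    git_config("commit.gpgsign", "true")                    # sign every commit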


Note that when anyone uses the GitHub UI to merge your PR, your GPG signature is discarded and GitHub signs the resulting commit itself with GitHub.com's own GPG key.

All those "verified" buttons you see on a typical repo history tend to actually be for the GitHub.com signing key, which is shared by everyone. Your GPG signature is only used to convince GitHub to sign the final commit with its key.

It is possible to put your GPG signature on the merged commits, so that people can trust the commits came from you. That may be especially appropriate for security software. But you have to do the merges (or rebases as you prefer) outside GitHub for that, and push those merges directly to the main branch. That's what I do when I can, but it's not common practice. Many orgs require all merges to be done via GitHub, so end up with GitHub.com's shared signature on everything instead of their own.
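You can check locally whose key actually landed on each commit; a rough sketch using git log's %G? (signature status) and %GK (signing key) format specifiers, run from inside any clone:

    # Merges done through the GitHub UI will show GitHub's web-flow key
    # here, not the author's. %G?: G=good, B=bad, N=no signature.
    import subprocess

    log = subprocess.run(
        ["git", "log", "-10", "--format=%h %G? %GK %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(log)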


Just wondering - would this meaningfully impact productivity beyond causing engineers to have to learn how to sign a commit (which would presumably take less than an hour, once)?


Actually generating a key and signing commits is pretty easy. I think the harder part would be ensuring all devs safely store the keys, rotate them regularly, etc.


I'm excited for fine-grained PATs (FGPATs), but they're still in beta and have a lot of shortcomings at this time.


IAM on GitHub needs so much <3. So broad, much ow.

For example, I trialed a major security vendor's enterprise product. They required that their app be granted Admin on the GitHub org. All they needed to do was create issues and PRs and read source code for analysis. There are scopes for exactly that.

I was eventually on a call with a principal engineer at this company, who kept saying they needed these permissions, and I kept showing him the API docs that proved that wasn't so. Eventually he said, "well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way." Sure, I'll give you the ability to change all my code, add/remove users, drop repos, etc., and trust that some day, when you're hacked, someone will not use those over-granted permissions maliciously?

Security is hard. Be careful what permissions you give your 3rd party GitHub integrations.
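If you want to see what you've actually granted, GitHub's REST API lists every app installed on an org along with its permission grants; a hedged sketch with requests, where ORG and the token env var are placeholders (the token needs org admin read access):

    # Dump the permission grants of every GitHub App installed on an org,
    # via GET /orgs/{org}/installations.
    import os
    import requests

    ORG = "your-org"  # placeholder
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/installations",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()
    for inst in resp.json()["installations"]:
        print(inst["app_slug"], inst["permissions"])  # e.g. {'issues': 'write', ...}

Anything showing broad write or admin grants it doesn't obviously need is worth a hard look.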


>>"well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way."

Devs have been doing that since the dawn of computing. Oh, your app needs to be able to write to a protected folder on Windows? Don't document which folder; just force the app to run as Admin.

Early Android apps asked for all the permissions, all the time, because of lazy devs.

Security is hard, and it gets in the way of what devs want to do, so they just find ways to bypass it.


Early Android had pretty coarse permissions IIRC. It wasn't quite "root or nothing" but somewhat closer to that than it is today.


"because of lazy devs" :thinking:


Definitely not because of the pesky product managers and sales teams who want a new feature to sell yesterday to boost their EOY bonus...


Ask them to sign a document accepting all liability in those situations. I think the conversation will quickly change.


"Sorry, we don't accept redlines or riders for accounts that are less than $750k ACV."


> I trialed major security vendor's enterprise product

> just give them to us anyway, because it's easier this way

Wow. The state of security is still sad in our profession, if even major security vendors don't adhere to basics like the principle of least privilege.


Heh. Reminds me of one of Symantec's "Enterprise" products.

Turned out, if you're logged into the central (on-prem) server, it has the ability to run commands as root/superuser on any of the connected clients (generally servers themselves).

The commands run this way are _not logged_ and don't show up in any system audit logging.

After we pointed this out as a security problem in itself, they released a new version that _apparently_ had this functionality removed (was in the release notes).

But digging into the new release, they'd just moved the functionality into different binaries and hoped no-one would notice. :(

The mind boggles at what some of these places will try.


"Required functionality..." They're just not telling you who the requirements come from.


It's not just security vendors, it's everyone.

You can't even set up popular software like Tailscale with a GitHub login without it requiring access to your organization's private repositories.

It's like mobile phone permissions in the old days where your calculator needs access to your contacts and location.

I thought technology companies had learned this lesson a decade ago. Apparently not.


My experience with security vendors is that a lot of them sell checkbox-compliance solutions that on paper help you be compliant, but in reality are extremely insecure malware.


It was only recently that PATs got the ability to be scoped per repo, and even that's still in beta.


But frustratingly, fine-grained scoping doesn't work for repos that are part of an Organisation! Like, what!


You need to choose from a drop-down that the token owner is the organization, not you. Then you can create a token for a repo in the org. Your org must opt in to the beta.


Definitely - also, it's pretty easy to lose track of it all. I started a tool to audit GitHub apps and misc permissions for an org. It's pretty basic atm, but hopefully more checks will be added - 3rd-party integrations and apps are high on the list: https://github.com/crashappsec/github-analyzer. Any issues or feature requests are welcome, and hopefully I'll expand it soon!


I had exactly this same experience with Vercel, and we backed out of using them for our major open source repo as a result.


Ugh. That doesn't raise my trust in their competence at protecting admin access credentials to GitHub. The same mindset leads to "we use just one shared SSH cert, because it is easier, and our VPN solution is a 2nd factor in any case".


Name and shame.


I'm still confused. Why did people have username/password logins to the AWS console? Either require SSO login, or require HW tokens to get in as an AWS user. Then it doesn't matter if someone finds the password file, it's useless.


From information floating around on Twitter, it looks like they had the password to the SSO account of an employee and then social-engineered their way into getting the employee to accept the push MFA prompt to add a new device.

At this point it appears that they found more credentials on the internal network and owned SSO, MFA, and AD, giving admin access to everything.


> found more credentials on the internal network ... giving admin access to everything

That's my hangup. The fact that admin/root-level accounts can be accessed with "credentials" alone, rather than only via SSO/MFA/YubiKey. Were these service accounts? What happened to least privilege?


It depends on the employee you target. If it is someone working on internal IT systems, chances are high that you gain pretty wide access after owning their SSO.

SSO can go down or get owned, so having break-glass credentials isn't unheard of. The last place I worked had them on paper in a safe at headquarters. The Twitter threads show that they were stored in a password manager, but the hacker was able to find credentials to access it, which could have been one of the responsibilities of the employee who was targeted.

If you have your password manager on SSO it will be even easier.


Ghostery browser on Android is pretty easy to use.


It had been excellent. Have you kept track of their ownership and business model since the mid-2010s?

- monetization strategy involves affiliate marketing and the sale of ad analytics data

- shows advertisements of its own to users


Running your whole company cloud off a credit card is OK pre-seed round, but then you should get a proper business account set up w/ ACH payments/invoicing.

Better still, after you grow a bit, use a GCP reseller (SADA and DoIT are great) or a usage commitment with your account manager for a nice discount. I've never heard of one of those accounts getting turned off in the middle of the night.

Sure, it sucks to hear of account lockouts like this, but productionizing your finances is part of the startup lifecycle.

