SendGrid employee’s account was compromised and used to access internal systems (sendgrid.com)
110 points by lxm on May 1, 2015 | 53 comments



> evidence suggests that the cyber criminal accessed servers that contained some of our customers’ recipient email lists/addresses and customer contact information. We have not found any forensic evidence that customer lists or customer contact information was stolen.

Which doesn't mean the customer list data wasn't stolen. The wording on this is too cute. Accessing data from privileged accounts doesn't necessarily leave evidence, depending on the type of system access or server logging involved.

I think SendGrid should clarify what could have happened here, worst case.


Thought exactly the same; they lost me as a customer over the wording of their released statement and email.

Hell, it wasn't instantly clear why you needed to reset your password, and the link that made it clear was way smaller than the link to do anything else. Pretty dirty pool.


The customer targeted has got to be Coinbase, right? There was an email sent around a few weeks ago, clearly spam/scam but seemingly sent by legitimate coinbase.com servers. I received something like this:

  subject:      colin, We've got a message for You
  mailed-by:    em.coinbase.com
  signed-by:    coinbase.com



Two Factor Authentication. Please, all companies that have any of my data, don't just give me the ability to use 2FA, please force all of your employees to use it on their email, GitHub repos, everything. It should be a requirement right alongside https.

My personal favorite service is Authy, now part of Twilio.


> Two Factor Authentication. Please

My understanding is that Sendgrid isn't too keen on dongles.


Maybe they should just publicly fire someone to make this all go away.


How will that solve anything? The damage is already done. No point someone losing their job over this.


It was a reference to this drama from a couple years back: http://arstechnica.com/tech-policy/2013/03/21/how-dongle-jok...


too soon :)


Been long enough that it took me a while to get it...


Yes please. One of the things that annoys me about these notices is that they all recommend that we turn on 2FA, when the fucking root cause was that THEY DIDN'T.



A service for 2FA?

I prefer a standalone app or device for that. Why would Authy demand that I give them a phone number and an email address if not for monetizing (as in selling to anyone) my personal information?


Secondary factor for authenticating your authy account? In addition to your own data key. 2FA for your 2FA.


Are you serious? This makes no sense. Does that mean Authy won't work if it has no internet connection? That would be an important limitation. I hear they do back up the seeds for 2FA. But why is that not optional?


It is optional... but by validating your phone number when you install the app on, for instance, your tablet, you can authenticate the device. Backup is optional, with a client-side passphrase for encryption. It's not like it's the only app for this; 2FA/OTP is well documented, with libraries in a number of languages. As far as invasiveness goes, given what it's for, it's a pretty decent application.
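
To give a sense of how thin the library layer is, here's a minimal sketch using the pyotp package (the package choice and the generated secret are just an illustration, not what Authy actually runs):

  import pyotp

  secret = pyotp.random_base32()     # shared secret, kept by both app and server
  totp = pyotp.TOTP(secret)
  code = totp.now()                  # current 6-digit code
  print(code, totp.verify(code))     # server-side check -> True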


This. It's not unreasonable to flat-out require 2FA.


2-factor auth might be good for SendGrid but depending on another device (be it your phone or a token-device) sucks big time in the real world.


Your data growing legs also sucks big time in the real world.


Exactly. If I'm trusting a company with a core component of my business/livelihood (code repositories, email marketing campaigns, etc.), phrases like "it's a minor inconvenience" are unacceptable excuses for lax security.


Enterprises that are serious about security have been requiring their employees to use security tokens for years. It's not an unreasonable burden to maintain control over account access.


Seriously? People can't be bothered to take the extra 45 seconds to open an Authenticator app and type in the current 6 digit code?


Seriously, because getting into the website is one thing; getting into anything that uses an API is something else altogether.

Google and Github both require the use of app specific passwords if you have 2FA on. Which means that accessing anything that uses the API is no longer a matter of username+password+token, it means username+password+log into site+generate huge and ridiculous password that is only shown once+save that into the app.

It's a pain in the ass. Massively.
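
To be concrete about where that generated password ends up, here's a hedged sketch of an SMTP client logging in with an app-specific password (the addresses and the password below are placeholders):

  import smtplib

  # The app-specific password simply replaces the account password
  # wherever the client authenticates; values here are placeholders.
  with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
      server.login("you@example.com", "abcdefghijklmnop")
      server.sendmail("you@example.com", ["friend@example.com"],
                      "Subject: test\r\n\r\nSent with an app password.")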


> It's a pain in the ass. Massively.

Compared to someone gaining access to your Gmail (and the rest of your Google platform services) account or Github access? Hardly.

I agree it's not perfect, but it's relatively painless for the benefit obtained.


This is an argument that can apply to any particularly annoying security arrangement. What you miss is that by making the process so arduous, people will not use it.

There's no reason I can't generate my own app passwords, nor is there a reason to prevent me from accessing my own passwords later.


I would just as soon let the system generate one... usually it's as random/long as anything I would create, and people in general create short/weak passwords... if the password requirements were 16+ characters, upper, lower, number and other characters, that would piss people off too. In this case, if you didn't copy/paste it into your app/service... generate a new password, and revoke the old one.

The only flaw here is when you aren't able to label your prior passwords and are only given the option to nuke them all. That ticks me off more than anything. I've got 2FA enabled most places that allow it now, and use Authy too. The only times it's been a real pita are when I've had to set up email on a new phone with a generated passphrase (I'm trying to stop using the "password"). Other than that, there's been one day where I forgot my phone at home and didn't go back to get it after I was in my car...


You should never use an external 2-factor app; use the Google app, since it implements a standard and works offline. You will want this eventually.


TOTP is fairly weak. Convenient, but weak, due to the shared secret. I've seen proofs of concept that trivially lift the secrets from Google Authenticator. On the server side, the service is probably storing your TOTP secret one column adjacent to your hashed password. When done poorly (and most are), TOTP is basically no additional factor. Also backup codes, app specific passwords, etc, etc.
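
For context, here is roughly everything TOTP is, sketched against RFC 6238 with only the Python standard library; note that the server has to keep the same raw secret to run the same computation, which is exactly the shared-secret problem:

  # Rough sketch of RFC 6238 TOTP. The server must hold this same raw
  # secret to verify codes -- unlike a password, it can't be stored as
  # a one-way hash, which is why it sits there waiting to be lifted.
  import base64, hashlib, hmac, struct, time

  def totp(secret_b32, period=30, digits=6):
      key = base64.b32decode(secret_b32, casefold=True)
      counter = struct.pack(">Q", int(time.time()) // period)
      digest = hmac.new(key, counter, hashlib.sha1).digest()
      offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
      code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % 10 ** digits).zfill(digits)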

Certain classes of non-shared-secret hardware token are waaaaaaaaay better. Just less convenient. You should look into DoD CAC, for example. Google Authenticator is nice but will never protect Secret information. I know this sounds way out of startup league, but it shouldn't; we should study what we can learn from such things rather than hand out blanket advice like yours.

Reiterating: none = bad, TOTP = better, strong tokens/biometric/etc = best.


The threat model TOTP addresses is someone having your password and being able to log in with just that. It's not to protect against people stealing the database, or against people having root access to your phone, or anything like that.

Which of the solutions you mention works even if an attacker has actual physical access to the two-factor device?


> When done poorly (and most are), TOTP is basically no additional factor

TOTP protects against the most common threats, and is trivial to implement compared to other solutions you refer to.

Most of your complaints about TOTP security don't make sense under its threat model. Yes, if your phone is rooted, the TOTP secrets can be stolen. The point is that it's unlikely that both your phone and your laptop/point of access device both get compromised.


> I've seen proofs of concept that trivially lift the secrets from Google Authenticator.

Android specific?


Also worth mentioning is FIDO U2F. More convenient and more secure than TOTP.


If the secrets are compromised, new secrets can be generated.


Absolutely, on the condition that you encrypt the device where you have installed Google Authenticator.

The reason people use Authy and the like is because they make it simple to recover if you lose access to your device (lost, stolen, broken). My workaround for this is to take a screen cap of the QR codes and encrypt them with gpg. You could also save them inside a password manager (especially for less technical people).


What was their password hashing algorithm? The lack of specificity isn't encouraging. The vague "hashed and salted" could mean anything from MD5 upwards.


I'd hope it would be PBKDF2. But then again, they could have just specified that. The lack of specificity makes me think otherwise.
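
For reference, "salted and iteratively hashed" done with a real KDF looks something like this (a sketch using the standard library's PBKDF2; the iteration count is only an illustrative figure, not anything SendGrid has disclosed):

  import hashlib, os

  # Illustrative parameters only -- SendGrid hasn't said what they use.
  salt = os.urandom(16)
  iterations = 100000
  digest = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                               salt, iterations)
  print(salt.hex(), iterations, digest.hex())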


Did anyone get an email about this from SendGrid? We have been searching and haven't found a "We got hacked, please reset your passwords".

I had to find out on HN; also, on Twitter they seem to be downplaying it too.


I got an email on the 28th and a reminder yesterday, on both of my account emails, to reset my password.


Same here.


No, but this was a visited link for me - I changed our SendGrid password a couple of days ago. I believe I saw this on /r/bitcoin, as exploiting Coinbase was the impetus for the hack.


I am a SendGrid customer and received no notification from them about this. I only found out about it via Hacker News. Very poor form.


I didn't get it from Sendgrid, but Microsoft sent it out to their Azure customers using Sendgrid on April 30th.


> CYBERCYBERCYBER

How about you just give me the details without resorting to propaganda?


Spread the meme: Saying "cyber" means "I have no idea what I'm doing or talking about".


Because that's what the government and DoD calls it?


> used to access several of our internal systems on three separate dates in February and March 2015.

> On April 8, the SendGrid account of a Bitcoin-related customer was compromised

If I gather this right: SendGrid was fully hacked for three months on end. At least, that is what they were able to recover from forensics; it may have been longer.

This sounds illogical:

> We have not found any forensic evidence that customer lists or customer contact information was stolen. However, as a precautionary measure, we are implementing a system-wide password reset.

How would a password reset help against information that was already stolen before the reset? The password reset is because the systems accessed contained password hashes. It may also be to upgrade the hashing mechanism to something more secure than "salted and iteratively hashed".

Sendgrid's privacy policy is cookie cutter, but it contains this:

> For example, our policy is that only those individuals who need your personally identifiable information to perform a specific job are granted access to that personally identifiable information.

Apparently the employee that was hacked needed access to the data of all his colleagues and all users.

> Upon discovery, we took immediate actions to block unauthorized access and deployed additional processes and controls to better protect our customers, our employees, and our platform.

Then the Privacy Policy again:

> We will use at least industry standard security measures on the Site to protect the loss, misuse and alteration of the information under our control. While there is no such thing as "perfect security" on the Internet, we will take all reasonable steps to insure the safety of your personal information.

So apparently there were still some reasonable steps left to take, which were forced by this hack, not by 'industry standard security measures'.

> Two-Factor Authentication: We encourage all of our customers to enable two-factor authentication, which can effectively prevent unauthorized logins.

I think you would do better to encourage (or force) your employees to enable this, so you can prevent unauthorized logins to superuser accounts.

From the Privacy Policy you'd expect they already did this:

> Likewise, all employees and contractors are kept up-to-date on our security and privacy practices.

Then there's the unspecified hashing mechanism. Should you worry about the chance of your account being compromised again?

> salts and iteratively hashes passwords

It would be a breath of fresh air if these companies would just say 'We use bcrypt' in their privacy policy.
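
Saying it, with the work factor spelled out, costs two lines (a sketch with the common bcrypt package; the cost factor of 12 is just an example, not anything from SendGrid's disclosure):

  import bcrypt

  # Work factor of 12 is an example; tune it to your hardware.
  hashed = bcrypt.hashpw(b"hunter2", bcrypt.gensalt(rounds=12))
  print(bcrypt.checkpw(b"hunter2", hashed))   # True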

> Our Ongoing Commitment to Security

Your three-month struggle with hackers. Also, three reasonable steps follow that could have been taken before this hack, as your Privacy Policy promised us.

> NOTE: We require passwords to be a minimum of 8 alpha-numeric characters. Make sure any new passwords you set conform to this requirement.

Before or after this hack? Why should the customer make sure his password conforms to this requirement? Is it even possible to set a shorter password?

> Security update: Please reset your SendGrid account passwords today. Beginning today, and in line with standard practice, we are requesting that all of our customers reset their passwords to all of their SendGrid account access points.

Why not force this? Asking nicely? Standard practice would be to force this upon next log-in and temporarily disable accounts that have not changed their password yet.


Time to decentralize our data, people!


Which means what in terms of using a service for transactional email?


"Our data" does not exclusively mean data we share with others. There are many use cases that don't require our data to live on a centralized system. It is my belief talking about this subject causes a good degree of cognitive dissonance in people who are intent on building systems which rely on centralized storage. The fact is, the only value in centralized storage of data that isn't entangled with other's data is that it brings 'value' to the company and its shareholders. It's not bringing long term value to the users, that's for damn certain.


My company, www.drh.net, provides a SaaS or on-premises Mail Transfer Agent, GreenArrow Engine.

You can run it in your network and tightly lock down the system. We can take care of all of the email deliverability setup and operations tasks, if you want.

One advantage of running licensed software on your network is that you don't pay per-message fees, so for higher volume it's really economical compared to a service like SendGrid.


Legitimate bulk email is not a thing you want to have decentralized - the anti-spam efforts of the past 20 years or so have made that all but impossible.


Not true. We have plenty of clients using their own decentralized email servers and getting great inbox delivery of legitimate bulk email.

To get good results on your own server, you need:

(a) To be sending email that people want to receive.

(b) To have the technology setup correctly. It's not hard; we do it all the time for our clients.

(c) It's greatly beneficial to be above ~20k messages/day on your own server. Not required, but being above that point lets the ISPs gather statistical data on your complaint and open rates, which allows reputation filters to kick in.



