> TechCrunch contacted TrueDialog about the exposure, which promptly pulled the database offline. Despite reaching out several times, TrueDialog’s chief executive John Wright would not acknowledge the breach nor return several requests for comment. Wright also did not answer any of our questions — including whether the company would inform customers of the security lapse and if he plans to inform regulators, such as state attorneys general, per state data breach notification laws.
Though it doesn't mention a timeline, this does seem like a way to pour gasoline onto a PR dumpster fire.
> But the data also contained sensitive text messages, such as two-factor codes and other security messages, which may have allowed anyone viewing the data to gain access to a person’s online accounts. Many of the messages we reviewed contained codes to access online medical services to obtain, and password reset and login codes for sites including Facebook and Google accounts.
> The data also contained usernames and passwords of TrueDialog’s customers, which if used could have been used to access and impersonate their accounts.
This is why 2FA tokens and reset links should have a short validity window, and why shallow information such as an account name, an address, or a mother's maiden name should not be used for sensitive purposes.
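To make the first point concrete, here's a minimal sketch of a time-limited reset token. The signing key and the `make_reset_token`/`check_reset_token` names are invented for illustration; a real service should use a vetted library rather than hand-rolling this:

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)   # hypothetical server-side signing key
TTL = 15 * 60             # reset links stop working after 15 minutes

def make_reset_token(user_id: str) -> str:
    """Sign the user id plus issue time; no server-side state needed."""
    issued = str(int(time.time()))
    sig = hmac.new(SECRET, f"{user_id}:{issued}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{issued}:{sig}"

def check_reset_token(token: str) -> bool:
    """Reject bad signatures and anything older than the TTL."""
    user_id, issued, sig = token.rsplit(":", 2)
    expected = hmac.new(SECRET, f"{user_id}:{issued}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() - int(issued) < TTL
```

Even if a token like this leaks in transit (or sits in a vendor's database, as here), it goes dead on its own once the TTL passes.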
It is considered a security best practice to use a 2FA app (like Authy) or a physical token instead of SMS. That way, when the token generator is lost, you at least notice within a reasonable timeframe.
For critical systems where 2FA is enabled, I also do a "simulated" device loss, where I go through all the steps at least once that I would have to do in case of a real device loss (fetching backup codes, revoking the token, resetting the password, adding a new token).
This way I do not have to constantly worry about losing critical items, because I'm prepared for the worst and I can be calm.
It also helps to have all your data encrypted at rest with BitLocker or VeraCrypt (hard to enforce this rule on yourself for pen drives, but oh well...).
Thanks! That (simulated device loss followed by all the steps you mentioned) is something I plan to do every now and then but so far I've never gotten around to actually doing it.
Encouraged by your post, I think I'll finally do it (some day, hopefully sooner rather than later).
Something like a minute or two is probably the upper bound. Look at TOTP timers for a sane default (often like 30 seconds), and add a bit more for SMS delivery delays. You can always try to send a new code when the old one expires.
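For reference, that 30-second default is the time step in standard TOTP (RFC 6238). A minimal, standard-library-only Python sketch; the secret below is the well-known documentation example, not anything real:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) applied to the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step   # the code rotates every `step` seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F           # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what a standard authenticator app shows
```

Verifiers typically accept a step of clock drift on either side, which is why a minute or two on top of that is a sensible upper bound once SMS delivery delay is added.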
But also: basically the entire security world has been recommending against SMS 2-factor for years, because it's so incredibly easy to steal access. Don't use SMS if at all possible, don't have it as a backup (because "a backup option" == "an option" == "you are only as strong as your weakest link"), etc. Avoid it entirely.
One time I did a "forgot password" reset on an old email account. Apparently young me thought it was a good idea to choose the 'pick your own question' thing and the question I chose was "What?"
...to this day I still don't remember what the answer was.
The dangers of being clever: I briefly used a LastPass account, used another setup for several years, then wanted to go back to that old LastPass account for something.
The LastPass password hint I'd set:
> Don't forget your LastPass password.
It wasn't some oblique hint. It was an FU from 20-something me to 30-something me, and I distinctly remembered sending it as soon as I saw it.
This article massively overhypes the breach. 2FA and password resets have a very short window of validity. The database contained historical messages and did not operate in real-time.
To your point: I can count on two hands the number of companies I've encountered that iterate HOTP on use rather than on issuance.
...which means there are bound to be a few stale but still active SMS codes lingering in there from people who attempted but did not complete authentication, e.g. because they entered the wrong number or didn't have access to the number they tried to use when signing in. The services at risk are any that allow users to authenticate with _just_ SMS HOTP and that don't expire unused codes. That number is unfortunately high enough for me to consider this comparable to a small credential breach.
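A sketch of the failure mode being described, assuming a hypothetical server (`SmsOtpServer` is invented for illustration) that iterates the HOTP counter only at issuance; the time-based expiry at the end is the mitigation the parent comment says many services skip:

```python
import hashlib
import hmac
import os
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte big-endian counter."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

class SmsOtpServer:
    def __init__(self) -> None:
        self.key = os.urandom(20)
        self.counter = 0
        self.issued_at = 0.0

    def issue(self) -> str:
        self.counter += 1            # iterating on issuance kills the *previous* code...
        self.issued_at = time.time()
        return hotp(self.key, self.counter)

    def verify(self, code: str) -> bool:
        # ...but the latest code stays valid until someone requests another one.
        # Without the expiry check below, an abandoned login attempt leaves a
        # live code sitting in the SMS log indefinitely.
        if time.time() - self.issued_at > 300:   # e.g. a 5-minute expiry
            return False
        return hmac.compare_digest(code, hotp(self.key, self.counter))
```

Drop that expiry check and you get exactly the stale-but-active codes described above.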
It's not just locked behind the X-Pack...if you choose a trial, it works. Then, when the trial expires, poof...it's wide open. Surely there's a better way to handle that.
Wow, is this true? Instead of disabling access or shutting down the DB server, they simply removed authentication and left the database wide open when the trial expired?
As horrible as it sounds, it's just business. Large companies have a checklist saying databases MUST be protected by authentication, so they always pay for the enterprise edition just to get authentication; the corollary is that the only way for a software vendor to make money is to put authentication behind a paywall.
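For anyone wanting to check their own cluster: an Elasticsearch node without authentication answers its root endpoint with cluster metadata to anyone who can reach the port, while a security-enabled node returns 401. A quick probe sketch (`check_es_auth` is just an illustrative name):

```python
import json
import urllib.error
import urllib.request

def check_es_auth(url: str = "http://localhost:9200") -> None:
    """A 200 with no credentials means the cluster is readable by anyone
    who can reach this port; security-enabled nodes answer 401 instead."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            info = json.load(resp)
            print("WIDE OPEN:", info.get("cluster_name"),
                  "version", info.get("version", {}).get("number"))
    except urllib.error.HTTPError as err:
        print("OK: HTTP", err.code, "- authentication appears to be enabled")
    except urllib.error.URLError as err:
        print("Unreachable:", err.reason)

check_es_auth()
```

This is presumably how exposed instances like TrueDialog's get found by routine scanning in the first place.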
I was at a conference, last May I think, where an Elastic employee presented on exactly this topic (insecure defaults in database systems). If I remember correctly, they have since moved security from a commercial offering to secure-by-default (I'm fairly sure that happened even before the talk).
Instead of bothering him and the rest of the audience I let it be (it was probably from before his time), but yes, I was also wondering why security was part of the commercial offering...
Anyway, the past is the past; there's no excuse for this in 2019 anymore (even if I will be hesitant about Elastic in the future).
Like with the last leak a few days ago, I repeat my point: I also blame ES for this, because it doesn't enforce authentication. ES people just keep blaming whoever set up those instances ("well, duh, it should not be facing the public internet"), but after seeing incident after incident, a reasonable dev should try to do their part to make the world a better place. Yes, you were not the one who set up the instance personally, but is it so hard to move a little bit in the direction of: "A lot of personal information belonging to uninvolved people keeps getting leaked because of defaults that careless admins never change. Maybe, to protect the innocent, we can make ES require some authentication method to be configured"?
Nope. No such thing, no empathy for the people affected by the leaks, all blame shifted, done.
IMO, mining SMS messages for data is by definition going too far in terms of intrusion into people's privacy.
On a related note, I recently came across a post on the machine learning subreddit[1] where the author claims to have a dataset of 33 million SMS messages in Mexican Spanish. I half suspect the OP added the "Mexican" qualifier to stop anyone from suspecting the dataset was collected in Spain (in which case GDPR would apply). It was likely collected by an Android app that surreptitiously harvested messages via the "Telephony.SMS_RECEIVED" intent, and the author half confirms it[2].
Regardless of the legality of doing so, reading people's private SMSs just reeks of privacy violations. iOS in this specific case does the right thing by not letting apps read incoming text messages (except for the limited case of reading single-factor SMS login codes[3], which was introduced in iOS 12).
> except for the limited case of reading single-factor SMS login codes
Is the app actually reading the code? I thought this was just a UI hint that made it easier for the user to select the code from the suggestion area of the keyboard.
Latin American Spanish and Castilian differ quite a bit (and there are pretty obvious differences within Latin America too), so I'm not sure this is necessarily a GDPR cop-out.