Hacker News
23andMe says user data stolen in credential stuffing attack (bleepingcomputer.com)
341 points by nickthegreek on Oct 6, 2023 | hide | past | favorite | 298 comments



Unless I'm reading this wrong all that happened was someone had an existing leaked database of emails/passwords and then tried them on 23andme, and if they worked they took the data they could get. Yes, 23andme has some pretty extensive and personal data, but this attack could be done on literally any website. The issue is people re-used passwords, and also did not have 2fa enabled.

So the database that is for sale is just a list of emails/passwords from other breaches that worked on 23andme, along with the data that 23andme had on those users. Not exactly a 23andme breach.


Even if this is the case, 23andMe should have done better here. Why are you letting people log into an account from a brand-new IP with no additional verification? You have their email, you could have at least done 2FA with that. And as other commenters mentioned, CAPTCHA would have also made this slower / more expensive. At my employer, we use both, and so it is not the case that this "could be done on literally any website."

For such a mature business (that is publicly-traded, no less!) it is shameful to allow credential stuffing on the scale of millions of accounts.


> Why are you letting people log into an account from a brand-new IP with no additional verification?

Is that really feasible today? With widespread use of phones and laptops, most people probably have at least a handful of different IP addresses they regularly use (home WiFi, work WiFi, cellular connection), and then they randomly connect from new IP addresses at libraries, coffee shops, on their commute, etc.

I think most “normal” apps and websites today allow any random IP to log in without jumping through extra hoops.

Only companies with big budgets (Apple, Google, etc) make regular users jump through extra hoops.

Banks and B2B services have users who need extra hoops as well.

But 23andMe? I would not expect them to take any extra steps.


23andme isn't just any small company. They process people's DNA! That's about as personal as information gets. And the stolen data included information about people's genetic ancestry. They should have very high-class security practices.


General question, but let’s say they get your genetic ancestry information. What could you do with that?


Researchers and scientists could do a lot with such data. A tyrannical government would find many uses in furtherance of its repressive tactics. Blackmailers can find high-profile targets where genetic linkages have been obscured by births out of wedlock, incest, etc… Strange question. Data are valuable; they're like most of our economy at this point.


But you already agree to let 23andme send your data to "research" partners. I can understand the blackmailer, but even that is a bit of a stretch. I just don't see what damage could be done, which is why I asked the question. If a tyrannical government wants DNA of its citizens, it could just force it. I doubt they would go buying it online with bitcoin.


> A tyrannical government would find many uses in furtherance of their repressive tactics.

Strange answer. What do you actually mean by this? "furtherance of their repressive tactics" can mean just about anything - which government are you talking about, and which tactics?


> Strange answer. What do you actually mean by this? "furtherance of their repressive tactics" can mean just about anything - which government are you talking about, and which tactics?

Any government with racist tendencies might make use of this data and decide someone has $GENE which is primarily seen in $ETHNIC_GROUP so should be treated as poorly as the government treats $ETHNIC_GROUP


Please don't make this normal; it's absolutely tiresome to get codes for every single task.


Or include a setting for users that used a unique password.

When, five to ten years ago, everyone started sending email confirmations ("is this really you??") upon logging in with the correct username and password on the first try, I always contacted support to ask if that could be turned off. I figured the only way they were going to know it's a pain is if people complain. I have yet to learn of the first site where this is actually a choice...

Come to think of it, why haven't I made a Thunderbird plugin yet that recognises these emails and either sends the code to the browser or autotypes it. The credentials are filled in automatically, why not also their stupid email? Does this exist already?
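The core of such a plugin would just be pattern-matching the code out of the message body. A minimal sketch of that part (the regex and trigger words are assumptions about what these emails typically look like, not an existing Thunderbird API):

```python
import re

# Heuristic: a 4-8 digit code appearing shortly after a trigger word
# like "code", "verification", "2FA", or "OTP" in the email body.
CODE_RE = re.compile(
    r"(?:code|verification|2fa|otp)\D{0,40}?(\d{4,8})",
    re.IGNORECASE | re.DOTALL,
)

def extract_code(body: str):
    """Return the first plausible one-time code in an email body, or None."""
    m = CODE_RE.search(body)
    return m.group(1) if m else None
```

The hard part in practice is the delivery step (getting the extracted code into the browser's form), not the extraction.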


I think most sites doing this use SMS codes, and they work really well on mobile. If they are sending an email, it's more likely to be a magic link with no password at all.


You don't use twitter, github, amazon, spotify, steam, discord, etc.? Maybe you can turn on SMS instead of email, but sending people emails for every login is the default for those.

The only ones requiring an SMS for me are organisations with a bank license, which are obviously a minority of all the services out there.

(Fwiw, I avoid all of the above besides Spotify, but a lot of code happens to be on github, audio books are invariably ~3x cheaper on amazon compared to buying from the publisher directly, many game developers insist that you let steam take a cut and don't let you buy it from them directly... that's how come I know these things all insist on sending emails.)


Every time I log into my Chase account, it thinks I'm logging in from a new computer. Every single time. Nope, I've had the same computer for 3 years.


Is your browser set to clear cookies when you close it?


Many sites, like Google and my banking sites, send me an email when a new IP / location is used for a login.

This alerts me if there is a sudden login without my knowledge, and it's one click to disable.

23&me could have definitely done that to alert on logins.

It is 100% on 23&me even though reused IDs/passwords were the vector.

Genetic data is by definition extremely personal.


It's exposed as "new IP" to the end user but it hides a lot of logic about ISP IP address pools for specific regions, behaviour of other devices, etc. For someone like Google, that's easy to pull off, as a lot of people use it, and people use it daily. But it's harder to get this technology for someone like 23andMe where people log in less often, and its product has low penetration of internet users.


Just do it all the time then? If it's infrequent, it's also not much of a hassle.

GOG and Steam do "email 2fa", and while it's annoying they do it anyway, as they are a "risky" target, IIUC.


> Many sites like Google including my banking sites send me an email when a new IP / location is used for login

All of whom I already mentioned in the comment you are responding to


2FA would've prevented those logins. I think sites should very much start mandating 2FA imho.


Drop a cookie in their browser and 2FA them if the cookie is not present. It's much less likely the attacker will have the users credentials AND cookies, so this raises the bar for the attacker without annoying the user too much.
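A minimal sketch of that scheme, assuming an HMAC-signed token stored in the "known device" cookie (the names and token format are illustrative, not any particular framework's API):

```python
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)  # assumption: a secret kept server-side

def issue_device_token(user_id: str) -> str:
    """Mint a signed token to drop as a long-lived 'known device' cookie."""
    nonce = secrets.token_hex(16)
    payload = f"{user_id}:{nonce}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def device_is_known(user_id: str, token: str) -> bool:
    """True if the cookie is a valid token for this user; otherwise step up to 2FA."""
    try:
        uid, nonce, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, f"{uid}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    return uid == user_id and hmac.compare_digest(sig, expected)
```

On login: if `device_is_known(...)` fails, send the 2FA challenge, and on success set a fresh token as a cookie. The attacker needs the cookie jar as well as the password.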


Yes, and people travel too, sometimes even outside national borders, a prospect which, judging by my experience of having to use VPNs to log into my payment apps, is somehow shocking to the product managers driving cybersec policies in these companies.

tl;dr: logging in from an ip address of a strange faraway country should not be its own security flag. /endrant


>and then they randomly connect from new up addresses like those from libraries, coffee shops, commute, etc

And most of the regularly used networks probably aren't using a static IP anyways.


I have to agree.

Why blame the users for a broken by design security model like password auth? Credential stuffing attacks are a known weakness. We cannot reasonably expect everybody to take precautions against them.

I've just become very irate at how people implement absolutely absurdly bad security and people just blame users when the inevitable happens. These attacks have happened for decades. It's not the fault of the users.


> Why are you letting people log into an account from a brand-new IP with no additional verification?

Because having to play a game of "Simon Says" every time I try to log into an account pisses off customers.

Humble Bundle, for example, lost several sales from me because you can't even buy a game for an e-mail address that has an account without logging into that account, which requires not just the password (stored in my password manager, which I may not have with me everywhere I have my credit card), but also logging into my e-mail and clicking a link.

The EU has decided to force banks and payment providers to implement this nonsense because companies like e.g. PayPal decided to rather eat the cost of non-prevented fraud than putting an extra barrier in front of users and losing the users to competitors (by forcing everyone to do it, they prevented companies from competing on this aspect of UX).


This story about genetic data and other sensitive health data being leaked doesn't really make the case for letting the market solve this particular problem without onerous regulations.

I suppose massively increasing the liability would solve the problem by doing a little of both.


Private companies should not be holding such data regardless, so the point becomes moot.


Because many people change IPs all the time between devices and such, and it's a user-hostile practice to ask for an email code on login.

Instead they could've monitored password leaks to see if those credentials got exposed.


You can scrape email/SMS for codes automatically and add them to the clipboard or autofill. And what does user hostile even mean? User hostile is losing all of a user's data because you were more concerned with customers liking how easy your service is to use than with ensuring your service didn't hurt them.

You can do better than email/SMS, especially SMS, but they're transitional technologies. I log in to way more things than most people, way more often. I don't use password authentication alone unless it's literally my only option.


> You can scrape email/sms for codes automatically

IF they arrive right away, which isn't guaranteed for either method. Also, do you seriously suggest every single user set up some kind of cross-platform scraping service (how would you scrape an SMS code into a computer's clipboard)?

"user hostile" means that you impose a cost on users without consent and in many cases without benefit

> I don't use password authentication alone unless it's literally my only option.

That's fine, but this isn't a conversation about you. I'm fine with a high-entropy auto-generated password for a huge bunch of services


Reading one-time codes from SMS is already built into Android and iOS, and codes from emails into iOS (with Mail). For that matter, there is no reason TOTP codes can't be autofilled along with your username/password. The tooling around this stuff keeps getting better and more widespread because it's getting more prevalent.

>How would you scrape an SMS code to a computer’s clipboard

https://support.apple.com/en-us/guide/safari/ibrwa4a6c6c6/ma...

There’s no technical reason this same idea can’t work with every OS.

>impose a cost on users without consent

We have 1.3 million people who had their personal information leaked by an anti-Semite. More people are impacted by the breach in privacy than just the people who reused their passwords. The level of security was not appropriate to the context. Forcing costs on users can be good when said users are handling sensitive PII.


> The tooling around this stuff keeps getting better

> There's no technical reason this same idea can't work with every OS.

And until it gets good and works on every OS, you have no argument.

> Forcing costs on users can be good when said users are handling sensitive PII.

No it can't. Why do you think you can impose your personal, oversensitive value judgements re. PII on every single user?


> Why are you letting people log into an account from a brand-new IP with no additional verification?

The opposite is the bigger WTF: why are they letting so many different people log in from the same IP at the same time? That's a red flag on every fraud detection system I've seen. Not to mention there would be many failed logins for different accounts, which is also a pretty strong warning.


Because some people don't get a static IP from their ISP and they don't want to go through e-mail verification every day. At this point, some sites require this workflow from me:

    1. Solve CAPTCHA for log-in form
    2. Log in with valid password
    3. Open E-Mail client, maybe even log into your e-mail with the same workflow if not done yet
    4. Verify the IP via E-Mail 
    5. Surf to website log-in form again
    6. Solve CAPTCHA for log-in form again
    7. Log in again with a valid password
    8. Verify with 2FA code
Thanks, I hate it. It feels like steps 1 to 7 could be skipped.


> Why are you letting people log into an account from a brand-new IP with no additional verification?

Because they knew the password! That's what passwords are for. Please don't try to make life any more difficult for your users than it has to be.


Forcing people to use an E-mail address as a user ID is so amateur-hour that I don't even know where to begin dismantling it. You don't see banks or brokerages doing this.

Why is it so dumb? Because the vast, vast majority of people have no idea how any of this shit works. So, when a company demands that you sign up with your E-mail address and enter a password, a great many people are going to think they have to use their E-mail password too. This makes every one of these sites a gatekeeper to its users' E-mail accounts. If their security practices suck and they're hacked, or a disgruntled employee steals their records, or whatever... now a ton of their users' E-mail accounts are open for mining.

The failure to think this obvious scenario through is appalling. It's also appalling to see companies like Apple perpetrating this stupid behavior, especially AFTER the fact. Apple IDs originally did not have to be E-mail addresses. And later on, they did not have to be FUNCTIONING E-mail addresses. Now they've regressed all the way and they have to be both. And so Apple, per its usual M.O., has had to tack on various extra measures since then to try to shore up security.

In case you couldn't tell, I absolutely detest this policy.


> Because the vast, vast majority of people have no idea how any of this shit works.

Then don't let them use it. We don't let people drive who don't know how to safely operate a car. We don't let people make food in commercial kitchens without training. We let users run free with no knowledge, then build systems to stop them from hurting themselves, it's absurd.


> Why are you letting people log into an account from a brand-new IP with no additional verification?

Loosening this requirement to new country / carrier would make life easier for users at small cost to security.


Because IP addresses change frequently. I’m much less likely to use websites that require me to wait for a code in my email each time I use them and I don’t think I’m in the minority. Email/SMS codes are a useless checkbox in the security audit that companies need to stop implementing.


A friend of mine lost access to an email account of theirs, even though they remembered the username and password, since the IP address changed and the recovery methods weren't accessible any more (old phone number).


That website sounds like a lot of fun to use for people who travel (and often have a new IP).


I don't think you are reading this correctly. People could access (most importantly) the full raw DNA profile. And many of those breached were people who had opted into a "Relatives" feature, even if their own account was secure.


23andme doesn't sequence your whole genome; it just genotypes a set of known variant sites.

https://customercare.23andme.com/hc/en-us/articles/227968028...


23andMe uses a SNP array, or SNP chip, to look at SNPs (single-nucleotide polymorphisms, or more generally, single nucleotides that vary within a population). Basically what it gives you is a diff against a reference genome. So yes, while not a full genome sequence you can still get a VCF file out of it, impute sites that are not on the chip, use it for genealogy analysis, look at someone's disease carrier status, genetic disease likelihood, etc.
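For the curious, the raw-data export is a simple tab-separated file of calls per rsid; a minimal parser sketch (the sample lines below are illustrative, not real customer data):

```python
# Columns in the export: rsid, chromosome, position, genotype;
# lines starting with '#' are comments.
def parse_raw_data(text: str) -> dict:
    calls = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        rsid, chrom, pos, genotype = line.split("\t")
        calls[rsid] = (chrom, int(pos), genotype)
    return calls

sample = (
    "# rsid\tchromosome\tposition\tgenotype\n"
    "rs4477212\t1\t82154\tAA\n"
    "rs3094315\t1\t752566\tAG\n"
)
```

Each genotype is the pair of observed alleles at that site, which is exactly the "diff against a reference genome" described above.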


Yeah, I mean this is about 1 million SNPs, right? It's very very personal indeed. I could make good guesses at the chance of you going to university; your height; your risk of depression; what you look like....


I don't see where the article suggests that the attackers were able to obtain the raw genotype data of anyone other than the compromised accounts.


The pics of the offer and pricing suggest uniform DNA info rather than names or passwords for some and raw DNA profiles for others. Also this from the article:

“ The compromised accounts had opted into the platform's 'DNA Relatives' feature, which allows users to find genetic relatives and connect with them.

The threat actor accessed a small number of 23andMe accounts and then scraped the data of their DNA Relative matches, which shows how opting into a feature can have unexpected privacy consequences.”

Edit: maybe you are right about the lack of genetic info, if this account is correct (unless the researcher didn’t pay full price, and only got the metadata): https://therecord.media/scraping-incident-genetic-testing-si...


I guess one question is: should

> did not have 2fa enabled

be allowed to coexist with

> pretty extensive and personal data


It's the user's data; it's not on 23andme to baby the user. If the user wants to trade ease of login for risk of getting hacked, that's not 23andme's fault.


> that's not 23andme's fault

Yes it is. It's their fault for giving the user a choice. Google requires (some) users to enable 2FA, why can't 23andme?

https://www.tomsguide.com/news/google-forcing-2fa-users


Because user aversion to 2FA is often rational. The expected cost of learning how to use 2FA plus risking losing access to your account and not being able to get it back through support is often higher than the cost of having your account compromised.


> user aversion to 2FA is often rational.

The account recovery process should be setup at the start of the 2FA setup - e.g., you get emailed a bunch of backup codes (easiest way imho).

The site should not be rolling its own 2FA app; it should use a standard OTP implementation (TOTP) and let the user bring their own authenticator app (most people default to Google Authenticator or Authy, but there are a couple of others that are common too).
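For reference, the standard OTP algorithm in question (TOTP, RFC 6238) is small enough to sketch with just the standard library; the defaults below (30-second step, 6 digits, HMAC-SHA1) are what common authenticator apps use:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    now = time.time() if for_time is None else for_time
    counter = int(now) // step
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Any site implementing this verbatim interoperates with every off-the-shelf authenticator app, which is the whole point of not rolling your own.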

Or, as an alternative, delegate the login to email and use a password-less login mechanism (effectively delegating the account security to the email's security). I argue this is actually more convenient, but some people (esp. young people?) have an aversion to email which i don't understand.


“I argue this is actually more convenient, but some people (esp. young people?) have an aversion to email which i don't understand.”

Uhaul does this, and it's maybe the only good thing I can say about Uhaul. I think the catch is that some people don't use email (or much of anything) on their mobile phones. Most will get SMS immediately wherever they are. Not everyone uses email that way.


Emailing backup codes doesn't sound like a good idea. You give the keys to the kingdom to email provider or anyone who would be able to access your mailbox.


If the email is busted open, then it would already have been possible to do a forgot-password recovery (which I presume uses email).

Therefore, backup codes are no less secure than that.


But that's only because companies like Google offer no human support for lost accounts. Somehow I wonder if, 100 years from now, personal data will be handled by something like a bank. If you lose your password, you call your personal data bank, which can get you back online or something like that.

Maybe that's the next big thing - local, personal companies that are your "online power of attorney" that have the right to reset your shit, make claims about your identity. I have no idea. But the current state of things is just a mess.


It won’t be your bank, it will be google. To some extent it already is.


It's really not man.

Maybe for some irrelevant social media site I can understand doing password-only auth because who cares, but this has your DNA on it. Even if the person who has all their personal information leak doesn't care, they fucked over their entire family. I guess that's not 23andMe's fault though because they were just satisfying a rational user aversion!

Not only that, but the aversion to using methods of logon other than passwords are less rooted in passwords being easy, and more in passwords being STANDARD. Passkeys for instance are faster to use than passwords. The ONLY thing that makes passwords "Easy" is peoples refusal to start using something better because of one-time switching costs and inertia.


Plastic cups and discarded napkins also have DNA on them, and yet most people are willing to leave those lying on the table in an airport food court. If an entire family gets "fucked over" by this leak, they're going to get...medically invasive spam?

Which is bad, obviously, but I think everyone is catastrophising it.


I really hope this is not the prevailing attitude in software security. Can someone from that field please chime in?


Google can do a lot that no one else can. Think of user conversion rates if you require that they install an app and set up some TOTP stuff some never heard about just to access your platform.


this attitude is why almost all online services are absolutely insufferable to use now and it gets worse every day


Google’s standpoint likely saved many from identity theft given how getting access to the average person’s Google account can compromise half the services they have or more if they’re using gmail.


if they’re hosting sensitive data, it isn’t “babying” the user to take some responsibility for the data your company exists on.

if they can't take responsibility for it, then they're too irresponsible to make money off it.

it would be entirely reasonable for them to say "we don't want anything to do with this data, we don't want to profit from it, we don't want to use it in any way, therefore we will not retain it at all."

babying the user by taking responsibility for the very data they profit from? unreal.


Check out the data in the screenshot. This is not sensitive data. Pretty useful data, though.


> The information that has been exposed from this incident includes full names, usernames, profile photos, sex, date of birth, genetic ancestry results, and geographical location.

i would absolutely argue that having my

1) genetic ancestry,

2) full name,

3) date of birth,

etc… is sensitive information.

even removing genetic information, if a company is too irresponsible to catch millions of users info being stolen, then they’re too irresponsible to have that data.

again, either it’s important to your business or it isn’t. if it isn’t important, then refuse to store it.


Birthdate/name is not sensitive data, nor is a public profile photo. Facebook will display these on a public profile. And you are not getting genetic data. This info is public on other DNA sites even if it's private on 23andme.


It's not always that clear cut, though; after all, wouldn't this argument apply to e.g. laws requiring seatbelts? One could argue that in this early-ish stage of electronic data, vendors that hold very sensitive data are being irresponsible. Not just about not requiring more secure authentication, but also for pushing less secure authentication like SMS-based authentication factors.


The car makes an annoying beep beep sound, but it doesn't force the user to use a seatbelt. The onus and responsibility is ultimately on the end user.


The inability of the car to safely enforce this is probably the main reason why this works this way. The responsibility is split, though: cars are required to be designed in ways that discourage or prohibit some unsafe behaviors entirely. Not too different from services requiring 2FA: doesn't mean the TOTP secret is necessarily stored safely.


A friend of mine rented a larger, newer Jeep SUV when in town. It would not go into gear unless seatbelts were buckled. It was awful - not a future I want to live in. I'd rather have the choice than have it made for me in the name of safety.

> Not too different from services requiring 2FA

That is another practice I find awful for the above reason.


Another underlying problem with that kind of gatekeeping are the dangerous scenarios that can happen when the Jeep decides erroneously not to start due to a sensor anomaly.

It's a virtue that a car continues to operate when all the warning signs and buzzers are going off; this means that the human in charge is left in ultimate control of the situation and there doesn't need to be any complex umbrella structuring of liability -- this allows a driver to safely drive away from a dangerous tidal wave/assault/lava flow/whatever even if their seat belt sensor is broken ; this is very important for numerous reasons.


There are cars from the 90s that put the seat belt on automatically when you close the door. It looks awful but it works.


You are assuming that all users consider this data to be especially sensitive as opposed to something that your body leaves about wherever you go.


This stance is reckless and negligent. Pragmatically, you can be found liable. Ethically, it's cut and dried.


... Do you really believe this? There's countless services out there that don't require 2fa by default. Honestly it's probably easier to list the ones that do.

If you think that means the company can be held liable, I'd honestly start leaking my information on the internet if I were you. You have millions of dollars of lawsuits to go win apparently.


I absolutely believe this. If you think your service need perform no due diligence to ensure it is correct, accurate, and safe, then you have no business providing it to the user, who has little or no knowledge of the domain you are selling to them. That is your job: to sell a sophisticated service to someone who would enjoy the benefits but cannot begin to do it for themselves.

If you don't think so, then I think you're beyond reprehensible, and so will the courts. There is no disclaimer that can protect you. Good gravy, this is the easy part.


Passkeys, baby!


Give me an implementation I can self-host, without Google, Apple, etc. having effective control (including claws in my relevant software supply chain) and with an easy user experience, where I can maintain secure backups (on my own infrastructure, thank you) and smooth transition to future devices, and ideally, if needed, securely export root keys (cause if I don't control them then someone else owns them), and maybe I'll be interested.

In the meantime plain old high-entropy passwords with a good manager gives me all those features and a simplicity that's hard to beat.

In my 30+ years of computing I've suffered more harm from failures of other companies than I have from any failure of my own diligence. The whole lesson learned is to reduce trust in them, and, maybe I'm wrong, but everything I've read about passkeys and the like seems to put me at the mercy of the companies developing and pushing their implementations down my throat. It will take a lot of trust before I give up my ability to copy/paste my credentials.


Keepassxc and bitwarden should both be getting support for passkeys soon. Bitwarden sometime in October (vaultwarden already has support for storing them); keepassxc has an open PR for it that's been tested and iterated on for a while, but I'm not 100% sure how close it is to landing.


Thanks! Last time I checked, the top hit said "Closed as Not Planned". Could you point me to some good details or any article on how it's being implemented (e.g. does it act as a third-party store or something to avoid being locked behind a TPM or whatnot)? Genuinely interested.


I'm not sure about any specifics beyond that both are getting support for them (for the keepass ecosystem I'm sure about other mobile clients, but I don't think the feature request to support passkeys has been acknowledged by the keepass2android dev sadly). Here's the keepassxc PR with some details about the implementation, and what should be done in future work on passkey support: https://github.com/keepassxreboot/keepassxc/pull/8825

Bitwarden has a few blogs if you search for bitwarden passkeys, but from skimming one it didn't seem to go into technical details (though I didn't watch the videos). I guess you could look through the PRs: https://github.com/bitwarden/clients/pulls?q=is%3Apr+passkey... but I don't really feel like doing that.


Try convincing all the anti-passkey folks first


You're not wrong that this wasn't a sophisticated attack. What's disappointing is that it worked well at scale.

> this attack could be done on literally any website. The issue is people re-used passwords, and also did not have 2fa enabled.

While possible to execute at scale on some websites, this type of attack tends to be quite loud on the receiving end once appropriate metrics are selected for monitoring and alerting.

> "We do not have any indication at this time that there has been a data security incident within our systems."

They should probably work on that, given that those systems were used to extract their customers' data, and that they only noticed when their customers' data was being sold.

Given how far behind they are on disclosure I'd guess they may have only found out from media inquiries.


Websites should mitigate credential stuffing by checking against known cracked passwords. All you have to do is download Troy Hunt's hashed password database, check it when someone logs in, and if the password is cracked, do your email password-reset flow. Or you can use their API.

It’s very simple, and I believe has been an accepted best practice since like 2017. This is 100% on 23andme. They are responsible.

1. https://haveibeenpwned.com/Passwords
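The API version uses k-anonymity: you send only the first five hex characters of the SHA-1 to the range endpoint (https://api.pwnedpasswords.com/range/&lt;prefix&gt;) and match the suffix locally, so the full hash never leaves your server. A sketch of the client-side half (the network call itself is omitted):

```python
import hashlib

def pwned_query(password: str):
    """Split the SHA-1 into the 5-char prefix sent to the range API
    and the suffix kept locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """range_response is the API's text body: lines of '<suffix>:<count>'.
    Returns how many times the password appears in known breaches."""
    for line in range_response.splitlines():
        sfx, _, count = line.partition(":")
        if sfx == suffix:
            return int(count)
    return 0
```

If `breach_count` comes back nonzero at login time, trigger the password-reset flow described above.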


This, and noticing that a bunch of accounts are suddenly being logged into en masse in a way that is obviously an attack. It cannot be hard to detect such an event if you cared to notice. So it's 100% negligence and 100% the result of putting profits over safety. A terrible management failure.


Shouldn't they have noticed that an ip or a set of ips were trying to log into a bunch of different accounts?


Depends on how they tried to get in. They could have used a large number of residential proxies to get around this.


If someone is trying to log in to "Account A" from fifty different places, that should be a red flag


They're not.

They have a large set of different emails + passwords, and a large set of IPs.

Each IP can check a single set of credentials, so you never get a single IP in a short timeframe with too many login attempts, and never trying to brute force a single account. If the attacker rented time on the botnet for a long enough period, they can fly under the radar for quite a while. 23andme sees lots of failed logins, but no real way to pin it down.
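One signal that does survive this kind of distribution is the site-wide failed-login ratio: a botnet can hide per-IP and per-account, but not the aggregate spike. A toy sliding-window monitor (thresholds and window size are illustrative):

```python
from collections import deque
import time

class FailureRateMonitor:
    """Alert when the share of failed logins across the whole site,
    within a sliding window, jumps well above baseline."""

    def __init__(self, window_s: int = 300, threshold: float = 0.5,
                 min_events: int = 100):
        self.window_s = window_s
        self.threshold = threshold
        self.min_events = min_events   # avoid alerting on tiny samples
        self.events = deque()          # (timestamp, success) pairs

    def record(self, success: bool, now=None) -> bool:
        """Record one login attempt; return True if we should alert."""
        now = time.time() if now is None else now
        self.events.append((now, success))
        # Drop events that fell out of the window.
        while self.events and self.events[0][0] < now - self.window_s:
            self.events.popleft()
        if len(self.events) < self.min_events:
            return False
        failures = sum(1 for _, ok in self.events if not ok)
        return failures / len(self.events) > self.threshold
```

It doesn't identify the attacker, but it tells you the stuffing campaign is happening while it is still running, which is exactly what 23andMe apparently lacked.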

reCAPTCHA would be the answer here. What's interesting/concerning is that it appears Google's reCAPTCHA (assuming 23andme was using it, and they should've been) was defeated.


Captcha still means you get to do the cred stuffing attack, just potentially more slowly, which still doesn't protect the user.

I think for sensitive data where you want to protect the user, it makes even more sense to just generate passwords for them. It’s even simpler than 2FA. Some online casinos do this.


If your attacker is stuck manually passing the captcha time after time, they're probably not going to bother.

The thing that worries me more is the possibility that newer AI tools are allowing attackers to beat reCAPTCHA with automation. If that's the case, a lot of folks are going to be caught with their pants down.

Edit: looks like it's more than a possibility[1].

[1] https://twitter.com/sw33tlie/status/1710409035030122731


The linked post isn’t reCAPTCHA, it’s just some random bad CAPTCHA that’s been easy to defeat with OCR for ages. The real fundamental flaw is that human time is cheap enough: see Amazon Mechanical Turk. Many bulk, human-powered CAPTCHA-solving services have existed for years.


And this is why you should never force people to use their E-mail address as a user ID.


If it was a known leaked database, they should have invalidated the passwords from the database before attackers exploited them.
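A minimal sketch of such a sweep, assuming PBKDF2 password storage (substitute bcrypt/argon2 as appropriate); the function names and data shapes here are hypothetical:

```python
import hashlib
import hmac

def verify(password, salt, stored_hash):
    """Check a candidate password against a stored PBKDF2-SHA256 hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def sweep_breach_dump(dump, accounts, force_reset):
    """For each (email, password) pair from a public dump, expire the
    matching account's password.

    `accounts` maps email -> (salt, stored_hash); `force_reset` is
    whatever kicks off the site's email reset flow.
    """
    for email, leaked_pw in dump:
        record = accounts.get(email)
        if record and verify(leaked_pw, *record):
            force_reset(email)
```

Note this works even though the site only stores hashes: the dump supplies the plaintext, so the site can re-hash it with each matching user's salt.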


While it's probably not a horrible idea to do something like this, I don't think any site, or at least not many, does this currently. It wasn't a 23andMe database that the attacker used; it was just some other random site or sites. So every time any website is hacked, should every other website invalidate those users' credentials too?


It is a lot of hassle, and the user isn't really protected because the invalidation relies on public releases of email/password combinations; there's obviously going to be plenty of private releases, which means it's actually just security theatre.

2FA, or passwordless logins, are the solution. Forcing the user to change their password (at the most inconvenient of times - right after they logged in, but before they're able to use the site) is annoying at best, and does nothing at worst.


How is it theater if it saves a lot of users, just not all of them?


Security theater is where you have the feeling of security, but you don't really have it in reality.

You cannot point to some users being 'saved' as evidence that this is an effective security measure, because if a password is leaked and the leak is never discovered, this measure doesn't prevent anything. Yet it imposes a cost that cannot be measured against its effectiveness.

Changing the whole process to 2FA is secure because there are provable guarantees for the costs imposed, and therefore you can make an objective decision on whether it is worth implementing.


But you do have security; this measure saves actual people in actual reality.

> You cannot claim that just because some users are 'saved' as evidence that this is an effective security measure

Why not? Saving people from insecurities is almost by definition a measure of effectiveness

> you can make an objective decision on whether it is worth implementing.

You can't since the value factors in your "provable guarantees" and costs involved are subjective and also depend on the users' characteristics


> this measure saves actual people in actual reality

No, it doesn't. The claim is that by removing publicly leaked passwords, the user is prevented from having their logins stolen. But you didn't know that password was going to be used for stealing - it's an assumption. You also don't know if private leaks are already being used, undetected.

It's the same type of claim as the TSA (Transportation Security Administration) saving people from terrorism.


> But you didnt know if that password was going to be used for stealing - it's an assumption

But you do know for a fact that these leaked passwords are used for stealing, so forcing a password change would prevent that, ergo, save some users from having their data stolen. Private leaks have no impact on this


> you do know for a fact that these leaked passwords are used for stealing

No - the passwords are revealed, but they might not be used for stealing. And stealing with passwords that are stolen but never revealed publicly will continue.

My point is that the site will force an update, but the user's quota of inconvenience is used up - therefore, a more effective measure such as 2FA will be seen as unnecessary by the user, and thus the user's security is lowered.

This is why the solution is not to spend the effort/cost on trying to detect password leaks. It is to mandate 2FA.
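For scale: the core of TOTP (the most common second factor, RFC 6238) is small enough to sketch with the Python standard library. A sketch only - no rate limiting, no replay protection, no secret provisioning:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Even if a stuffed password matches, the attacker still needs the per-account secret to produce the current code, which is exactly the property the thread is asking for.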


I hope they hash passwords and are therefore unable to do this.


> Unless I'm reading this wrong all that happened was someone had an existing leaked database of emails/passwords and then tried them on 23andme, and if they worked they took the data they could get.

So... basically exactly what the title says. 23AndMe says user data stolen in a credential stuffing attack.


Use a password manager with long, random passwords. Pick your own passwords and you're leaving your door unlocked.


Good luck trying to convince anyone who is not already using one. I've tried super hard to get my friends and family to use a password manager, but they brush it off as a joke. Even when they lose an account it doesn't seem to bother them; they just create a new one. It's a lost cause.


This has been my experience as well. You can even show them that they have been a part of breaches but not even that is motivating enough.


I don't think you have to tell that to people on HN, but regular people will not be able to use most password managers. Not even 1Password is really user-friendly and it's the most mainstream one.

The included one on macOS is hidden in some setting panel.

For non-technical people the best authentication method is probably their phone (Passkeys, or tokens sent to their email address).


> The included one on macOS is hidden in some setting panel.

As long as you stay in the Mac/iOS walled garden, you really don't need to access the Settings page/app. Safari and most apps will happily pull the user/pwd from the manager for you. I've used it for a few years now (after tiring of the mediocre UX of several other managers).


Most (all?) major browsers now have built-in password managers which are intuitive enough for regular people and provide sufficient security against these attacks.


And yet, passwords get guessed, stolen, re-used all the time. If you talk to regular people they still use pet names + a number because they want to be able to type it in everywhere.

It's not a solved problem, even if a rudimentary password manager is in most browsers.

Personally I don't know a single person outside of my tech bubble that uses passwords that you can't keep in your head, or write down on a piece of paper on their desk.


There's a simple trick to having a password that's easy to type, easy to remember, and is pretty darn secure: repetition. Just take your pet's name or whatever, type it several times, and then finish it off with a number or whatever. Should be resistant to typical dictionary and brute force attacks.


And you already identified the main problem with this strategy: "repetition".

As it's not possible to remember n passwords for n sites, if one of them gets hacked "darn secure" isn't so secure any more. The main point of password managers is that you don't have to remember your password and if it leaks out on one site, it doesn't matter as it's only used on that one site.


In this case, unfortunately, at least as it's being described publicly, your detailed information was at risk if someone you are (even distantly) related to failed to use a long, random, unique password.


I wonder if companies will seriously start to rethink "transitive permissions" or "network permissions". This is very similar to what bit Facebook in the ass years ago: I have permissions to see all the data of my friends, but in the past I could also click a button to let someone who requested see not just my own info, but also all the info from my friends.

From a "computer science" perspective this makes sense: if I say you can view all my data, I lose control with who else you share that data with. But from a "human" perspective, most people don't think that if I give you access that I'm essentially giving access to the rest of the world.

These types of network permissions make any company who holds them a prime target because it means bad guys only need to hack a few accounts to get exponentially more data.


> also all the info from my friends

They would only see the subset of what your friend shared, the set configured by its author as visible to friends of friends, right?


No, IIRC this predates that entire permission model. "Friends of friends" today means just what it sounds like - my friends and all of their friends can see the data I mark that way.

Back in the late 00s/early 10s, when "Facebook apps" were a bit of a craze (think FarmVille), you could give an app maker permission to view your personal data and all of the data that you could see about your friends. This is how Cambridge Analytica was able to build profiles of 87 million Facebook users when only a few hundred thousand actually installed the "your digital life" app: https://www.theguardian.com/news/2018/mar/17/cambridge-analy...


According to this tweet, the hackers likely got ALL of the data but only leaked a subset of 1.3 million records (only the Ashkenazi Jews):

https://x.com/mattjay/status/1710370423311888724?s=20


That would explain how they got so much data with a probably small number of compromised accounts. Ashkenazi Jewish people were relatively genetically isolated from the rest of Europe and started from a relatively small founder population. Genetically, then, they're all detected as distant cousins of each other using standard metrics.

This means a typical Ashkenazi Jewish customer of 23andMe has a lot more relatives than most other customers, and so they'd be able to view that many more profiles.


As a 23andMe user with a high percentage of Ashkenazi genes, I expected my data to be in the leak, but it wasn't - so it's probably not all of the Ashkenazi data.


How did you verify? I am ashkenazi


assuming your last name is the same as in your HN username you are not in the file either

(as to how I verified: the file is still available on the leak site)


OK, that CSV is just a subset of the data they claim to have.... so our data might still be out there even if it wasn't in this file.

These are the headers:

  profile_id; account_id; first_name; last_name; sex; birth_year; has_health; ydna; mdna; current_location; regions; subregions; population_ids
First entry is

  Elon;Musk;Male;1971;...


Glad I used 23andMe with a fake name and UPS PO Box


What did you use to pay for all of it?


Prepaid visa that I bought with cash at the supermarket. It’s a lot of work but it… worked.


There's a whole debate about whether Elon Musk has Jewish ancestry. Are you saying he is?


I have almost 6% myself, but to me it feels like, if I don't even know about who it might have been in my family tree, it doesn't really count.

But yes, for EM it said "Ashkenazi;Balkan;British Irish" - no percentages in there - and I believe 23andme has confirmed the authenticity of the data.


That tweet has zero evidence this is anything more than using reused passwords. Certainly not all data.


Oh god, of course it’s a Jewish thing

If I live a hundred years I will never understand people’s obsession with the Jews


Great, more neo-nazi terror. I love the 20's /s


This is precisely why, though I find it to be a fascinating idea, I have steadfastly refused to do one of these genetic tests.


Not only that, you have to make sure all of your extended family members don't take the test too.


That's impossible. All you can do is talk to them and tell them it's a bad idea. The data will exist from now until the end of the current human age. Everyone wants it; 23andMe may be a decent custodian, but through breaches and some eventual buyer of the company, it will end up in the hands of every government, corporation, and entity interested in getting DNA information.


I remember a Simpsons joke where Homer finds out the government has everyone's DNA on file and asks about it and they say "Yep everyone who's touched a penny since 1932" or something.

Turns out it didn't need to be that elaborate, you could just ask folks to mail it in ;)


> you could just ask folks to mail it in

(and pay for the privilege)


Same here, I wish there were some regulations in place to safeguard this data. It seems like the cat is out of the bag though:

https://www.pbs.org/newshour/amp/science/dna-ancestry-search...


Couldn't you just do it but with a fake name?


That is exactly what I did. And I got a temporary mailbox at the local UPS Store so they could not get my real address. And signup on their website was with a new email address. It’s a pain but it was worth it for me. Of course, I opted out of the DNA Relatives feature.


They can pretty easily connect it to you via family members, hopefully your family avoids it.


My sister didn't have an account when we sent the packages. When the data got analyzed, it showed her as "My Sister" with close to 100% accuracy. The packages were anonymous, so they couldn't have known which of us sent which.


Yeap, same here. Don't think I'll ever try any of them.


I've said it countless times now: until there is criminal liability for negligence with regard to data warehousing/safekeeping, this is going to keep happening. Because they do not care, and will not care (regardless of what their PR puts out), until their tush is on the line.

Every time there's a breach, they pull a British Petroleum "we're reallllly sorrrrrry" and buy a bunch of people LifeLock. It's absolute bullshit.


Why would 23andMe face any criminal liability? Per the article, they were never breached; only individual accounts with reused credentials exposed in other breaches. They should have had 2FA, but I don't think not having 2FA should be criminalized.


If a bank allowed people to log in to their bank account and make transfers based on only email+password and someone stole money from a bunch of accounts, would the bank face any criminal liability?

I don't know the answer, but I would say your DNA sequence should be secured similarly to your bank account.


I don’t know about criminal liability, but they’re certainly at fault for not implementing a check against known compromised passwords[1]. I believe it’s been an accepted best practice since something like 2017.

1. https://haveibeenpwned.com/Passwords


I visited the site in the screenshot and saw someone peddling NATO leaks from their Philippines visit, including "PLANCTON, CRONOS, CA SIRIUS, EMADS, MCDS, B1NT etc", and one more list from some Ukrainian citizens database from 2023.

please don't kill me CIA! I swear I accidentally saw it.

welp! time to head back to my work.


That's exactly what a spy would say. Get'em boys!


what site is it?


$75k. Tell me the government doesn't take privacy seriously without telling me the government doesn't take privacy seriously.

> Three weeks ago, genetic testing firm 1Health.io agreed to pay the Federal Trade Commission (FTC) a $75,000 fine to resolve allegations that it failed to secure sensitive genetic and health data, retroactively overhauled its privacy policy without notifying and obtaining consent from customers whose data it had obtained, and tricked customers about their ability to delete their data.


I already find the narcissistic "welcome to you" message on the package vomit-inducing. And then they only get fined $75k for this? I want them to go DOWN.


The FTC takes the its_not_about_the_money.jpg meme very seriously.


We’re not actually seeing the kinds of boogeymen people like to trot out when this kind of data is leaked. Nobody is conducting banned genetic research, nobody’s insurance rates are going up, nobody is getting ethnically cleansed as a result of this info…

A few stolen identities, some bank fraud, but largely the systems in place can handle it. It’s caught at the other end.

If you want big fines, prove big consequences.


A single stolen identity can cause years of emotional harm and turmoil. Someone’s life is often completely uprooted from this. That one person alone should receive significantly more than $75k


A single car accident can end a life, yet we drive cars. The value gained from technology like 23andMe vastly outweighs the cost of some occasional negligence or theoretical harm.

Besides, if you can find a specific person who was specifically harmed by this exact breach, I bet you could sue for damages, and get more than $75k.


If I significantly harm someone with my car, even unintentionally, I do in fact get sued successfully for far more than $75k


Which is my point; there's no "significantly harm" here at a large scale, and if there is one at an individual scale, that person can sue.

The $75k fine is exactly proportional to the complete lack of concrete harm done. Nobody gets fined for cars existing.


Capitalism being backwards as usual. If we really take privacy seriously we should fund them $75K to fix their privacy problems.

If you take away $75K from their engineering budget they will only do a worse job, and more data will leak.


I'm just going to do my monthly HN login to say, and possibly skirt ethics here because your comment truly deserves it, that this is the dumbest thing I've read on here in a long time. I can't tell if this comment is satire or being real.


That sounds like a good way to ensure monthly data leaks


What? No. If we really take privacy seriously, we might consider giving them a discount on their use of our genetic data once they have shown responsible care in handling that data -- similar to how no-claim bonuses work in insurance.


Wouldn't this incentivize insecure and sloppy practices so they can get the $75k? Wouldn't the effect be that everyone does as little as possible until they get paid?


That's a fairly unconventional approach. Not a subscriber to traditional incentives-drive-behaviours theories I guess?


Err, no. If you give them $75k then everyone else will be incentivised to leak data so they too can get a free $75k.


I wouldn't. If I leaked data due to an honest coding bug and someone gave me $75K with even a handshake agreement to put it towards fixing the problem, I would put 100% of that money towards fixing the problem. That's my moral standard: if I take money with even a verbal agreement to put it towards a certain purpose, I either honor that purpose or don't take the money.

If they took away $75K I might be forced to lay off someone, possibly one who could have fixed the problem.


Capitalism brings abundant choices. Many or most people don't care enough to protect themselves by choosing differently.


$75,000 is a lot less than hiring even one security expert. It's just the cost of doing business if you don't charge them some substantial percentage of their revenue for a year - say 20% to 50%. It has to sting, or there will be no change in their processes.


And if fined $75000 the first thing they would do is lay off that security expert.

Provide the security expert to them at no cost, taxpayer funded, as a collective effort to stop identity leaks.


While we're talking about the privacy implications of sending your DNA to be sequenced "for fun": 23andMe already works with law enforcement, and the data science is getting good enough that they can figure out your identity if one of your third cousins turned in a DNA sample, even if you never did.

The average person has around 200 3rd cousins.


Can you elaborate on how that works?

One of my 200 third cousins did a 23andMe swab. I then committed a crime, possibly on the other side of the country*.

Law enforcement collects DNA evidence. What now?

* edit: Previously this said "or world" but that felt unreasonable for the question


The list of potential culprits shrinks from perhaps tens or hundreds of thousands to about 200. Family trees are very easy to discover - check out your local Mormon genealogy center if you don't believe that. At that point they can apply standard gumshoe investigative techniques to quickly narrow it down to you.


Can law enforcement ping the 23andme database with any DNA they swab?


Yes, and it appears to be some kind of weird workaround where 23andMe doesn't explicitly work with them. This exact technique was used to crack some serial killer cold cases recently. https://www.cbc.ca/news/world/dna-from-genealogy-site-used-t... https://www.latimes.com/california/story/2020-12-08/man-in-t...


> What prosecutors did not disclose is that genetic material from the rape kit was first sent to FamilyTreeDNA, which created a DNA profile and allowed law enforcement to set up a fake account to search for matching customers. When that produced only distant leads, a civilian geneticist working with investigators uploaded the forensic profile to MyHeritage. It was the MyHeritage search that identified the close relative who helped break the case.

Basically they send in the genetic material and create an account on the site, then see which relatives it matches with and go from there.


There are some genealogical services out there that do Leeds-Collins charts that to my knowledge work by asking their users for their 23andMe credentials, much like some third party banking services that work by asking their users directly for their banking credentials. There's a lot of really bad security in this sector.


so sad given that there was a 23andMe OAuth2 API for some time that had SNPs as OAuth scopes!


gonna name and shame geneticaffairs.com here for this appalling practice:

https://geneticaffairs.com/faq.html

> Genetic Affairs is able to retrieve DNA matches for several DNA matching companies. To download DNA matches of these companies we need to store your login credentials. See the next section concerning the secure storage of these credentials.

> Since we have to use login information for 23andme and FamilyTreeDNA, we use an isolated database in which we encrypt and store the passwords of these websites. This database is only available in a private network in the cloud and not exposed to the Internet.


bizarre... I wonder why they don't just accept a raw data upload from 23andMe?


Maybe they could send me my dad's account password: he lost it years ago and no longer has access to the email address.


You should be able to lookup the email on https://haveibeenpwned.com/. If it's there, they might have your password.


It was only a matter of time. I feel sorry for all the people that were tricked into trusting a private company with their genetic information.


After reading the article, I think it's more that people were tricked into [reusing passwords across websites instead of having a password manager and randomized passwords].

It's not on 23andMe, or anyone (other than the user) for that matter, to ensure the passwords used by the user are not copied passwords from other credentials.

Seems to me like passwords need to be regulated on a governmental level, but that's a can of worms of an idea that I am not ready to defend.


I mostly agree with you, but 23andme could have prevented this by requiring 2FA for all accounts, or at least for accounts with an email in the HaveIBeenPwned database.

Credential stuffing is most preventable by the user (who can simply not reuse passwords), but platforms have a responsibility as well. They can at least mitigate it through rate limiting, and mostly stop it with 2FA requirements.

If an attacker is able to exfiltrate millions of records from a platform with credential stuffing, that means they tried to login to multiple millions of accounts. It shouldn't be difficult for a service to detect and stop such a sustained level of load on its login infrastructure. You can't get millions of proxies.
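The rate limiting mentioned above is often a per-key token bucket. A sketch with illustrative numbers, keyed on the account rather than the IP so that rotating IPs against one account doesn't help:

```python
import time

class TokenBucket:
    """Per-key limiter: e.g. 5 login attempts, refilling one every 60s.

    Keying on the account (not just the IP) means a botnet rotating
    through proxies against the same account still gets throttled.
    """
    def __init__(self, capacity=5, refill_per_sec=1 / 60):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.state = {}  # key -> (tokens, last_timestamp)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        tokens, last = self.state.get(key, (self.capacity, now))
        # Refill based on elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        if tokens < 1:
            self.state[key] = (tokens, now)
            return False
        self.state[key] = (tokens - 1, now)
        return True
```

This only blunts per-account brute force; the one-attempt-per-account pattern in this incident additionally needs the global monitoring and 2FA discussed elsewhere in the thread.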


None of this is surprising. I was a software engineer at 23andMe about 5 years ago. Their backend consisted of some of the worst Python/Django spaghetti code I've ever worked on. There was also no engineering culture whatsoever.


> but 23andme could have prevented this by requiring 2FA for all accounts

They could have prevented that by not keeping the data longer than needed to send it to the user.


Actually, if the attacker is determined enough, they can use enough IPs to make it really hard to detect and defend against.

I work on combating credential stuffing on a regular basis... it's quite challenging.


> It's not on 23andMe, or anyone (other than the user) for that matter, to ensure the passwords used by the user are not copied passwords from other credentials.

In my opinion, it is, actually, on 23andMe. At my tiny startup, I implemented a simple check against Troy Hunt’s compromised password database.[1] If I can do it, 23andMe can.

If anyone reading this is in the business of making web apps and there’s literally anything of value behind your login, prioritize this mitigation. OWASP recommends it too. [2]

1. https://haveibeenpwned.com/Passwords

2. https://cheatsheetseries.owasp.org/cheatsheets/Credential_St...


What I make my password should be entirely up to me, with the knowledge that if my data is stolen because I reused a password, it was entirely my fault.

I don't really think this needs to be regulated; government-standard guidelines are probably sufficient, with companies knowing that deviating will expose them more to litigation in the event of a problem.

Not trying to argue with you, I did read your last sentence, just tossing in another POV.


In this case, due to the "genetic relatives" feature, one user's choice to use poor security (e.g., "passw0rd") enabled the bad guys to get the data of other users who did use good security.

Sort of a "negative security externality"...


"Tricked into"? Who is tricked into reusing a password?


This is my personal take: People are sometimes tricked by natural human instinct into performing actions that give benefit to a simple human need ("it's just easier for me") at the detriment of higher-level outcomes (in this case, password security).

It's simply easier for me, as a human, to remember that my password for all websites is Hunter2, rather than spend the extra time, create a password manager account, store passwords, utilize best password management practices, etc. Not saying this is what I do, but for many people, this is how they remember their password(s).

Maybe I should have changed the "tricked into" to "trick themselves", but I'm just a human and this was easier for me.


Not just those people, but also people who never even heard of the company or visited the website, because their DNA and personal details have been shared with the site via relatives' DNA.

Source: https://www.linkedin.com/feed/update/urn:li:share:7116053429...


Veritasium has an interesting video explaining this, mentioning 23andMe on multiple occasions: https://www.youtube.com/watch?v=KT18KJouHWg


> I feel sorry for all the people that were tricked into trusting a private company with their genetic information.

And all their relatives (who share a lot of their DNA after all)


Much of healthcare is provided by private, for-profit companies. Even public, non-profit healthcare providers will use services (including genetic testing) from private, for-profit companies.


I have my genetic info on 23andMe and I could not care less.

Seriously — what are people going to do with it? It's illegal for insurance companies to discriminate. I'd post it myself on GitHub if anyone showed the slightest interest in using it.


> It's illegal for insurance companies to discriminate

For how long? Being Jewish on record in Germany in 1930 was fine; in 1940, not so much.

Data is forever; laws, regulations, governments, etc. aren't.



I will bet you $10,000 that it will still be illegal to tier health insurance on your genetics in 5 years, 10 years, or whatever timeframe you want.

If you want to take that bet, let me know and I will send you my contact info.


Did you also predict, 10 years ago, that so many states would ban abortion?

Get back to your crystal ball and tell me when the war in Ukraine will end and how much a BTC will be worth in 5 and 10 years.

If people in this forum believe Musk when he says fully autonomous vehicles will be here in two years (as he has said since 2012), and that the AI singularity is coming this decade, the possibility of genetic testing being used to define pre-existing conditions isn't so crazy.


Sounds like easy money then. Are you taking the bet?


What they are saying is that it is not easy money for anyone.


Insurance companies don't care so long as all their competitors have to play by the same rules. Insurance company decision-making is sometimes counterintuitive.

Genetic data will definitely be used to limit freedom of movement somewhere on Earth in the next 25 years. We’ve already been mass-swabbing for COVID for the past three years, so it won’t be that big a change.


Completely agree - and it was not only Jewish people on that list, right?


That's great, feel free to do that. Many others would rather keep their ancestry or genetic attributes out of the public eye. Maybe it's a source of pain for them, or is required for personal safety reasons. And just because something is illegal doesn't mean it won't happen. Legal "workarounds" are a thing. How watertight is the legislation and practice around preventing employers from taking a peek? Or having someone else take a peek for them?

A concrete example: what could the consequences in today's USA political climate be of having a massive database with the columns Firstname, Lastname, y_chromosome_present?


Almost 0, because even in the absence of public data (which is multitudinous) you can infer sex from first name with > 95% confidence.


They're talking about trans people.


> It's illegal for insurance companies to discriminate.

Only for health insurance. Other types of insurance companies are free to use that data to discriminate against you, including life, disability, and long-term care insurance.


The problem with this situation is A) precisely that you don't know what people are going to do with it, and B) that once it's out there it's impossible to undo that.

You're confident enough that nothing can be done, to the point that you'd take the risk for no upside. That... doesn't sound rational to me?


> A) precisely that you don't know what people are going to do with it

We know now, they are going to leak it.

Everyone is going to know that I'm an Ashkenazi Jew who is more likely going to have blue or brown eyes and hair loss.

Whooops.


And we now (hypothetically, in the context of leaking genetic info) know Meryem is Uyghur, James is trans, and Alex is related to a deeply divisive political figure.


> By day, the government pays me to publish their data.


Yes, my background is in publishing open government data, such as budgets, election results, environmental info, economic data, and government program accountability studies. Not personal information. And yes, I do have strong opinions on how data is to be responsibly managed, hence my participation in this conversation. So, I'm curious if you have additional thoughts on the matter.


I see where you're going with your point, but my gut feeling is none of that data is anything new.

We know where Meryem was born (govt records which you probably helped make public), we can read James' twitter feed and we know the relatives of political figures (except for all the illegitimate children).

So yea... I'm still not seeing the issue here.


If you'd like to make an argument that my work led to the publication of personal records, please make it more explicitly. I'd love to hear it fully articulated. Otherwise it's hard for me to read it as anything beyond a swipe.

We are assuming James posted such details to a public twitter feed. That does not account for the others who did not. The issue in Meryem's case is that obtaining birth records is not guaranteed (especially from a foreign country), and that birth location isn't the same as genetic ancestry. Regarding Alex:

> except for all the illegitimate children

That is part of my point. If my absent parent were actually some famous politician, I would personally not want to have that information leaked. Some might not care - that's great. My point is a simple one - just because having private medical info exfiltrated is not really a big deal for many people, doesn't mean that it's ok to give a pass to the parties responsible for the exfiltration.


Sorry, it isn't a swipe directly intended to be a personal attack of any sort. More that govt's, in general, have a habit of leaking personal information for their own benefit.

Your original comment was all about hypotheticals, so yes, it opens up the discussion to assumptions.

Agreed that, in general, it sucks that stuff leaks out. That said, every time someone brings up 23andme, it feels like one of those "the govt is going to shut down in a week if we don't do something!" type of headlines that seem to be on repeat... where in the end it turns out that at the last minute, something is done to prevent it, and everything turns into a giant nothing burger.


No worries. Just wanted to make sure I understood.

> More that govt's, in general, have a habit of leaking personal information for their own benefit.

I 100% agree with this.

Cheers!


Identity theft, especially with today's deepfakery.


> The information that has been exposed from this incident includes full names, usernames, profile photos, sex, date of birth, genetic ancestry results, and geographical location.

Is 23andMe going to actually be held responsible?

I think both our industry and our information infrastructure would be vastly better if companies were forced to be serious about security when they are collecting and holding private data.


As I understand it, only if they can prove some pretty intentional negligence. If some random dude sucks at his job and forgets to encrypt so-and-so, or change the default password on such-and-such, it's not really the CEO's fault. Maybe you could chase after the CTO if they had policies which directly led to this issue.

Again, just as I understand it, that's why nobody gets in trouble for this shit. It's not really fair to blame any one particular person. Whether or not that can be remedied by modifying the corporate system we operate in somehow, I don't know. Probably yes, but that's not my skillset at all.

edit: but in any case, this was due to people re-using passwords, so I doubt you could realistically blame the company.


> Maybe you could chase after the CTO if they had policies which directly lead to this issue.

> It's not really fair to blame any one particular person

These are literally the very circumstances that we were presented with as reason why executive compensation is astronomical. It's all that responsibility they have to assume in times like these, right? They're supposed to fall on their sword, and whoever replaces them is supposed to make damn sure shit like this doesn't happen on their watch. The pay and parachutes ensure they land on their feet.

The reason nothing ever changes is because these clowns never get in trouble. If you want that $10M salary, you better make sure everyone under you is doing their part to ensure events like this don't happen-- or you get dethroned.

Does China still sell melamine-tainted baby formula? We've been conditioned to just let our leaders stay in command after plowing into icebergs-- while they blame and execute the engineers shoveling coal below deck.


Nobody is getting paid that much because anyone is realistically expecting a CEO to watch every action taken by every employee. The reason CEOs get paid so much is because it was discovered that luring them to a company with big paychecks results in higher dividends, and we've swung in that direction on investments.

The solution isn't to randomly start blaming CEOs for things they had no realistic control over, it's to swing in the other direction of putting more money towards workers by taking it away from pure growth-oriented goals.


I feel like the available data is way too scarce to attribute positive results to any individual executive with any degree of statistical significance. Do you know if there’s been research done on this?

My perception is that it’s really really hard to differentiate between someone who’s genuinely a force to be reckoned with and someone who’s just in the right place at the right time. After their first success they can hop around between companies from executive role to executive role playing it safe and riding the gravy train just by not fucking it up. I’d be interested if anyone can provide examples of executives that consistently trigger inflections in a company’s performance within say 2 years of joining across multiple companies. I’m genuinely curious.


I frequently encounter software that doesn't let me reuse my password. It is not an excuse. The company should be held accountable that it allowed clients to reuse their passwords; allowing it is negligence.

Intent is not a sufficient legal standard to address this epidemic of negligence. We need Strict Liability for data protection.


You frequently encounter software that doesn't let you reuse a password that's been used anywhere else? Or just in the same system? I've not seen the former.


I remember software that will stop me from using stuff like "password". How hard is it to grab a copy of any of those password leaks and ban any password found in there?


It is not hard and every web service really should implement this sort of check. I’m actually pretty surprised to see so many comments here that aren’t aware of it!

See: https://haveibeenpwned.com/Passwords
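For anyone curious what that check looks like in practice, here's a minimal sketch against the Pwned Passwords range API. It uses the k-anonymity endpoint, so only the first five hex characters of the SHA-1 hash ever leave the machine. The injectable `fetch` parameter is just there so the parsing can be exercised without a network call.

```python
import hashlib
from urllib.request import urlopen

def is_pwned(password: str, fetch=None) -> bool:
    """True if the password appears in the Pwned Passwords corpus.

    Only the first 5 hex chars of the SHA-1 digest are sent over
    the wire; the full hash is never transmitted.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    if fetch is None:
        def fetch(p):
            with urlopen("https://api.pwnedpasswords.com/range/" + p) as r:
                return r.read().decode()
    # The response is one "<35-hex-char suffix>:<breach count>" pair per line.
    return any(line.split(":")[0] == suffix
               for line in fetch(prefix).splitlines())
```

Run this at signup and at password change, and you've banned every password in the public leak corpora for the cost of one HTTPS request.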


From the perspective of the users/customers/people the harm was done to, the company is the abstraction that they deal with. Companies can be hit with a billion/trillion dollar judgment. The injured parties don't care which executive at the company will get a smaller bonus this year -- they're not unfairly blaming.

Separate from that, if there's laws and regulations, the company could also be hit with fines. Officials could also investigate individual culpability for bad behavior by people within they company, but that possibility doesn't mean that any kind of holding companies responsible would be unfair.


Why would 23andMe be held responsible? The article doesn't indicate that they did anything wrong. You can't really blame them for users reusing the same password on multiple sites.


Probably not as I would guess it'll be just a wrist slap and nothing more.

This not only affects users, but the user's relatives/loved ones as well.


Is 23andme covered under HIPAA?


There needs to be a term to describe this "f*k you, I got mine, who cares about the future or future generations" behavior. The normalizing of DNA collection, facial recognition, being oblivious to climate issues, screwing up the housing market, etc...


Moral hazard perhaps.

Or externalities.

23andme kept the upside benefits - making money, but doesn't have to realize the downside risk - facilitating future conflict.

If anyone thinks that's facetious, I encourage them to read Erin Kissane's description of what went down in Myanmar - https://erinkissane.com/meta-in-myanmar-part-i-the-setup. And then think about how they did that without DNA data.


There is a rumor about bank employees during the 2007 crisis passing a note that said "IWBTYWBT", meaning "I won't be there, you won't be there"... just what you are referring to.


Like, a government? Mine is too busy squabbling about gender identity and whether Pelosi should have her office back or not.


Gonna take my downvotes on this, but in my circles, we just call this "being a Boomer."

Goes to your head when you're the largest generation during your formative years. Nobody has any choice but to do what you say.

So you take what you can get and screw over everyone else coming after you.

And then you belittle them for resenting you over it.


In some years the boomer gen will be gone. The behavior, I think, won't.


If I were operating a SaaS platform or any other online service, I'd be inclined to automatically reset passwords for users whose credentials have been compromised in a data breach. Has anyone here developed an automated system to handle this? I'm particularly curious about how one would automate the gathering of leaked databases and cross-reference user passwords against these lists, both at the point of signup and periodically thereafter. Seems like a compelling problem to solve.


I don’t know if there’s a service that does that, but I do know big tech companies do exactly this for their accounts (user accounts, not just employees). Additionally many password managers will warn you, including the built in ones in iOS and Chrome.
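As a sketch of what the lookup half of such a job could look like, here's a call against HIBP's v3 breachedaccount endpoint (which requires a paid API key and answers 404 for addresses with no known breaches). The `fetch` parameter is injectable purely so the parsing is testable offline; a real periodic sweep would loop over user emails and force a reset on any hit in a breach newer than the last sweep.

```python
import json
from urllib.error import HTTPError
from urllib.request import Request, urlopen

def breach_names(email: str, api_key: str, fetch=None) -> list:
    """Return the names of known breaches containing this address
    ([] if the address is clean), via HIBP's v3 API."""
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           f"{email}?truncateResponse=true")
    if fetch is None:
        def fetch(u):
            req = Request(u, headers={"hibp-api-key": api_key,
                                      "user-agent": "credential-sweeper"})
            try:
                with urlopen(req) as resp:
                    return resp.read().decode()
            except HTTPError as e:
                if e.code == 404:  # 404 means: not in any known breach
                    return "[]"
                raise
    return [b["Name"] for b in json.loads(fetch(url))]
```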


How do you exfil data on 1.3M users by guessing a few passwords?


Well, it seems they took advantage of a feature that indicates who you may be related to, so they must have guessed Genghis Khan's password.


MONGOLIAN71682@HOTMAIL.com and password "SHADOW_RAIDERZ123" that was easy.


Each valid login+password got the scrapers many many profiles.

23andMe has a feature that lets you see people you're related to and view their profiles. My guess is this feature had few rate limits and allowed you to view the profiles of people very distantly related. So perhaps with a couple thousand valid account logins you could eventually look up the profiles for 1.3M users.
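If that guess about missing rate limits is right, even a crude per-account throttle would have made the scrape far more expensive. A token-bucket sketch (all numbers hypothetical, and the state would live in Redis or similar in a real deployment, not process memory):

```python
import time

class TokenBucket:
    """Per-account limiter: roughly `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate=1.0, capacity=30, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self._state = {}  # account_id -> (tokens remaining, last seen time)

    def allow(self, account_id: str) -> bool:
        now = self.clock()
        tokens, last = self._state.get(account_id, (self.capacity, now))
        # Refill tokens for the time elapsed since the last request.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        self._state[account_id] = (tokens - 1 if allowed else tokens, now)
        return allowed
```

Gate every relative-profile view through `allow()` and a couple thousand compromised logins can no longer pull 1.3M profiles in a weekend.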


They're not guessing passwords; they have a list of e-mails and passwords from other leaks and are hoping the users are using the same credentials on all their accounts. Since password managers aren't mainstream yet, it works.


How do you hide authenticating 1.3+ million unique accounts? A distributed system? A mess of VPNs? Or they don't hide it because the auth system is not checking for 1.3 million auth attempts?


The latter. Forget tracking auth attempts:

> The researcher added that he discovered another issue where someone could enter a 23andme profile ID, like the ones included in the leaked data set, into their URL and see someone’s profile.


Ah, so they were able to use a few accounts, then fuzzed the URLs to victory...

Amazingly incompetent.


I recently had to explain to a tech lead that you can "never trust the client," because any dedicated party can just curl around your UI and send whatever HTTP request they want.
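The profile-ID-in-the-URL finding quoted above is the canonical example: an insecure direct object reference. The fix is an explicit server-side authorization check on every lookup, never trust in which links the UI happens to render. A sketch, with hypothetical field names:

```python
def get_profile(db: dict, requester_id: str, profile_id: str) -> dict:
    """Authorize a profile lookup on the server. Knowing (or fuzzing)
    a profile_id must never be sufficient on its own."""
    profile = db.get(profile_id)
    if profile is None:
        raise LookupError("no such profile")
    # Only the owner, or accounts the owner explicitly shared with,
    # may see the profile.
    if (requester_id == profile["owner"]
            or requester_id in profile.get("shared_with", ())):
        return profile
    raise PermissionError("not authorized to view this profile")
```

The point of the "never trust the client" rule is that this check must run inside the request handler; hiding the link in the UI does nothing against someone driving the API with curl.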


I remember when this first occurred to me. I didn't want to click to download a series of things on some website where that was the intended use, so I wrote a small shell script to curl them for me, and somewhere during the process of writing the script, I realized the true "power" of this. Ever since then, GET endpoints with search queries were protected against abuse in everything I wrote from that point forward. Luckily, that was in the late 90s, so it's been a minute.


> The initial data leak was limited, with the threat actor releasing 1 million lines of data for Ashkenazi people.

> The information that has been exposed from this incident includes full names, usernames, profile photos, sex, date of birth, genetic ancestry results, and geographical location.

Not good. Really not good.

This is why I never volunteer any PII to link with my DNA. Sampling DNA is not that hard (we leave traces literally everywhere) but credibly linking it to PII is another thing.


All that ethnicity data will be a goldmine for future genocidal dictatorships.


Yep. Protect your precious bodily fluids people.

In other news, this is surprisingly not very well known, but California takes, and saves, a DNA sample of any baby born in a CA hospital. There is no consent nor opt-out, not even notification. You can write a letter to have the sample destroyed, and sometime later you'll get confirmation of such. You can only hope that it wasn't sequenced and saved already and/or that it was properly destroyed.

https://www.cbsnews.com/news/california-biobank-dna-babies-w...

huh. "many states" do it, I had no idea. Apparently all 50 states are required to do a genetic screening but I guess above and beyond that some states save the sample.

I wonder what would happen if a parent or family member physically intervened to prevent taking the sample.


They actually will destroy it, if you have the paperwork. https://www.cdph.ca.gov/Programs/CFH/DGDS/Pages/nbs/nbsconse...

(Or you know, give you a certificate that says they destroyed it.)


Glad I never did any of these tests, and never will, good luck changing your DNA after the leaks!


Of course it could be massive credential stuffing. The problem is that an inside job would look completely identical. Or negligent security.

If you sell your DNA profile to someone, they are free to give or sell it to someone else. At best that's a breach of contract, but what are you going to do? A successful class action only changes the price retroactively.

(I am told some people sell their DNA data at a negative price, which I suspect may be the same people who pay to have a remotely controlled microphone at home. That I don't understand, and accept that I probably never will. But it doesn't change the underlying premise and market dynamic. The above is still true.)


A remote-controlled microphone is embedded in every smartphone. This doesn't prevent billions of people from buying and using them :)


I hope someone investigates whether 23andMe adjusted user profile preferences or exposed this data through new features instead of assuming users chose to share the information now available in this leak.


> The initial data leak was limited, with the threat actor releasing 1 million lines of data for Ashkenazi people.

When the evil kind of hackers disrupted a children's hospital, did someone ask how they could be as evil as that, then had the idea, "Hey, how about selling a list of Jewish people specifically?"

(They have to know that no one's going to buy that for marketing, but rather for targeting due to hatred and insanity.)


This is why, no matter how many interesting ads or TikToks I see, I will never do these genetic testing kits. I wouldn't be surprised if militaries are working on CRISPR-like infections that target your specific DNA when sprayed into the air.


A friend of mine who did this says they don't record (any?) personal information and instead give you an id number. Sounds dubious to me. Anyone have any information on how effective their anonymization attempts were?


Why keep calling the principal criminal entity a "threat actor", when in fact there is no 'act' here? They are openly selling stolen data, and it's been verified as stolen. Not deflecting the stupidity and criminality of 23andme... how stupid can a corporation working with people's biological data be... oh wait... yes, repeatedly... it's embedded in the business model apparently: operate for N+1 years, then provide a breach. Wish I could wake up from this recurring nightmare of groups with more money than they know what to do with, pretending to move humanity forward, making stupid mistakes. Nonetheless, please stop naming it a "threat actor". They are not 'acting' in any sense of the word.


> Not deflecting the stupidity and criminality of 23andme...how stupid can a corporation working with this sensitive type of data be

Really? If the article is accurate, 23andMe did not have a security breach; credentials leaked in other breaches were used to compromise accounts that reused those credentials on 23andMe. Now certainly 2FA would have been advisable, but I think it's a bit much to suggest this rises to the level of criminality.


Appreciate the note, and agree 'criminality' is a bit strong. But this is our biological data at stake. 2FA is a minimum now given the integrated credentials sharing across platforms today.


Sounds like dystopia is nudging closer. But who gives their DNA to a company?


People that want to know very, very useful information about themselves.


I very much doubt the usefulness of the information you get from 23andme. When it comes to clinical accuracy, I'd take it with a grain of salt; at best it might prompt someone to actually go to the doctor and get properly and accurately tested. But on the other hand, it can also reinforce a false sense of security when the tests come back clean. 'FDA approved' does not actually mean 'clinically useful'[1].

From the ancestry standpoint, I think it's only useful for a very limited set of individuals/scenarios. In my case, I'm 90% Eastern European which is... kind of basic considering I know where I was born. Yeah, I share a lot of my DNA with all the people who were born in roughly the same geographic area. Big fucking surprise?

I never used 23andme and not planning to (the example is from a close friend of mine who did). I think the privacy concerns far outweigh the benefits.

But I am interested in genetic testing so if anyone has any pointers for a privacy conscious, not-for-profit (maybe in academia?), non-Law Enforcement friendly entity and preferably does it anonymously, please let me know.

[1]: "Why You Should Be Careful About 23andMe’s Health Test" - https://archive.ph/lpaUU


The entire new world is very mixed. Ancestry tests are useful for people from North and South America, Southern Europe, India, Australia and New Zealand etc. those are a massive number of scenarios.

There’s a lot of health information that’s useful. Propensity to alcohol dependency, propensity for fat retention, propensity for diabetes and high blood pressure etc. You can also download your SNPs and use that information on other sites that give you a lot more, less verified information.


> Ancestry tests are useful for people from North and South America, Southern Europe, India, Australia and New Zealand

Not really. And they're inaccurate. Did they even publish their baselines?

> Propensity to alcohol dependency, propensity for fat retention, propensity for diabetes and high blood pressure

These should be between you and your GP. And I can tell you that you have a propensity for many addictions, just by assuming you're human. Don't test them. Don't rely on 23andme to validate "No propensity for gambling addiction? Casinos, here I come!"


Very very useful information like "Our arbitrary data model that isn't exactly based on science says you're like 4% eastern european"


Fodder for those who claim they're 1/17 Cherokee.


They were doing important Parkinson's research maybe 10 years ago. I sent in my dad's sample and mine. No idea what they got from my dad (it was a voluntary thing that didn't give you account access, just participating in their research).

I didn't have any of the known genetic markers for Parkinson's at the time and requested they destroy my data and sample.


there are a lot of really valid use cases for genetic testing but to address your point you (these customers, etc) really should be aware of the risk and privacy issues that may occur if that data was compromised.


> But who gives their DNA to a company?

It is amusing to me that this idea was being celebrated here just a few days ago [1].

[1] https://news.ycombinator.com/item?id=37744350


I'm sure the people celebrating it in that thread still stand by their opinion. I don't see how this changes anything.


I did and it was worth it even if my DNA is stolen. I've reconnected with many lost and unknown relatives.


Really? I think if I got an out of the blue email or other contact from someone saying "hey, 23andme says we're related" I would ignore it.


Unfortunately, a few of the people I share some of my DNA with.


have you heard of clayton bigsby?


Based Clayton Bigsby enjoyer.


People give their DNA to anyone who happens to be in their presence, or nearby, or come to a place you've been a few days later... You're shedding your DNA all the time.


That's a bit different than giving the DNA + cost of sequencing + full identifying info, all signed sealed and delivered, wrapped up in a bow with fully authorized permission to monetize for whatever purpose and share with whomever asks. If someone else is going to the trouble of finding my location, stealing a sample, getting it sequenced, and correlating the data, then I suppose they earned the database record the hard way, didn't they!


I feel this is a straw man. The risk profile posed by this leak is not the same as someone going around and collecting people's DNA off the street. The leaked database is a highly concentrated artifact, complete with likely-correct PII across a million people.

If people were regularly surreptitiously collecting DNA samples from "the wild" at similar scale, this would be a different conversation.


How much of that shedded DNA is being sequenced and catalogued in an internationally available online database?


No matter how securely you connect or rotate passwords, the inside-job or subpoena risks were too d*** high


Except... this leak was from people not having good password hygiene? Right?


What site is that posted on?

Looking at the data: name, location, dna group. Not very worried


anything that has connections to google is going to be a privacy nightmare


Many years ago I got a 23andMe kit. I was about to swab, when I asked myself, “Why is this thing so much cheaper than a lab?” I sent the unused kit back.


what marketplace is this on? I’ve seen the screenshots but I’m not familiar with that forum


“user data” …


Some background for new readers:

The 23andMe founder (and current CEO) Anne Wojcicki is the sister of Susan Wojcicki, until recently a long time Youtube CEO.

Anne is also the ex-wife of Google/Alphabet founder Sergey Brin. Google/Alphabet is also an investor in 23andMe. Youtube is an Alphabet company.


The Wojcicki-Brin family tree seems like its own mini-map of Silicon Valley!

It’s amusing and ironic to me that the co-founder of a gene-testing company has such an interconnected family network within the tech industry.


> Anne is also the ex-wife of Google/Alphabet founder Sergey Brin. Google/Alphabet is also an investor in 23andMe. Youtube is an Alphabet company.

It was sort of cool that as a 23andme employee some time back (10-12 yrs?), you could use some of Google's amenities because of these relationships.


[flagged]


23andMe is a 17 year old public company with almost $300m in revenue. I’d hardly call that a startup.


No, but you should feel sorry for all the people who refused to entrust 23andMe but whose relatives didn't get the memo. They'll be hurt by this too.


Whoopsie daisy.


Only complete and total fools use these services. People actually paying money to have their DNA data sold; you cannot make this shit up.

Apart from that, the "research" they do is dubious AF and can't be trusted at all. There are cases where people sent in the DNA of a banana and got some made-up human genetic history back.



