FTC fines Twitter $150M for using 2FA phone numbers for ad targeting (ftc.gov)
1362 points by averysmallbird on May 25, 2022 | 325 comments



I don't have the most optimistic outlook for this having any impact, but I really hope this sets a precedent for limiting the use of dark patterns with which companies try to tie your identity to a phone number. I think the total sum for this fine is rather myopic: it ignores the long tail of possible future data leaks and the impact it might have on the people behind the affected accounts.

I created my current Twitter account a few years ago and it remained dormant for a while. It was flagged as "in violation of our policies" despite having not made any tweets or using a handle or nickname that would cause offense to anyone. In order to resolve this, I had to enter my phone number to "secure" my account. I don't know what process triggered this review, but I'll be damned if it didn't smell like an easy way to associate an existing marketing profile with my Twitter account. Of course, it's vitally important to profile a service I used to keep up with industry news and post about Goban puzzles.

I've also run into similar patterns on Discord and similar platforms; "Oops! Something suspicious is happening with the account [you literally just created]. Please add a phone number to your profile to proceed."

Although I follow a reasonable set of practices around identity/password management, I usually architect my risk profile with a "I don't care if I lose this account" approach. If that statement isn't true, then I will happily apply all of the security measures available. However, it seems like the idea of creating "I don't care" accounts is becoming increasingly difficult as we continue to invest in user marketing analytics and lower the barrier of entry to these types of technologies that do not have the consumer's best interests in mind.


I tried to create a Facebook account recently to join a group. I used an Apple Hide My Email address. Within a day or so the account was blocked, and I was required to provide not just my mobile number, but a photo of me.

I did try uploading a celebrity photo instead, and of course it didn’t work. But I was shocked at the need to post a photo of myself. That is way past my creepy threshold.


As I understand it, if they're unable to tie you into their "social graph", you won't be able to use the service. I tried similar a couple of years ago when I begrudgingly wanted to participate in a group that insisted on using FB as the communication platform. Couldn't get an account to stick unless they had some way to figure out who I was. Eventually just stopped trying.


I don't think that's strictly true. I created an account a few months ago with a made-up name, throwaway gmail address and a (cropped) photo from thispersondoesnotexist.com, which is still working fine.


MercadoLibre asks for a photo of you with both sides of your ID card. Not asks, REQUIRES.


Not a user, but doesn't MercadoLibre operate as a licensed financial institution in at least a portion of their vertical where such a requirement would not be considered unreasonable?

I can certainly see them playing dark pattern games with sensitive privileged data from one business cross-pollinating into unrelated businesses by way of blind user agreement acceptance though.

EDIT: From their most recent 10-K filing[1], under Government regulation:

In addition, Mercado Pago as a payment institution in Brazil is subject to:

...

(iv) Data Protection Law: In August 2018, Brazil approved its first comprehensive data protection law (the “Lei Geral de Proteção de Dados Pessoais” or “LGPD”), which became applicable to our business in Brazil since August 2020. In December 2018, the former president of Brazil issued Provisional Measure No. 869/2018 which amended the LGPD and created Brazil’s national data protection authority (the “ANPDP”). We have created a program to oversee the implementation of relevant changes to our business processes, compliance infrastructures and IT systems to reflect the new requirements and comply with the LGPD. The LGPD establishes detailed rules for the collection, use, processing and storage of personal data and affects all economic sectors, including the relationship between customers and suppliers of goods and services, employees and employers and other relationships in which personal data is collected, whether in a digital or physical environment.

(v) Secrecy rules: In addition to regulations affecting payment schemes, Mercado Pago is also subject to laws relating to internet activities and e-commerce, as well as banking secrecy laws, consumer protection laws, tax laws (and related obligations such as the rules governing the sharing of customer information with tax and financial authorities) and other regulations applicable to Brazilian companies generally. Internet activities in Brazil are regulated by Law No. 12,965/2014, known as the Brazilian Civil Rights Framework for the internet, which embodies a substantial set of rights of internet users and obligations relating to internet service providers, including data protection.

Law No. 12,865/2013 prohibits payment institutions from performing activities that are restricted to financial institutions, such as granting loans directly. In November 2020, the BACEN approved the application filed by MercadoLibre Inc. for authorization to incorporate a financial institution in the modality of credit, financing and investment corporation (SCFI). In light of the authorization granted by BACEN, we incorporated a new entity (Mercado Crédito Sociedade de Crédito, Financiamento e Investimento S.A.), which operates activities related to the granting of loans and obtains better funding alternatives for our business.

...

The BACEN is also implementing the Brazilian Open Banking environment, to enable the sharing of data, products and services between regulated entities — financial institutions, payment institutions and other entities licensed by the BACEN — at the customers' discretion, as far as their own data is concerned (individuals or legal entities). The Open Banking implementation has been gradual, through incremental phases that take into account specific information/services to be shared, and Mercado Pago is a participant of the Open Banking system since February 2021, when its phase 1 started.

[1] https://www.sec.gov/Archives/edgar/data/0001099590/000156276...


Yea, I hate Mercadolibre for that. And it's even more annoying/impossible to use if you are not a resident or national.



I did think of that. In the end I really wasn't trying very hard. I left facebook many years ago and I wasn't really keen to rejoin. I found the car I was looking for elsewhere.


Facebook is pretty good at blocking that too. The giveaway is the blurry background, so a bit of photoshopping is required.


And use an online service to create a virtual phone number.


They don't allow numbers which are registered to VoIP providers.


There is a whole secondary economy around this of people paying for verification photos on gig platforms like Fiverr


You have probably been tagged quite a few times in other people’s photos. Totally creepy, yes, but unavoidable.


> I've also run into similar patterns on Discord and similar platforms; "Oops! Something suspicious is happening with the account [you literally just created]. Please add a phone number to your profile to proceed."

This happened to me the other day, but for an account which is years old, owns a server with 1200+ members, and... is already logged into Discord on another web browser.

I usually use Firefox for chatting, and Chrome for voice chat (since it works better than FF for that). So I usually have FF permanently open, and I log into Discord through Chrome from time to time to voice chat. So this one time I open up Chrome (while, again, I'm simultaneously logged in through FF and can fully use my account), try to log in, and... I get a "verification required" screen.

They allegedly think I'm an abusive user, so they're preventing me from logging in to Discord without verifying my account, but they're simultaneously letting me fully use my account? How does this make any sense? Any abuse I can do, I can already do, because I'm already logged in, from exactly the same computer.

I wrote to their support asking what's up with this, and they basically told me they don't care, and that I will be required to verify. Of course they won't tell you why they're doing it, because "security".

I regularly receive crypto scam DMs on Discord and they're seemingly unable to block those kinds of accounts, but they sure as hell are good at bullying legitimate users like me.


You can use software 2fa for discord at least


> You can use software 2fa for discord at least

They specifically ask for a phone number, and only a phone number.


Same thing happened to me and it's got nothing to do with 2FA.


Fines are generally made small enough not to be overturned, so the agency doesn't have to waste money litigating them, while also typically being large enough to change future behavior. They're not really retributive so much as cold calculations meant to get companies to do what they're supposed to do.


OK, but Twitter is a special case. After the breach the FTC brought charges against Twitter and mandated that the company implement a more robust security program.

This should have hurt Twitter way more.


Yet another way in which justice is not blind when it comes to wealth.


That’s a good point - I hadn’t thought of it that way. Thank you!


I've avoided Facebook vehemently for several years, but recently had to get it for a work thing, and seconds after creating the account I had to supply my mobile number because of something-something suspicious. So this is me contributing that anecdata.


Same with discord. “Not giving us enough data” is now suspicious.


>I created my current Twitter account a few years ago and it remained dormant for a while. It was flagged as "in violation of our policies"

Same here, linked it to PSN to get images off my PS4 and it was flagged before I could do anything.

Never did add my number and shortly after that they had a leak where any hacker could figure your number out.


Same here, although after a year of almost-daily "come back, come back!" emails (that went straight to spam for some reason) I tried again and got my account working.

Seriously, Twitter: go suck an egg. With so much money, how can you betray the trust of your users? I get daily calls from car warranty scams because of stuff like this.


I signed up on Twitter and before I could even post anything, I got something like "your account has been banned. Enter your phone number to unban it"

It was especially infuriating, because I did not have a smartphone, only a landline, and they wanted to send an SMS.


This is an interesting part to me: "[T]he new order[0] adds more provisions to protect consumers in the future: ... Twitter must provide multi-factor authentication options that don’t require people to provide a phone number."

I would like to see this be a more broad-based rule. No, I am not moved by "SMS is easy" or "getting a number that can receive SMS is harder for scammers to do in bulk." If you must, give users the choice but not the obligation to hand over a mobile number.

0 - https://www.ftc.gov/legal-library/browse/cases-proceedings/2...


To further expand on this. 2FA should not rely on SMS at all. It should be an option but not the default one. An Authenticator app should be the default. I know we assume everyone has a cell phone but that’s not the case.


Authenticator apps aren't much better. Look at their privacy policies. Installing Microsoft Authenticator means giving them your location data 24/7 and allows them to collect even more data on you than giving Twitter your phone number did. Do you really think they aren't going to use that data for anything else? I don't believe that any more than I believed Twitter.

Personally, I'd rather deal with the hassle of carrying around multiple hardware tokens than give companies a continuous stream of data about my personal life to use against me.


Afaik, TOTP is standardized, so you should be able to use any authenticator app for 2FA. Idk about Microsoft, but I haven't encountered any service that doesn't allow you to bring your own TOTP app.
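As a rough illustration of how standard it is, RFC 6238 TOTP can be implemented with nothing but a standard library; a minimal Python sketch (the base32 seed below is the RFC 6238 test secret, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Compute an RFC 6238 TOTP code from a base32-encoded seed."""
    # Decode the shared seed, tolerating missing base32 padding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of 30-second periods since the epoch.
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: 31 bits starting at a digest-derived offset.
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: seed "12345678901234567890", T=59s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59))  # -> 94287082
```

Any app (or password manager) that implements this math will produce the same codes from the same seed, which is why you can bring your own client.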


If I wipe my phone, or get a new phone, I still have my phone number and can receive SMS.

If I wipe my phone, I have permanently lost all of my TOTP codes if I wasn't careful and backed them up manually before wiping...

TOTP is great for security conscious and technologically fluent folks... awful for your grandma.


Bitwarden (and probably others) store the TOTP secrets and are persistent. This isn't a recommendation per se, I'm not sure how I feel about it being stored in the cloud, but it is a bit friendlier.


Sure, but this still requires a certain level of "awareness" for this technology.

It's sort of the same problem PGP suffered from. It's technically great, but cumbersome for non-technical people to use (particularly in a safe way), so people will avoid it.

2FA needs to be simple and easy to achieve mass adoption.

Making people install special apps for just one service, or find out one day they're permanently locked out of their facebook account (or far worse) is simply going to hurt adoption.

If your grandmother can't make it work on her own, then it's not good enough. I'm not advocating SMS is the best option for 2FA, I'm just pointing out the alternatives are currently not up to snuff.


> If I wipe my phone, I have permanently lost all of my TOTP codes if I wasn't careful and backed them up manually before wiping...

Do not, ever, store the TOTP seed in your phone! At least not as the one and only location.


I think this is good advice but it also shows why using TOTP as a default 2FA mechanism (instead of SMS) is a tough sell. How many people are set up to store a TOTP seed in a location other than their authenticator app? How many people even know what a TOTP seed is? I would wager that the vast majority of non-HN readers think of TOTP as a QR code that you scan into an authenticator app, if they are even familiar with authenticator apps.

SMS, for all its security shortcomings, is at least something that the vast majority of people understand already.


> SMS, for all its security shortcomings, is at least something that the vast majority of people understand already.

But of course SMS suffers from the same problem as naive use of TOTP: lose your phone, and you're locked out of every account you have.

So in the worst case, TOTP is as bad as SMS. But, with some awareness/education TOTP is far superior if the user doesn't fall into the trap of attaching the TOTP seed to a phone.

i.e. for the aware user, TOTP is far better. For the naive user, TOTP is no worse than SMS. Thus, always favor TOTP.


Email would be preferred. SMS shouldn’t be the default. If I lost my TOTP tokens, I should be able to go through a tougher path with an email verification step to get in to redo my tokens. What I don’t want is for them to send me an SMS to verify me. What if I’m in a different country? What if I don’t have cell service? What if I don’t have access to my phone and that’s why I’m rotating all my stuff?


why? what is your threat model that makes this a problem? are your passwords equally insecure?


Somebody steals your phone. Now you have lost access to all TOTP and you can't log into anything with 2FA.

This isn't about an attacker getting access to your TOTP codes - it's about you losing access to them.


TOTP isn't inherently tied to phones.


No, it's tied to the app because the initial secret is destroyed after you set it up. Every single authenticator app I've used (admittedly not all of them) requires manual backups, typically in some printed form.

All of my other apps automatically back themselves up, or Apple/Google backs things up for me. When I get a new phone or wipe my phone... after logging into all my account I fully expect my Authenticator app to show up on my home screen and have all my codes in there exactly as I left it before.

This is a huge pitfall for the unaware... you will lose all of your codes, and potentially access to whatever services or things they were protecting.


Authy, 1password, bitwarden, and others back themselves up. If not having a cloud backup is a negative for you, pick a TOTP app that does have it - it’s not a failure of TOTP that the few apps you’ve used don’t back up (or you aren’t aware they do).


> No, it's tied to the app because the initial secret is destroyed after you set it up.

TOTP is not tied to any app. When you set it up, save the TOTP seed in a secure place that you control. There is no need to rely on any app, which would be too fragile to consider.


Use 1password or Bitwarden. They correctly back up TOTP secrets to the cloud.

I consider Google Authenticator to be unacceptably bad.


This is good advice, and I will look into Bitwarden for myself personally, but this isn't a great solution for non-techies... which is the problem with anything that is not SMS 2FA.

We all agree SMS 2FA is not as secure as we'd like it to be... but no alternative exists. It's the classic sliding scale between usability and security. The most secure system is one you cannot use... and the most usable system is one with no security. We need something that is very usable, and still secure... perhaps a tall ask but that is indeed what we're after.

Until then... regular people will continue to use SMS for 2FA. We should be happy people are at least comfortable with SMS 2FA instead of not using 2FA at all.


Most of the services provide backup codes when you enable 2FA.

I don't think that's a huge problem.


Even if they don't, you can back up the QR code used to set up the app.
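For context, that QR code is just a picture of an otpauth:// URI carrying the base32 seed; a sketch of decoding one with the standard library (the URI, label, and secret here are made up for illustration):

```python
from urllib.parse import parse_qs, unquote, urlparse

def parse_otpauth(uri):
    """Pull the account label and TOTP parameters out of an otpauth:// URI,
    which is the text a provisioning QR code encodes."""
    parsed = urlparse(uri)
    assert parsed.scheme == "otpauth" and parsed.netloc == "totp"
    params = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    return {"label": unquote(parsed.path.lstrip("/")), **params}

# Hypothetical example URI; "secret" is the base32 seed to archive.
info = parse_otpauth(
    "otpauth://totp/Example:alice%40example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"
)
print(info["label"], info["secret"])  # -> Example:alice@example.com JBSWY3DPEHPK3PXP
```

Keeping a copy of that URI (or just the secret) somewhere safe is equivalent to backing up the QR code itself.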


i agree that the authenticator app stuff is fraught for the average user.

> No, it's tied to the app because the initial secret is destroyed after you set it up. Every single Authenticator App I've used (which is not all of them admittedly), requires manual backups - typically in some printed form.

i scan the QR codes with a normal code reader, and then put the information into keepassxc. i can view the secret, generate codes, do whatever, and it's all with decent open source stuff and stored in a file i can back up.


Too many people lose their phone number.

TOTP at least is just a standard so you can either use a client that has backup options, write your own, or whatever. It's better.


It’s much more common to lose your phone than to lose your phone number (in the UK at least - most likely this varies by country).


Do you have Android? iPhones are set to backup automatically to iCloud by default. I don’t even think about backups at all.

I upgraded to an iPhone 13 about 6 months ago and it was almost completely seamless to restore everything to it.


Your authenticator apps won't be backed up. They require you to export them to a QR code or some other printed format, then "restore" them once you set up your new phone.

Probably a security policy thing more than a technology thing... but the result is the same. TOTP is dangerous for the wrong user.


For the past several years both macOS and iOS have TOTP built into the password manager. It’s non-obvious how to set it up and doesn’t auto-prompt readily like password management does, but I’ve moved all of my TOTP over and have a backup, it’s synced to all my devices, and I don’t need a dedicated TOTP app.


Wow I never knew this.


That’s up to the developer, it’s not a requirement of TOTP apps.


well that’s not true if you use backups or authy


does your grandma use backups?


Steam is another big one. They require you to use the Steam mobile app and it's the only way to do 2FA - no QR code. I've since dropped 2FA for Steam altogether.


This seems to have changed; you're able to use e-mail MFA (and that is what I use for my account).

Let's just hope they don't use _that_ for marketing purposes! ;)


Actually, that option has existed forever, since even before the app-based MFA.

However, if you use the "less secure" email MFA then steam places limits on your account that don't exist with the app MFA, like a forced delay on executing trades.


The Steam 2FA generation has been reverse engineered and you can use it in some password managers. Hardest part is extracting the secret

https://bitwarden.com/help/authenticator-keys/#steam-guard-t...
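For reference, the reverse-engineered Steam Guard scheme is reported to be ordinary HMAC-SHA1 TOTP with the 31-bit truncated value mapped into a 26-character alphabet instead of decimal digits; a sketch under that assumption (the secret below is a placeholder, not a real shared_secret):

```python
import base64
import hashlib
import hmac
import struct
import time

STEAM_CHARS = "23456789BCDFGHJKMNPQRTVWXY"  # Steam's reported code alphabet

def steam_guard_code(shared_secret_b64, now=None):
    """Generate a 5-character Steam Guard code from a base64 shared secret.
    Same 30-second HMAC-SHA1 scheme as TOTP, different final encoding."""
    key = base64.b64decode(shared_secret_b64)
    counter = int((time.time() if now is None else now) // 30)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Encode the truncated value as five characters from the Steam alphabet.
    code = ""
    for _ in range(5):
        code += STEAM_CHARS[value % len(STEAM_CHARS)]
        value //= len(STEAM_CHARS)
    return code

# Placeholder secret; a real one has to be extracted from the Steam app's data.
print(steam_guard_code(base64.b64encode(b"not-a-real-secret-123").decode(), now=0))
```

Password managers that advertise Steam support are doing exactly this conversion on top of their normal TOTP machinery.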


"upgrade" steam to version 2.1.4 or older and you can use adb backup. Android backup extractor can convert the backup to a standard tar file.


If you've got the steam mobile app for Android for 2fa and want to move to a different app that supports steam 2fa (aegis, winauth, etc), use steam auth on multiple devices, or simply move to a new device without a temporary trade block, version 2.1.4 of the steam app will allow you to perform adb backups; Android backup extractor will allow you to convert the backup to a standard tar file to extract the secrets if you want to.


Unfortunately, SendGrid and other users of Authy with no alternative 2FA systems in place lock you into the Authy app or SMS as the fallback. There are some, very limited, workarounds for this, but they still require you to have the account in the Authy app.

———————

On a recent find apparently Authy (the app not the sms fallback) has a weird, uh, “feature?”, where my 2fa, for example, for Sendgrid will unlock all of my Sendgrid accounts, which I personally find mildly concerning.


If you load your Sendgrid Authy 2FA on a rooted android phone, you can extract the TOTP secret that powers it under the hood and put it in Bitwarden like you prefer.


Authy used to just be TOTP IIRC - did that change?


Authy has both TOTP functionality and a proprietary system that's different, similar to Symantec VIP or Entrust


Not in a consumer context, for sure.

Ultimately with any service you’re only protected by your contract and the PR value of a breach of trust. Unless you’re using an open source app and rolling your own sync, an app where trust is paramount (1Password), or one where a misstep is a huge media hit (Apple), you’re at the mercy of that company.

Microsoft fwiw, probably uses location to spot fraud and is unlikely to breach user trust imo.


I have. I worked for an enterprise that used OneLogin and could only use the OneLogin Protect app for 2FA. I thought 1Password was broken, but I tried a different app with my phone camera and it said the QR code was invalid.


That's configurable in OneLogin, your company just hadn't added more options. I added WebAuthn, Protect and TOTP just this morning.


So don't use Microsoft Authenticator. There are many options without the privacy problems with the MS App (which, IMO are overblown, but whatever). Go run your own if you want to be absolutely private. I'm happy with 1Password for managing it.

http://www.nongnu.org/oath-toolkit/oathtool.1.html


Are you using Microsoft Authenticator in a corporate environment/profile? I just checked my personal install (Android) and it does not require any permissions (location is denied).


From the play store (https://play.google.com/store/apps/details?id=com.azure.auth...):

This app has access to: Photos/Media/Files

    read the contents of your USB storage
    modify or delete the contents of your USB storage
Location

    precise location (GPS and network-based)
Contacts

    find accounts on the device
Storage

    read the contents of your USB storage
    modify or delete the contents of your USB storage
Camera

    take pictures and videos
Identity

    find accounts on the device
    add or remove accounts
Other

    receive data from Internet
    run at startup
    draw over other apps
    prevent device from sleeping
    create accounts and set passwords
    view network connections
    close other apps
    control vibration
    use accounts on the device
    full network access


As with most (all?) Android apps, granting a permission requires user consent; Camera, "Files and media" and Location are all set to "Not allowed" on my device. From what I can tell based on Microsoft's help page, location may be a requirement of a work/school account, and as far as I can remember, I've never been prompted for the location permission - it's possible I denied access if I was, but the app works without it.

From Microsoft's Authenticator help page:

"You will see a prompt from Microsoft Authenticator asking for access to your location if your IT admin has created a policy requiring you to share your GPS location before you are allowed to access specific resources"

https://support.microsoft.com/en-us/account-billing/common-q...


There's one big data leak which Android/iOS deliberately don't let you control: internet access. TOTP apps don't need it, and yet.

The Microsoft app does have a mode which uses the internet to push a message saying "Is this you logging in?", which is weaker than TOTP but feeds into their "AI threat detection engine" mantra. It seems to fall back to TOTP if there is no network.


The problem with android is that it's designed to leak your data like a sieve, so permissions are overbroad and all or nothing. Most people will accept any and every prompt for a permission they're told is required in order to use the app, even when the app doesn't always need it to function.

MS is clearly using this to their advantage and asking for everything they can provide even a thin justification for, but even if you're just giving them a fraction of what they're asking for it's still far worse than handing over a cell phone number. My work considered requiring Microsoft Authenticator but after enough people balked at handing over so much data to MS they caved and we got simple little keychains that do nothing but spit out numbers and can't collect our contacts, our location, or start listening using a microphone. It's hard to beat that.


There are free, open-source, and privacy-respecting options for TOTP 2FA that don't require a mobile phone plan.

You can use something like KeepassXC (desktop) or something like KeepassDX or Aegis (on F-Droid on Android) for your OTP authentication app to manage 2FA for Google, Amazon, eBay, Dropbox, etc. and there are other options as well.


Just wanted to add emphasis on Aegis. I've been using Aegis for Google, GitLab, PSN, domain management. No issues.

And it needs zero permissions (aside from camera, which is granted on an as-needed basis for scanning QR codes). It also works fine without ever having an Internet connection.



Vaultwarden has TOTP support built in, and there are like a dozen open source TOTP authentication apps out there. There's no reason you have to use an app that invades your privacy for TOTP.


Sadly, it uses rust nightly, making setup bleeding edge.

And node, meaning it's a security nightmare.

There are likely other options, I guess, but for something of this level (keys to your, or a company's, kingdom!), I'd want to see a project with a history as long as your arm, loads of review, etc.


The Vaultwarden I'm using uses stable Rust 1.60 and Node isn't involved at all.


The build docs talk of the above.


Ah, good point. I'm able to build Vaultwarden with stable Rust, so maybe it's just a requirement for development. Vaultwarden lifts web-vault from Bitwarden, which uses Node, but you aren't required to run it with web-vault.


I use Microsoft Authenticator on iOS, and it doesn't use location. (I didn't even need to deny it--it didn't ask for it.)


They do ask on Android it seems. Not sure if this text is common across all apps seeking location permission.

> Optional App functionality, Fraud prevention, security and compliance

However, I'm not surprised such apps keep double standards between iOS and Android. Apple spanks (or spanks harder) the apps that ask for permissions frivolously or block functionality behind permissions unnecessarily just to collect data.

E.g., I use TrueCaller on iOS without giving it the Contacts permission, but on Android the app's features are blocked without it. Not sure about now, but earlier Ola/Uber didn't work on Android without location permission, while on iOS they did and still do. Many such examples.


I believe GP meant authenticator app like authy, duo, or any other TOTP. You're not giving anyone your location by using that.


> Authenticator apps aren't much better. Look at their privacy policies.

For the most part, "authenticator app" means TOTP, which isn't proprietary.

Which is beautiful because you don't actually need any app for it. Just save the TOTP seed. There are plenty open source implementations to compute the one-time code when you need it.


That's not the problem with SMS for 2FA. The problem with SMS for 2FA is that cell phone accounts were never intended to serve as what amounts to a high security authenticator service, and cell phone companies are resistant to this newfound 'responsibility', somewhat understandably so.

By default, someone can call up your cellular provider, claim to be you, pass trivial to no security questions, and request a replacement SIM be mailed out, or that your number be ported to another device. Or they can slip a bit of cash to any employee who works at any cell phone store that sells service for your carrier.

SMS 2FA isn't better than just a password. It's objectively worse, dramatically increasing the attack surface. Compromise someone's cell phone account and you are virtually guaranteed access to their bank/retirement/investment accounts, email, social media, etc. And they are virtually powerless to do anything about it for at least a few hours while they scramble to, say, get phone service working again and rush to contact everyone they can think of. By the time you're able to even get to your bank to talk to a branch manager and show all sorts of proof of identity, your accounts could be long since cleaned out.

Some providers finally are offering secondary passwords for porting/SIM replacement, that sort of thing. Absolutely call them and request your account be locked down as much as possible, ask to specify a secondary password, etc.


> It's objectively worse, dramatically increasing the attack surface

Any 2FA - no matter how weak - should in theory not be weaker than no 2FA. In practice of course these things can often be used as the only factor to "recover" access to an account so yes, weak 2FA like SMS can make things worse.


I use the FOSS https://github.com/beemdevelopment/Aegis and like it far better than other TOTP apps for its features and UI.


> Installing Microsoft Authenticator means giving them your location data 24/7

On my iPhone settings it doesn’t seem like Microsoft Authenticator is accessing location data at all.



What about your car? Most modern cars have connections to low bandwidth systems. Your phone?

But I gotta ask. How are they using your personal data against you?


My car is old enough to just have onstar which is physically disconnected. My phone I avoid adding personal data to (outside of messaging), I limit my browsing, and I've done what little I'm allowed to in terms of locking it down and removing unwanted features. Ultimately though, it's a necessary evil I'm still hoping to find a solution for.

> But I gotta ask. How are they using your personal data against you?

The answer is that they'll use your data against you in any way they can, if it works to their advantage in any way.

Companies don't care about you and your needs, they care about themselves and making money. The reason there is a multi-billion dollar industry around the collecting, buying, and selling of even the most mundane aspects of your life is that companies have seen that all that data can be leveraged against you to give them money and power and one way or another that usually comes at your expense.

Often they use the data they collect to manipulate you. Maybe they want to get you to buy something you wouldn't otherwise, maybe they want to shape your political opinions. Maybe they just sell your data to others directly, or they use that data to make it easier for others to exploit you.

It doesn't matter if it's Facebook selling your data to Cambridge Analytica so they can try to swing an election, a group of activists buying up lists of people who have visited abortion clinics so they can harass them, or a company or data broker letting people buy lists of individuals with low IQs and poor education, or lists of people likely to suffer from dementia so they can be targeted with scams; it's all using the data you barely noticed you were handing over.

Even when it's not intentional, algorithms are constantly searching for ways to exploit you at the moment you're at your weakest. They can detect when someone who is bipolar is entering a manic phase and push airline tickets to them, since people in that state tend to make last-minute travel plans. They can detect when you're tired, heartbroken, or under a lot of stress and anxiety and target you aggressively at those times, using one trick after another to find whatever works best (both using what has worked for others like you and tailoring their methods to you individually), and they do it all without ever being explicitly programmed to. The algorithms just maximize for results, and the ends justify the means while giving corporations plausible deniability for even the most egregiously exploitative means their algorithms employ.

In the US, companies like Google, Microsoft, Facebook, your ISP, and your cell phone company routinely turn data over to the state, with both three-letter agencies and local police departments sucking up all the data they can. It's a huge violation of our rights and a threat to our freedoms.

Even the most well-intentioned company collecting your personal data is likely not doing enough to secure that data, and whatever data they hold onto is just waiting to be abused when a less scrupulous person takes over, or to be handed over when the company is bought or absorbed into another, or to be sold should the company ever go bankrupt or become desperate enough for the money.

One way or another, the data you hand over will be used against you, and worse, you'll have no idea when it happens. Today people are turned down for jobs and denied rental contracts because of the data collected on them. They are charged more for the same products they buy online than what other people are paying. They are told a company's policies are one thing, while other customers are told they are something else. Their insurance premiums are raised based on this data. Companies have even been shown to use this data for things as trivial as leaving some people on hold longer than others, but nobody is ever told the reason why those things happen. You have most likely already paid more than someone else, had your time wasted, been denied something, been misled, or been rejected based on the personal data you've handed over.

Nobody is using your data to protect you or put more money in your pocket. It is always used to serve someone else regardless of what that does to you.


Looking at settings, MS Authenticator hasn't ever requested location permission on my iPhone, and I'd be able to deny it if it had.


You don't have to use microsoft authenticator. TOTP is a big step up from SMS and most/good apps won't violate your privacy.


You don't have to enable location permission. Unless there are some geo-fencing options I'm not using.


Why does an Authenticator app even have location access? Geoblocking?


In November 2021 an optional geolocation feature was added to MSFT Authenticator to allow admins to block foreign access.

https://techcommunity.microsoft.com/t5/azure-active-director...

https://support.microsoft.com/en-us/account-billing/common-q...


Exactly, IIRC you can do policies related to locations. It's an optional feature, you don't need to enable it and the app will overall work just fine.


I wondered the same thing about needing access to the cameras and microphones. Turns out they justify it by saying it's for reading QR codes (as if phones had no other way to do this).


FreeOTP works just fine for me


Just wanted to give a shout out for Bitwarden. They have a fantastic TOTP 2FA as part of their premium solution (which is really just $10 per year)


Agreed, and I'd like to second the "fuck SMS 2FA" stance. I've been locked out of my bank while abroad because I couldn't receive SMS.


"Telcos declare SMS 'unsafe' for bank transactions"

https://www.itnews.com.au/news/telcos-declare-sms-unsafe-for...

And nothing has changed to make SMS any more secure since 2012...


It's not just SMS that is the problem. Cell phone accounts were never intended to serve as high security authentication sources. It's laughably easy to take over a cell phone account, and once you do so, you likely crack open the victim's finances like an egg and leave them almost powerless to do anything about it before damage is done.


That's a _feature_ for the telcos. They want you to be able to walk into a store, grab a new phone, and walk out several hundred dollars lighter with your old number already working on your shiny new phone. They want to make it as easy as possible to do that.

They never signed up to be the gatekeepers to your digital life, and there's nobody paying them to do that. Expecting them to make their sales experience worse than their competitors just because some bank or crypto exchange can't work out a decent way to authenticate their customers is insane.


Not really; I value it and would pay a higher price to use a telco that safeguards my number. Unfortunately, this (like much of security) is invisible to the consumer until it is too late.


It's not just that not everyone has a mobile phone, people who move between different countries change numbers and lose access to their old one. I've been screwed over by this many times. You can't assume someone has the same phone number for their entire life.


My experience is very much to the contrary. My phone with a Microsoft Authenticator app died and I still cannot access my MS account. In a similar way, it took me some time to get my smartbanking apps running again.

If it was just SMS, I would move the SIM card elsewhere and hey-presto.


This. I regularly live in 2 different countries; Canadian cell plans are exorbitantly expensive when roaming, so I have 2 phone numbers and it's extremely annoying when 2FA relies on said phone number...


WebAuthn should be the default


Agreed. TikTok does this as well... I literally get spam calls just as I scroll through content there. Microsoft leaks numbers somehow as well, even on Enterprise service apps, as I used to get spam calls on company phones from authenticating on Azure all the time, when the numbers were totally private and used for business calls only.

It's really a new form of torture to have to remember a really complex password and do a TFA login every time a change needs to be made; it's one of my least likeable parts of modern jobs. It's not even like we had breaches; it just became mandatory, and now a lot of things like cloud config go largely unchecked by admins because it's so tedious to log in so frequently, and they often get locked out the minute they forget their phone at home because email is often another password/TFA hurdle... Stupid wins first these days.

Breaches still happen elsewhere all the time despite TFA adoption; hackers keep engineering workarounds, and mobile security is compromised when personal devices are used anyway. A much better approach is segmenting data and retaining only essential/required data, but systems are built to collect far too many details on users and subject matter, reaching well beyond a "need to know" basis, which also dramatically increases the impact of modern data breaches.

TFA used to be based on email and it was just fine. The only reason phones became mandatory is precisely the illicit use of phone contacts by these platforms. The bootleg calls also probably eat up tons of money in prepaid minutes for people with those types of plans, yet whenever there is a fine, none of the affected people see true justice.


I just ordered something online, last night, and it had two required fields:

1) Mobile Phone (landline is not required)

2) The phone number/address needs to be the same as for the card.

I don't use a mobile phone for the card. I use my landline, so I entered that.


Yeah for those cases where phone is required I usually put (555)867-5309. They can spam someone else…


Hat tip on the phone number. Ref: https://en.wikipedia.org/wiki/867-5309/Jenny


And port the dang 2FA apps to desktop. A tower or laptop is something you have. Don't depend so much on a damn phone for everything.


I was overseas and my provider (Cricket) doesn't have roaming so I usually pick up a cheap prepaid SIM locally.

I didn't enable 2FA on Uber but it insisted on sending me a code via SMS (of course, to my inaccessible US number). That was incredibly stupid and shortsighted. Meanwhile, all services that were set up for Authenticator MFA worked just fine over the European carrier's LTE.


I also tend to purchase a prepaid SIM while overseas, and ran into a bunch of issues paying for bills at bars/restaurants using their website or mobile apps, as my credit card was doing a second layer of authentication through Mastercard's Verification and would only send me a code via SMS!

It is crazy that my capitalone mastercard wouldn't allow me to do the validation through my capitalone app!


Next time, use an app like pushbullet that will forward your text messages to you. It's a huge security risk since if pushbullet or whatever gets hacked then the hackers would get all of your access, but for a short term utility like ensuring that you have connectivity it may be well worth it.


This. It drives me crazy that the most critical service I use, Vanguard, still requires SMS for 2FA. It's a prerequisite for using a YubiKey. It makes no sense.


Also, SMS is not nearly reliable enough; you should have alternatives for that reason alone. My cell carrier was blocking many SMS verification messages for a good two months. It caused me all kinds of problems when my credit union merged with another and I had to change account numbers all over the place. Many had the option of using an email address, but there were quite a few where it was SMS or play find-the-human on the 800 number.


Unrelated to Twitter, but your post reminded me that consumers should also have an independent "account number", for lack of a better word, that belongs to the user, like a telephone number. Electronic payments would come out of this personal account number and be forwarded to whatever institution(s) the person wants. Then changing banks would be as easy as changing phone carriers.


This. Allow email and authenticator app options. It is also frustrating that voip numbers don't work in some auth scenarios.


> "Twitter must provide multi-factor authentication options that don’t require people to provide a phone number."

Hmm. That might be difficult. I always thought the reason Twitter required phone numbers was to stop spam accounts from being created. So a phone number is basically acting like an expensive captcha. This order seems to be saying Twitter needs to stop requiring phone numbers. That might lead to an increase in spam accounts.


They already provide these options today.


It's a precedent. This isn't just about Twitter; there are many who do not offer such options.


I don’t think it has any applicability to anyone beyond Twitter.

Maybe it’s a precedent that the FTC will tell you to add non-SMS 2-factor if you are misusing the SMS factor for advertising, but that’s a pretty limited precedent!


It emboldens prosecutors and DAs, and makes conversations about going after other bad actors more tenable.


I hope Authy, Apple, Twilio, and many others take note.


I hate all the different ways companies target people.

I recently booked flight on American Airlines for my 80+ year-old father. I requested the golf cart to take him between gates.

Immediately I got a call from "American Airlines Health Alert."

They made it sound like there was an issue with the booking... "An important health alert related to your flight." And there was a "Press 1, if you're over 50" option.

Anyway long story short it was some shady marketing company selling me a panic button in case of falls.

The lady was like, "these are very expensive devices"... "we'll give you the device... but you pay a small fee for monitoring every month."

Clearly she'd given the pitch 1,000 times. Didn't give me any time to talk. Finally, I was like, "Hey is there a problem with my Dad's flight, or are you just trying to sell me something?" And she hung up on me.

Fuck American Airlines. Fuck all the airlines, really, but it should be illegal to target the elderly just because they asked for help with connecting flights.


Twitter doesn't let me DM people who don't follow me because I haven't provided a cell phone number. I refuse to give it, mostly on principle. I send messages very rarely and am clearly not a bot. When did demanding a phone number become OK to access basic elements of a service? This happens even when I try to DM people whose DMs are open.


The one time I tried to make a Twitter account it locked me out "due to suspicious activity" and then later required me to provide a phone number. I never even made a post or really finished entering account information. So it seems it's now basically required for an account period.

I was outraged and agree with you. It also takes on a new cast in light of this FTC action.


Got locked out of facebook for 'suspicious activity' as I was logging in to permanently close account.

Only way to log in now is to provide them with a scan of my passport or drivers license.

...


Amazon did that with me for reasons I do not understand (again, unspecified "suspicious activity"). When I sent them scans of my driver's license they concluded they were too blurry. I'm not sure how I could send them a higher resolution scan over fax, which is what they required (the scans also seemed pretty clear to me and others I showed it to as a sanity check).


Well creating a new account is the strongest signal someone is a spammer /j


Twitter requires a phone number, they just don't require one at sign-up. This is a known pattern they do with every single account - you have to provide a phone number at some point. If you ever log in with a VPN, they may also require a number to even access your account at all.


Google forced me to give a phone number to verify my 10+ year old account, not because I forgot my password, but because they want all your information (I think that they buy ID info from the phone companies)... and I don't even use any kind of 2FA...


Did the same to me. Refused to give it, luckily I only used it for YouTube, lost the 100 odd channels I subscribed to and my playlists and that was about it. Use an RSS reader now to track my favourite channels


That's the problem I see with most 2FA: you have to reveal more of yourself instead of less, increasing the potential attack surface instead of minimizing it. If anything, recent history has shown that you cannot trust anybody on the internet. Even if they're not outright hostile or abusive, they can still get cracked and their data stolen. For myself, I'd rather rely on strong, well-protected passwords and no 2FA as far as possible, but most people might not know how to do that or would find it too inconvenient.


Where is this the case for 2FA other than SMS? It seems like the other common ones are either just verifying a shared key, or some type of one-time pad


$150M just seems too low a fine. The expected value of being fined is less than the rewards, which encourages future abuse by other players.


given twitter isn't profitable, i'd imagine it's more money than they've ever made


>given twitter isn't profitable, i'd imagine it's more money than they've ever made

But it's not as much money as they're making in ad sales to those phone numbers. Twitter will just see it as a cost of doing business and there won't be any meaningful change.


The cost, whatever it will be, is always just a cost of doing business.

The purpose of the fine is to make the business unprofitable, which it is successful at doing.


The purpose of the fine is not to make the business unprofitable, it is to make that particular endeavour unprofitable. If profit from violating the law minus the fine for violating the law > 0, then any business not run on ethics will continue to violate the law, because not doing so is less profitable. This does not change if the business is throwing out money elsewhere.


I appreciate the security of 2FA, but I don't like the liability and I don't like being required to have my phone at all times. Just one of my gripes with the world.


I propose multiple YubiKeys for this. Unlike TOTP, it's not susceptible to phishing, and you can keep Nano keys inserted in your USB ports that you regularly use. You don't need your phone or anything most of the time.


Ya, seems like a decent alternative


Not looking forward to Github making it mandatory. I don't want something I can lose to control my access. The insidious part is, as it becomes normalized, more employers will think that they can force their workers to participate in broken security theater with their own private property rather than a proper solution with corporate assets.


Guarantee they're doing the same thing with phone numbers used to verify accounts, as well. I'm not talking about the blue check mark verification, but the verification they impose upon new accounts to prove that you're "real" and not a bot.


I figured that was the whole reason every social media site bombards you with requests to "verify" your phone number.


Repeat after me: we need FIDO2 in exactly the same physical form factor as your house key. Give ‘em away all over the place, make it the default conference swag. SMS is not good.


I mean... please don't trust a random key handed to you at an event by someone you'll never see again, who might not even be affiliated with the company they claim to represent, not to have a copy floating around in a database.


Agreed, but they need to be sold cheaper first. A YubiKey costs me around $40 and it’s the only brand I trust.

That’s inaccessible to a lot of people.


And then there are UI/UX constraints. The Venn diagram of "knows how to use SMS", "knows how to use 2FA" and "knows how to use yubikey etc" does not have a lot of overlap outside a tech audience.


They used to offer an $18 yubikey, but the cheapest one today seems to be $25. So a bit better than $40 but still way more than say a house key costs


The FIDO2/WebAuthn ones (the Security Key series) are $25 for USB-A or $29 for USB-C. You can't use them for storing OATH (TOTP/HOTP) credentials, they're not smart-card compatible, they can't store your PGP keys, and they can't create one-time passwords or store a static password securely... but they are cheaper.


Built-in to laptops and phones is the only way we’ll get mass adoption


What happens when you lose it?


FIDO keys should only be sold in 2-packs in my opinion. You should never have a single key as your second factor.

That being said, one-time use backup codes are a standard way out of the problem.


> That being said, one-time use backup codes are a standard way out of the problem.

Hardly any services seem to give that option unfortunately.


But I don't carry a house key, because my door is opened by my phone.


We need to turn data into a liability.

There's a reason many places work on a "need to know" basis.


data is a liability regardless


It's such a liability that people build corporations with revenues of hundreds of billions and profit margins above 20% out of exploiting it almost mindlessly (more data!!!).


I remember I got scared this might happen when Epic introduced 2FA for claiming free games[0]. FTC check Epic Games too.

[0] https://www.pcgamer.com/uk/for-a-while-epic-games-store-will...


Apple could complement their existing “hide my email” with a “hide my number” feature that makes it easy to create disposable tracking protected phone numbers. This would help counteract the “oh something about your account is suspicious so give us your phone number” dark pattern.


Apple's feature is inherently monopolistic and not really new. Login with FB and Google already had options for not sharing your email.

I still prefer to signup using a spam email address.


Hide my email services are a dime a dozen, but for a “hide my phone number” service you need deeper pockets because phone numbers cost money.


I really can't believe companies are still doing this with people's data. Insane that this is still a thing companies abuse.


I've assumed facebook and google do this too. No? Or it's okay if they haven't promised not to (have they?)


The $5B the FTC fined FB for privacy issues was in part because of using two-factor phone numbers for ads.


Does the recent 5th Circuit decision[1] related to civil penalties issued by administrative agencies have any relevance here?

The article mentioned that the complaint was "filed by the Department of Justice on behalf of the FTC," which sounds a bit more involved than the FTC saying, "Hey Twitter, here's your sign, now pony up"...I have no idea how the game is actually played though.

[1] https://news.ycombinator.com/item?id=31429091


Probably not.

> The 2010 complaint cited multiple instances in which Twitter’s actions – and inactions – led to unauthorized access of users’ personal information. To settle that case, the company agreed to an order that became final in 2011 that would impose substantial financial penalties if it further misrepresented “the extent to which [Twitter] maintains and protects the security, privacy, confidentiality, or integrity of any nonpublic consumer information.”

The $150M fine is because Twitter violated that settlement agreement.


Thanks for the clarification.


Twitter itself is still doing it: even if you opt out of all personalized ads in their app, it'll still advertise stuff to you derived from tracking your browser history.


$150 millions? Heckin wowerino, now that sure made it all not worth it huh, they're never going to do it again, no siree.

The current state of the web is completely laughable.


When you can have Authenticator Chrome extensions [1] what is the point of 2FA? Who decided making it harder to login for an average user is worth the added security? I'm not arguing security is not improved. The question is who weighed the pros/cons of 2FA and decided the entire industry should adopt it? Can we shine some light on the orgs/individuals responsible for this.

> This article is written like a personal reflection, personal essay, or argumentative essay that states a Wikipedia editor's personal feelings or presents an original argument about a topic.

Wikipedia describes 2FA very matter-of-factly, without any background on its history and its advocates [2].

[1] https://chrome.google.com/webstore/detail/authenticator/bhgh...

[2] https://en.wikipedia.org/wiki/Multi-factor_authentication


I may have some idea about this since I was kind of around the space at the time. But to be honest I don't understand your question. Are you asking about the benefit of TOTP as an authentication mechanism when users can install insecure browser-based TOTP implementations?

As far as the history, and "who", I think this has a very long history in the "security-industrial complex", which probably means : NSA. Certainly the idea of 2FA goes back as far as smart cards (early 90s). Then came RSA SecurID which I saw as a hack to give you something similar to smart card security but without the need to roll out a PKI. TOTP seems like it is a generic version of SecurID. I don't particularly remember any vendor agenda on all of this, more like everyone was looking to fulfill government and bank requirements for security then the techniques employed leaked out into the corporate/enterprise world, and finally (like, around today), have become mainstream in the B2C use case. My perception has been that all of this was pretty much about "making things better" by some definition of better that depends on reasonable security for reasonable cost, in the context of typical user behavior.
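For reference, the TOTP algorithm itself is tiny. Here's a minimal Python sketch (HMAC-SHA1 over a 30-second time-step counter, with the dynamic truncation from RFC 4226), checked against the published RFC 6238 test vector; a real app would add clock-skew windows and rate limiting:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59))  # 94287082
```

Note that server and authenticator share the same secret, which is why a compromised browser extension holding your TOTP seeds is a real problem.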


This looks much more like showmanship than actually improving security. Again, I'm not saying security is not improved. Now there are people who are happy they set standards for others to follow, and IT managers who can show off to their bosses that they're following security standards like ISO 27001 and SOC 2. The SOC 2 standard is set by the AICPA; the last A stands for Accountants. Of all people.


Certainly there's plenty of hype and herd behavior in this industry, but underlying this is a simple desire: don't allow users to give their passwords to a third party. Or rather, they can do that but the third party won't be able to authenticate because they don't have the smart card or 2FA dongle.

Often there is a requirement in commercial contracts requiring adherence to certain security standards. An example of such a contract is liability insurance.


Apart from security and privacy implications, phone numbers for 2FA are a major issue when you travel to a country where your number is not working. I had to communicate in a very complicated way with my health insurance because of this. Why is that entire practice not banned yet?


Twitter is terrible... sorry, for being caught. They promise to do better in the future. /sarcasm


Some weeks ago I wanted to deactivate my Twitter account. I hadn't used it for a while, and it claimed that my account was locked. Nothing was sent from it in many months, so it wasn't clear why/how it would be locked now.

For some reason you cannot deactivate your account when it is locked.

So I contacted Twitter demanding that as EU citizen (which is true) I hereby demand all data about me that Twitter or its subsidiaries might have, including account data, to be deleted under the GDPR... Or alternatively unlock my account so that I would be able to deactivate it.

They were actually pretty responsive. My account was unlocked 30 minutes later and I was able to deactivate it.


This is surprisingly reasonable. I would like to see a decisionmaker do some time for fraud, though. They locked people out of their accounts and demanded phone numbers for "safeguarding," then used them for targeting in direct contravention of a previously negotiated agreement with the FTC. If that doesn't rise to criminality, the fraud statutes need to be updated.

edit: they should also be required to dump the phone numbers (even to be recollected later, without the deception), but I didn't see that in the article. Are they being allowed to keep the proceeds of a crime?


First you have to establish who goes to jail; corporations avoid this by having vague structures for shifting blame, so a jury can't decide whether any particular individual is actually at fault.

There probably should be laws establishing ultimately responsible people, with the unenviable duty of answering for illegal things corporations do (sort of like an engineer signing off on the design of a bridge), but it's doubtful such a thing will happen.

We're left then with personal responsibility being limited to people stupid enough to leave pretty explicit records of nefarious intent to commit crimes.


Meh, that's why the CEO makes the big monies. "The buck stops here" kind of thing. Charge the CEO. Make it the CEO's problem to prove they were not responsible, and only by giving up the person that was. I do believe sometimes CEOs are not fully aware of what happens below them in the org tree, but they are accountable for their people. If the CEO can't handle that, then they shouldn't be accepting the role. Clearly, this has to be understood as part of the job description.


Here is a fictional timeline of events:

* Problem: Spammers automate creation of accounts. Solution: Reuse the MFA infrastructure as some kind of "CAPTCHA". The phone number is not stored.

* Problem: Spammers use a single phone number to unlock 1000's of accounts. Solution: Store the phone number - so those kinds of misuse can be detected.

* Problem: Ads-Team wants to sell more targeted ads. Solution: There is possibly a phone number stored in the user profile, use that.

Who is to blame here? The Ads team that didn't check if the number can be used?


Solution #2 modified to be safer:

> Solution: Store A HASH OF the phone number - so those kinds of misuse can be detected.

If you don't need to store PII verbatim, don't store it verbatim.

> Who is to blame here? The Ads team that didn't check if the number can be used?

Yes. 100% yes. It's insane that we've normalized the idea that if you can physically get your hands on some data then that means you're allowed to do whatever you want with it. Anyone even remotely responsible working in advertising should be tracking provenance of the data they're using. I've heard all sorts of excuses about why this isn't practical, but with each year that passes I find them less convincing, and I've finally reached the point where I reject those excuses outright. If you don't _know_ you're allowed to use some PII for marketing, then you _can not_ use it for marketing. It's that simple.
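As a rough illustration of the "store a hash" approach (the function names and pepper here are made up for the example), a keyed hash still lets you spot the same number unlocking thousands of accounts without ever keeping the raw number around for other teams to repurpose:

```python
import hashlib
import hmac

# Hypothetical sketch: store a keyed fingerprint of the phone number instead
# of the number itself. Duplicate use across accounts is still detectable,
# but the raw number never sits in the user profile.
PEPPER = b"server-side-secret"  # made up; in practice kept outside the database

def normalize(number):
    # Crude E.164-ish normalization so formatting differences don't matter
    return "+" + "".join(ch for ch in number if ch.isdigit())

def phone_fingerprint(number):
    return hmac.new(PEPPER, normalize(number).encode(), hashlib.sha256).hexdigest()

# Same number in different formats maps to the same fingerprint:
assert phone_fingerprint("+1 (555) 867-5309") == phone_fingerprint("1-555-867-5309")
```

Using HMAC with a secret pepper (rather than a bare SHA-256) matters here: phone numbers have so little entropy that an unkeyed hash can be reversed by brute force in minutes.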


The person or team who gave green light to storing phone numbers without giving the data appropriate access controls and protections to avoid it being used for anything that is not strictly related with security and fraud control. A system such that if the Ads team tried to access it, they would get an access denied error, or maybe a red alert warning stating that this field cannot be used for marketing operations.

If a system to provide such protections didn't exist, then that system should have been implemented before agreeing on collecting phone numbers. Again, whoever didn't have that insight, should be the one to blame.

(all this is just wishful thinking on my part, of course)
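A toy sketch of that idea (all names here are hypothetical): tag PII with the purposes it was collected for and fail closed on any other use, so the ads team gets an error instead of a phone number:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PII:
    """A value tagged with the purposes it was collected for."""
    value: str
    purposes: frozenset = field(default_factory=frozenset)

    def use(self, purpose):
        # Fail closed: any purpose the data wasn't collected for is an error.
        if purpose not in self.purposes:
            raise PermissionError(
                f"{purpose!r} is not among the collected purposes {sorted(self.purposes)}"
            )
        return self.value

phone = PII("+15558675309", frozenset({"security", "fraud-control"}))
phone.use("security")          # allowed
# phone.use("ad-targeting")    # would raise PermissionError
```

In a real system this check belongs at the storage/API layer rather than in application objects, but the principle is the same: access to PII is denied by default unless the declared purpose matches.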


Problem: Bank wants to make more money. Solution: There is possibly some money stored in the customer's bank account, use that.

Do banks run like that? No. Do banks sell your details and your address, or sign you up for random subscriptions without your permission? No. So why should Twitter get away with this?


Go to your U.S. bank and get a mortgage, and after a month of emptying buckets of junk from your new mailbox, come back and tell me with a straight face that the bank didn’t sell every scrap of data they had on you.


Huh? That’s pretty much exactly what banks do: They loan out the customers' money to someone else, taking an interest..


Yes! There are now privacy laws that explicitly require you to check whether the user has given consent for such use of the data.


The team responsible for regulatory compliance is responsible. If the team doesn't exist, then legal should advise management to establish one or provide training to every team, and then the teams are responsible.


Ad team, 100%. There are all kinds of laws around advertising. GDPR, CCPA, etc. And all the ad teams I've ever interacted with are well accustomed to consulting with the attorneys before doing something like using a brand new piece of personal data to do a brand new thing.


How about the lowest common leader?


Doesn't the Sarbanes-Oxley act (Sec 906) mandate that publicly listed companies' CEO&CFO assume at least some personal liability when they sign each required SEC statement (10-Q, annual report, etc)?

Something new for them: "CEOs are required to personally attest that they are responsible for changes to privacy practices and written policies."


> mandate that publicly listed companies' CEO&CFO assume at least some personal liability when they sign each required SEC statement

It isn't really a "personal liability" until there's a good amount of jail term involved.

Has any CxO gone to jail for violating SOX? If "no", then there's your answer on how useful it is.


Somehow they don't have to figure that out with felony murder. Everyone who participates and is aware is liable to the same punishment. So why not in crimes of bureaucracy? Why make sure people who are just following orders are free from punishment?


Because killing someone is usually pretty explicit in the obviousness of a crime being committed.

Filling out forms, designing product features, and implementing them can have each individual contributor mostly ignorant that anything could possibly be wrong with the request and the few people who do have some idea only have a small one which is plausibly deniable. The person who does get caught in those circumstances is usually just a scapegoat anyway.


That's why you investigate. But awareness should be enough. And if we start to have trouble proving awareness (maybe employees aren't aware of a settlement), just require in settlements that employees are informed.

For felony murder, you don't have to know that the person you're with is armed, intends to kill, or if you're driving the getaway car you don't even have to know that they have killed anyone at all. You're participating in something that you're expected to know is wrong, and you're punished for anything that results from the entire event. If this were like felony murder, the engineers that implemented it (assuming awareness) would be as liable for the $150 million as anyone else involved.


> Because killing someone is usually pretty explicit in the obviousness of a crime being committed.

> Filling out forms, designing product features, and implementing them can have each individual contributor mostly ignorant that anything could possibly be wrong with the request

That's got nothing to do with felony murder. Felony murder occurs when you participate in a crime with someone else and they accidentally kill someone, even while you aren't present.


Because felony murder is an outrageous US-specific injustice. The fact that you're guilty of murder in the US if you rob a store and the store owner shoots your friend defies common sense!

Because I know it matters to some people: this rule is used mostly against young black men.


Pick a C-suite exec or VP at random then. "Nobody, it's too hard to unravel the organizational structure" isn't really cutting it.


That sounds kinda like the Mafia or Yakuza. Take the fall, do time, protect the organization, get respect, get promoted. Many people would gladly do a few years in minimum security prison in exchange for million dollar salaries, etc.

While I do think that would be better than nothing, it could create its own set of bad incentives.


Next interview... "Says here you spent time in federal prison?" "Yeah, at my last job, I broke the law and directed others to do so, too."


Ah so you have experience breaking the law? We have a number of projects where your skills would be useful.


All decision-makers should go to jail in such cases. Then they would work harder on making blame clear.


"All" is a vague term.

So jail everyone at Twitter who isn't an individual contributor and is even tangentially involved?

This is how you escape consequences as an organization: obfuscation. Make what you've done complex enough that it's too difficult for a jury to decide who is responsible for what, prosecutors won't be convinced they have a case and will decline to pursue the matter.


I think the CEO should personally be on the hook for widespread organizational fraud, and in some cases should be held criminally responsible.


Yeah, some personal risk might justify some of the salary package. Sure, you're going to make $50M a year, but you might have to sit in jail for 20 years; sounds fair.


I don't think it has to be that hard; you just need to require that communication is preserved for companies over a certain size, e.g. meeting minutes, emails, etc. This is already the case for financial records and some employment records, and the case with politics ("but her emails!")

This way, a record can be subpoenaed if needed.

Don't keep records or don't have records of this particular decision? The person responsible for making sure the records are kept for that department will be in trouble.

There is some administrative "red tape" here, but it's not that bad, and much of these records already exist (or existed).

The problem is the political will to enact such a law; I agree that's not likely to happen.


"You have to keep a record of what happened during all employee interactions so that we can prosecute you some day" isn't exactly a likely-to-succeed plan. Already prevalent are coaching employees not to leave records of certain legally contentious topics.


I worked for a certain rainforest company and they specifically coached us on not leaving records or discussing certain subjects in any form of written communication.


Early in my career I worked at a company that stored customer credit card numbers in clear text in the database. When I learned of PCI compliance, I naively emailed my manager some links on the topic and a note that we might not be in compliance.

I was promptly rebuked for leaving a paper trail.


Yes, this was addressed with "don't have records of this particular decision? The person responsible for making sure the records are kept for that department will be in trouble."


> isn't exactly a likely-to-succeed plan

It doesn't have to gain popularity, be well-liked, or be agreed with by the companies. If it is law, they must follow it.


Hope they keep an eye on Truth Social, which requires your phone number even to use it.


It would be more accurate to say that organizations avoid this by shifting blame. It happens just as much, if not more, in government, NGOs, etc.


It says they cannot use the data commercially, only for the stated purposes (security, recovery).

In practice, it'll be hard to enforce, though.


Why not increase the punishment by having random audits, like the government does for drug checks? And make the company pay; that would be an even bigger deterrent than just a fine...


>They locked people out of their accounts and demanded phone numbers for "safeguarding," then used them for targeting in direct contravention of a previously negotiated agreement with the FTC. If that doesn't rise to criminality, the fraud statutes need to be updated.

It's bad, I agree.

But jail? That should be reserved for the most heinous crimes and criminals.


I’m basically a prison abolitionist, but I don’t really see corporate fines as any real kind of justice at all either, for big or small things. This is just putting a price tag on the behaviour.


Disagree.

White collar malice, even more so in tech, has an enormous blast radius. It affects a giant amount of people. Sometimes in small ways, but small suffering multiplied by a huge number is a large negative impact on society.

If one could trace down the single person (most) responsible for the offense, I would fully support jail time. It doesn't have to be long. Maybe 3 months for a case like this. And a note on the criminal record.

So that they can FEEL it. Right now they hide behind a corporate shield and suffer no personal damage nor reputational damage.


"Doing time" in this type of crime needn't be actual imprisonment. The accused individual(s) may for example face being barred from holding positions of responsibility for a set amount of time, be given a suspended sentence, or both.


Why is smoking weed worthy of jail time but swindling millions of people isn't? We live in some crazy era of corporate decision-makers being above the law.


One fundamental purpose of a "company" is to be an abstraction that obfuscates moral responsibility for individuals' actions.


At first I thought the fine sounded excessive, but after thinking about it, it seems far too low. I'd like to know the people who were specifically responsible for this scam.

Did Jack Dorsey implement and endorse this scam?


There are a lot of details I don't see about this, even in the order itself. How did the FTC know Twitter was abusing this data? Was there a whistleblower who notified them, or did they break down the doors and start scanning Twitter's internal documents? Were they authorized to dig into Twitter's internal processes as part of the initial security investigation?


For context, Twitter's revenue in 2021 was $5 billion, on which they made a loss of $220 million.


What a great buy...

I saw a tweet the other day that said they can't think of a worse purchase since Bank of America bought Countrywide for $40 billion.

TWTR has traded flat since its inception in one of the greatest bull markets of all time.


HP bought Autonomy which turned out to be a total fraud.


Cuban sold Broadcast.com to Yahoo for almost $6B. $10K per user, instantly worthless.


Which is bizarre, in itself.


When I think Twitter, I think of a service that costs $5.2B to run.


I can't even fathom how it's possible to use $5B/yr to run Twitter.

So, I co-architected the Opera Mini infrastructure. It peaked at a similar number of users (250-300M monthly active users). Sure, Twitter is much more DB-intensive, but transcoding web pages is pretty CPU intensive too, and typically we transcoded every single web page for them. Opera Mini was their only browser.

Twitter is spending $5B / 300M ≈ $17 per user per year.

I believe that from public sources, it's now possible to deduce that we spent less than a 1/100th of that per user/year, almost a decade ago.

Since we didn't have crazy money, we optimized things at every step. Or, well, mostly avoided doing stupid stuff.


That re-implementation of solved problems in engineers' favorite languages/frameworks does not happen for free. It's pretty clear from the outside that a company like Twitter is jerked around by its engineers, who have far too much autonomy to implement things their way instead of in the high-performance C/C++ that any company at Twitter's scale must use.

Twitter engineers endlessly tweeting about having JVM perf team, presumably for their low performance Scala codebase.

Here [1] is an example where Twitter's supposedly Silicon Valley-calibre engineers couldn't write a half-decent log management system and ended up using a vendor that otherwise only boring enterprises would use.

It reminds me of the ignoramuses at Yahoo who would claim that Google's frugality was useless and that they could do better by just buying from Sun/IBM/HP and so on.

1. https://blog.twitter.com/engineering/en_us/topics/infrastruc...


That Splunk post is a painful read. In my world, Splunk is well-known for being an overly expensive service aimed at sucking money out of enterprise/pre-software-style companies.


According to their financial disclosures for 2021: On income of ~$5.08B, they spent

- $1.8B as "cost of revenue" (costs incurred to run their business),

- $1.25B on "research and development",

- $1.2B on "Sales and Marketing", and

- ~$1.3B together on "general and administrative" (overhead) and "litigation settlement".

Then there are a bunch of small monies related to interest expense and income, etc etc.

They're spending huge amounts of money and could be profitable if they really wanted. I can't imagine what Twitter is doing with 1.25B in research. Elon could make Twitter profitable simply by cutting their research department.


Well, presumably that $1.25B is mostly engineering salaries, since it's "research and development" (for tax purposes, where they probably get huge write-offs).

Anyway, they need that org to get their $1.8B in "cost of revenue" down, which is presumably mostly the cost of massive server farms to store what are mostly text messages. Although these days, with all the machine learning etc. used to sell ads, they probably "need" all that hardware to run their models and can't just optimize it down to a higher-perf system painting web pages.


>mostly engineering salaries

If the salaries are towards specific operational roles they'd be listed under "cost of revenue". If the salaries were for general duties not specific to an operational role they'd be under "general and administrative".

Research and development needs to be for research and development. It could be for engineering salaries but those salaries would be towards research and development rather than operational duties.


General engineering usually falls under the "development" category. The details may be state/country dependent. That said, even in stricter jurisdictions I know for sure that companies tend to group their engineering resources in such a way that they aren't justifying individual engineers (except maybe WRT timesheet filing) rather a bunch of people working on a "project" which meets the definition all get lumped together for purposes of R&D.

Random google search, talking about how broad this can be:

https://www.mossadams.com/articles/2021/03/company-qualifica...


What's the point of a tax write off if you lose $200 million a year?


> I can't imagine what Twitter is doing with 1.25B in research.

…and development. I assume “R&D” includes most of the engineers?

Still, it could be a lot more lean for sure.


"Research and development" is engineering.


People forget the key word "premature" in the infamous Donald Knuth quote, and think all optimization is evil.


Is not doing stupid s%$t an optimization? (lol)

I'm reminded of the reddit articles from a few years back, when they were talking about moving to AWS and having to batch database requests to maintain database performance. Apparently, at the time, they were literally sending tens or hundreds of thousands of database queries for each page load by a logged-in user, because each comment collected all its up/down votes and added them up, rather than having a per-comment karma attached to the comment.

This is what happens when you hire a whole bunch of recent grads that frankly have no idea how to write code, and think they are the smartest people on the planet when it comes to distributed systems.
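The standard fix for the pattern described above is denormalization: keep a running score on each comment, updated at vote time, so a page load reads one cached value instead of aggregating the whole votes table. A toy sketch (names hypothetical, not reddit's actual code):

```python
# Toy sketch of denormalized vote counting (illustrative only).
class Comment:
    def __init__(self) -> None:
        self.score = 0    # denormalized running total, read at render time
        self.votes = []   # raw vote log, kept for auditing/undo

    def vote(self, delta: int) -> None:
        """Record the vote once and update the cached score."""
        self.votes.append(delta)
        self.score += delta      # O(1) work at vote time...

    def render_score(self) -> int:
        return self.score        # ...so rendering is O(1), not O(len(votes))

c = Comment()
for delta in (1, 1, -1):
    c.vote(delta)
print(c.render_score())  # → 1
```

The write path does slightly more work so that the vastly more frequent read path does almost none.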


It's always premature to optimize until your company is failing, because if you aren't failing yet it means it's worked so far. You should always wait until your company is falling apart to do a full rewrite of your core product.


Exactly. When the business is failing is when there's lots of time and resources available to make your core product more efficient.


AWS instances don't pay for themselves

/s...mostly


There are some tongue-in-cheek answers here, but seriously: how does it cost that much to run?

That’s a ton of money for a website that is very text heavy with short/low quality videos and a largely fixed feature set.


People. Salaries are expensive, especially SV salaries. Then you need well-paid management for the extra heads, and group managers for them. IIRC (feel free to correct me, I'm not going to dig it out again) they spent >$1B on R&D itself... which is pretty much just a couple hundred engineers who (judging from the recent service changes) mostly did nothing.


If your first thought is that the fine isn't enough: often these fines go along with agreements to change business practices. In this case:

>In addition to imposing a $150 million civil penalty for violating the 2011 order, the new order adds more provisions to protect consumers in the future:

>Twitter is prohibited from using the phone numbers and email addresses it illegally collected to serve ads.

>Twitter must notify users about its improper use of phone numbers and email addresses, tell them about the FTC law enforcement action, and explain how they can turn off personalized ads and review their multi-factor authentication settings.

>Twitter must provide multi-factor authentication options that don’t require people to provide a phone number.

>Twitter must implement an enhanced privacy program and a beefed-up information security program that includes multiple new provisions spelled out in the order, get privacy and security assessments by an independent third party approved by the FTC, and report privacy or security incidents to the FTC within 30 days.


Many employees I talked to described years and years of trying to stop this but eventually the growth team took over. This is so sad.


Who benefits from these fines?

Will these fines end up being paid out to everyone who now needs to deal with a lifetime barrage of spam calls and texts?


It’s ok, they are going to get $1 billion from Musk, so they can afford it.

Joking aside, I am glad they got fined. These kinds of transgressions need to be dealt with publicly, and Twitter is a big enough entity to send a message that this is serious. Of course, I am sure every company that got phone numbers already abused them :)


These fines are meaningless and are just looked at by the company as the cost of doing business.


Cool. So the defense, "Facebook does much worse!" didn't fly? #justSayin


Didn't Facebook do something similar without any apparent repercussions?


Here's hoping they get to Microsoft fishing for phone numbers from Minecraft players via threats and blackmail (alleging unauthorized account access that doesn't actually happen).


Interesting: some wiggle room for Musk to play with...


Sigh. Yep. Don't ever give a company your phone number for 2FA. It's insecure anyway due to SIM swapping. Stick to FIDO (e.g. a YubiKey) or TOTP (e.g. Google Authenticator).


Yep, it's almost more accurate to describe using phone numbers for 2FA as being anti-secure, not just insecure. That's because it's effectively no better than having no 2FA and it's possibly even harder to detect when your account has been compromised by a SIM swap. And many companies that use phone numbers for 2FA also allow resetting one's password via that phone number. It's really just a tragedy that companies do this, rather like when login screens prevent copy/paste.

If you're ever prompted to add a phone number to your account on some web service for "extra security", just click "remind me later" or "skip" as many times as possible.


Exactly. I did tell them many times before [0], [1].

They just won't listen. So give them a fine instead, that will make them listen.

The second 'wake up call' after the last one I've seen today: [2]

[0] https://news.ycombinator.com/item?id=29264937

[1] https://news.ycombinator.com/item?id=30010434

[2] https://news.ycombinator.com/item?id=31510868


Actually, it's more secure to use an NFC YubiKey with their app than Google Authenticator for TOTP, because the key lives in the YubiKey's secure element rather than the phone's.


The tradeoff is usability though. I can have a TOTP code stored on two separate phones in two separate locations versus needing a yubikey always present.

To me, I'm too forgetful and dumb to not lose a yubikey, but I manage to not lose my phone.


I just started using that and would recommend it. When you set it up you add each key you own from the same QR code.


You probably want both, as well as printing out some backup codes, to avoid the risk of getting locked out when something breaks.


Unlike FIDO/U2F, TOTP is susceptible to phishing. Getting locked out is a serious problem and should probably be addressed with printed recovery codes.


My password manager has close to 200 records; that's a lot of printed codes, I'd need a filing cabinet.

What if I am in a different country, visiting family, and the yubikey is lost? Am I locked out of everything?


Yeah, I'm extra upset about this because I would have chosen TOTP if I'd been given the option, but only SMS authentication was available for 2FA for the longest time (until account takeovers, including jack's I believe, became such a big issue that they had no choice but to change that).


Or just get a phone and service provider who doesn't allow SIM swapping (e.g. Google Fi, etc.), since many more services only do 2FA with SMS than allow hardware authentication.


Any clear reason to go for Google for TOTP? As opposed to Authy or something else.


There's actually a very good reason not to use Google Authenticator.

They don't offer any backups (at least on iOS), and as a result, if you lose your phone, you are hosed. Google Authenticator also doesn't back up its files via iCloud like other apps do. I also just assume at this point that no one owns that app and that it'll never get backups, because that's how Google operates.

I've seen multiple people lose their TOTP codes this way and get locked out of their accounts. Or, in the simpler case: they buy a new phone, restore from backup, assume everything is peachy, send their old phone back, and then don't realize it until they open the app for the first time.

Use something with cloud backups for your safety.


Authy does not allow making a local backup (export), and it is fully closed source, so not really transparent. I wish there were better alternatives.


Raivo OTP on iOS


I suppose you're talking about automatic backups, but at least on Android you can manually "transfer accounts" to export to another device over QR codes.


On iOS, I do see this feature, but it claims it will move the account, as opposed to "copy" it, and so, if it is a backup mechanism, they are explicitly pretending it isn't one.


Although it is primarily designed and labeled as an export mechanism, I can verify that it does work as a backup mechanism. I regularly use it to sync up new 2FA accounts to a backup phone. Simply choose to keep the accounts after exporting.


I got scared as hell a few years ago when an update bricked the app, so launching it caused it to immediately crash. Fortunately, reinstalling the app fixed it without losing any data.

But since then I started actually backing up my recovery codes, and whenever I create a new account somewhere, I set up 2FA on three separate apps on my phone just in case.


They offer one-time backup codes that can be used if a device is lost. I'm not sure if this is Google or the site, but for every login where I've set up Google Authenticator I have copied the backup codes to my password manager for that account. I'd agree that a lot of people might not do that however.


Google the site has backup codes for logging in to Google. Completely unrelated to the data in Google Authenticator.

Google Authenticator used to have no way to get the data out, but does now have an export. It still has no normal backups.


Luckily, services make it clear that you should print backup codes in case you lose 2FA.

As for Google Authenticator on Android: there is an export function that generates a QR code for all tokens. You can't screenshot it, but you can take a photo with a different device and print that.


There is a clear reason not to use Authy, which is that making your data portable is extremely annoying. No export function. I ended up writing a 3rd party Authy client just to get my TOTP keys out.

For iOS users, I cannot say enough good things about https://apps.apple.com/us/app/otp-auth/id659877384. Author is responsive, encrypted backups, portable data format.


For Android users, Aegis provides many of these benefits (as does andOTP). Both are open source; I like Aegis a bit better.


andOTP has just worked for me across multiple device migrations for years. Encrypted backups to a git repo for mobile files managed with MGit.


Thanks. I have been looking for a replacement for Authy for quite some time because of the missing export function.


That's a good one. Thanks.


Those were examples I thought people were likely to recognize, not vendor recommendations. I edited for clarity.


No, it's all the same.


You do realize that TOTP is a standard that doesn't require you to use either, that you can use the same secrets for any TOTP app, right?


You did read my comment where I suggested "Authy or something else", as in any TOTP app, right?


Elon fixes everything. He is looking for reasons to back out and if this was not disclosed before, he found one more reason.


Interesting. Is this something that has been an ongoing investigation at the FTC? The timing seems extremely suspicious.


Yes. This is one of several similar cases the FTC has pursued against social media companies over the last few years. I believe Facebook had a bug that inadvertently outed their misuse of the 2FA phone numbers for advertising and that was what initially put the practice on the FTC’s radar. Around the start of the pandemic the FTC actually did a study[1] looking at the “secondary uses” of security data. They registered for 2FA with a bunch of websites and then tracked how many non-2FA related calls and text messages the phone numbers received. While the experiment was a great idea I think the way they structured it leaves much to be desired.

[1][PDF] https://www.ftc.gov/system/files/attachments/office-technolo...


That's a settlement they reached recently on things that happened at least 2 years ago. Just to be clear.


Twitter wanted my phone number once, even locking my account and demanding my phone number to unlock it. It felt like blackmail, and I threatened Twitter with a GDPR request, not only requesting my data but also the algorithms used for automated decision making.

As soon as my account got restored I let their DPO know that I don't insist on the fulfillment of the GDPR request any more. And that I will follow through if Twitter pulls off this kind of blackmail again on me. Haven't had this issue any more.


> [Twitter] agreed to an order that became final in 2011 that would impose substantial financial penalties if it further misrepresented “the extent to which [Twitter] maintains and protects the security, privacy, confidentiality, or integrity of any nonpublic consumer information.”

They violated that order and that's what the fine is for.

I was wondering what kind of authority the FTC has to impose fines based on what as a European I'd consider a GDPR violation (in the USA, this california privacy act thing sounds like it would be the nearest thing, but that's not federal so that couldn't be it). But what was this order about? Clicking the reference in the article:

> The FTC’s complaint against Twitter charges that serious lapses in the company’s data security allowed hackers to obtain unauthorized administrative control of Twitter, including access to non-public user information, tweets that consumers had designated private, and the ability to send out phony tweets from any account including those belonging to then-President-elect Barack Obama and Fox News, among others.

> Under the terms of the settlement, Twitter will be barred for 20 years from misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information

So this wasn't about privacy initially, the FTC's attention came from allowing some public figures' accounts to be hacked, after which it imposed some broad set of requirements, which are broad enough to now include this privacy issue. Not a bad outcome, but interesting turn of events to get the FTC to act as data protection authority.


> barred for 20 years from misleading consumers

What is that time limit for?

It seems badly expressed to me. I suspect it means there will be a harsher punishment if this happens again within 20 years.


Not a surprise, they were really insistent on getting a phone number for the account.


The fines should be paid to the Twitter users.


Good, but it should be 10x that amount.


The FTC really ought to take a leaf out of GDPR's book, and start fining truly punitive amounts: https://www.tessian.com/blog/biggest-gdpr-fines-2020/#:~:tex...

$150M for a repeat offense affecting millions of users is paltry.
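For scale: GDPR's Article 83 caps fines for the most serious infringements at 4% of worldwide annual turnover. Applied to Twitter's ~$5.08B 2021 revenue (a figure cited elsewhere in this thread), even the GDPR ceiling alone would exceed this fine:

```python
revenue = 5.08e9           # Twitter 2021 revenue, approximate
gdpr_cap = 0.04 * revenue  # GDPR Art. 83(5): up to 4% of global turnover
ftc_fine = 150e6

print(f"GDPR ceiling: ${gdpr_cap / 1e6:.0f}M vs FTC fine: ${ftc_fine / 1e6:.0f}M")
# → GDPR ceiling: $203M vs FTC fine: $150M
```

And that 4% is a per-infringement ceiling for a first offense, not a repeat one.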


Why aren't these fines a percentage of revenue, or a multiple of the number of people affected?


Instagram used to do this too


Yeah but where does that water flow. We have guns, we have gravels, but where does it go!?


Is this something unique to Twitter or is this just Biden or someone else trying to stop the Elon deal?


A fine is a cost. It's quite possible that Twitter made more than $150m in doing this.


I truly doubt this was a calculated tradeoff.

It was almost certainly a fuckup where the phone # was mistakenly stored in a shared schema, and someone on the ads side saw it and decided to use it for targeting, knowing nothing about 2FA or how it got there. This probably only affects a tiny fraction of their users.


>I truly doubt this was a calculated tradeoff.

Potentially, sure.

>It was almost certainly a fuckup where the phone # was mistakenly stored in a shared schema, and someone on the ads side saw it and decided to use it

How is this an "almost certainly"? Do you have additional information you'd care to share on why you think so? If this were the case, it would point to insanely sloppy policies, procedures, and implementations.

>This probably only affects a tiny fraction of their users.

Why?


> This probably only affects a tiny fraction of their users.

Because most users provided their phone number when signing up, not just when setting up 2FA. Twitter has always been phone-centric (the first app was literally just sending SMS messages).


I don't think Twitter makes money at all.


They make ~$5B a year. They just spend all of it and then some.


The loss would have been bigger tho.


Clownish. If I were the CEO, some folks would have already been fired.


Several people _were_ fired recently. We don't know why, so maybe. But probably not.


Personally I think the CEO would have known about this happening and turned a blind eye until it became an issue. I have nothing to back this up though. Just a pessimistic take on corporate culture.


Would you fire yourself?


That's called resignation. Yeah, you would do it if you respect your company and your users. Call in someone more mature.


Clearly anyone with the ethical chops to consider resigning over this would be a net-loss to Twitter if in fact they resigned. Is this some sort of named paradox?


> would be a net-loss to Twitter

Would it? It seems to me like unethical behavior is always more profitable.



