I don't have the most optimistic outlook for this having any impact, but I really hope this sets a precedent for limiting the use of dark patterns with which companies try to tie your identity to a phone number. I think the total sum for this fine is rather myopic: it ignores the long tail of possible future data leaks and the impact they might have on the people behind the affected accounts.
I created my current Twitter account a few years ago and it remained dormant for a while. It was flagged as "in violation of our policies" despite my never having tweeted or used a handle or nickname that would cause offense to anyone. In order to resolve this, I had to enter my phone number to "secure" my account. I don't know what process triggered this review, but I'll be damned if it didn't smell like an easy way to associate an existing marketing profile with my Twitter account. Of course, it's vitally important to profile a service I used to keep up with industry news and post about Goban puzzles.
I've also run into similar patterns on Discord and similar platforms; "Oops! Something suspicious is happening with the account [you literally just created]. Please add a phone number to your profile to proceed."
Although I follow a reasonable set of practices around identity/password management, I usually architect my risk profile with an "I don't care if I lose this account" approach. If that statement isn't true, then I will happily apply all of the security measures available. However, it seems like the idea of creating "I don't care" accounts is becoming increasingly difficult as we continue to invest in user marketing analytics and lower the barrier to entry for these types of technologies that do not have the consumer's best interests in mind.
I tried to create a Facebook account recently to join a group. I used an Apple Hide My Email address. Within a day or so the account was blocked, and I was required to provide not just my mobile number but a photo of me.
I did try uploading a celebrity photo instead, and of course it didn’t work. But I was shocked at the need to post a photo of myself. That is way past my creepy threshold.
As I understand it, if they're unable to tie you into their "social graph", you won't be able to use the service. I tried similar a couple of years ago when I begrudgingly wanted to participate in a group that insisted on using FB as the communication platform. Couldn't get an account to stick unless they had some way to figure out who I was. Eventually just stopped trying.
I don't think that's strictly true. I created an account a few months ago with a made-up name, throwaway gmail address and a (cropped) photo from thispersondoesnotexist.com, which is still working fine.
Not a user, but doesn't MercadoLibre operate as a licensed financial institution in at least a portion of their vertical where such a requirement would not be considered unreasonable?
I can certainly see them playing dark pattern games with sensitive privileged data from one business cross-pollinating into unrelated businesses by way of blind user agreement acceptance though.
EDIT: From their most recent 10-K filing[1], under Government regulation:
In addition, Mercado Pago as a payment institution in Brazil is subject to:
...
(iv) Data Protection Law: In August 2018, Brazil approved its first comprehensive data protection law (the “Lei Geral de Proteção de Dados Pessoais” or “LGPD”), which became applicable to our business in Brazil since August 2020. In December 2018, the former president of Brazil issued Provisional Measure No. 869/2018 which amended the LGPD and created Brazil’s national data protection authority (the “ANPDP”). We have created a program to oversee the implementation of relevant changes to our business processes, compliance infrastructures and IT systems to reflect the new requirements and comply with the LGPD. The LGPD establishes detailed rules for the collection, use, processing and storage of personal data and affects all economic sectors, including the relationship between customers and suppliers of goods and services, employees and employers and other relationships in which personal data is collected, whether in a digital or physical environment.
(v) Secrecy rules: In addition to regulations affecting payment schemes, Mercado Pago is also subject to laws relating to internet activities and e-commerce, as well as banking secrecy laws, consumer protection laws, tax laws (and related obligations such as the rules governing the sharing of customer information with tax and financial authorities) and other regulations applicable to Brazilian companies generally. Internet activities in Brazil are regulated by Law No. 12,965/2014, known as the Brazilian Civil Rights Framework for the internet, which embodies a substantial set of rights of internet users and obligations relating to internet service providers, including data protection.
Law No. 12,865/2013 prohibits payment institutions from performing activities that are restricted to financial institutions, such as granting loans directly. In November 2020, the BACEN approved the application filed by MercadoLibre Inc. for authorization to incorporate a financial institution in the modality of credit, financing and investment corporation (SCFI). In light of the authorization granted by BACEN, we incorporated a new entity (Mercado Crédito Sociedade de Crédito, Financiamento e Investimento S.A.), which operates activities related to the granting of loans and obtains better funding alternatives for our business.
...
The BACEN is also implementing the Brazilian Open Banking environment, to enable the sharing of data, products and services between regulated entities — financial institutions, payment institutions and other entities licensed by the BACEN — at the customers' discretion, as far as their own data is concerned (individuals or legal entities). The Open Banking implementation has been gradual, through incremental phases that take into account specific information/services to be shared, and Mercado Pago is a participant of the Open Banking system since February 2021, when its phase 1 started.
I did think of that. In the end I really wasn't trying very hard. I left facebook many years ago and I wasn't really keen to rejoin. I found the car I was looking for elsewhere.
> I've also run into similar patterns on Discord and similar platforms; "Oops! Something suspicious is happening with the account [you literally just created]. Please add a phone number to your profile to proceed."
This happened to me the other day, but for an account which is years old, owns a server with 1200+ members, and... is already logged into Discord on another web browser.
I usually use Firefox for chatting, and Chrome for voice chat (since it works better than FF for that). So I usually have FF permanently open, and I log into Discord through Chrome from time to time to voice chat. So this one time I open up Chrome (while, again, I'm simultaneously logged in through FF and can fully use my account), try to log in, and... I get a "verification required" screen.
They allegedly think I'm an abusive user, so they're preventing me from logging in to Discord without verifying my account, but they're simultaneously letting me fully use my account? How does this make any sense? Any abuse I can do I can already do, because I'm already logged in, from exactly the same computer.
I wrote to their support asking what's up with this, and they basically told me they don't care, and that I will be required to verify. Of course they won't tell you why they're doing it, because "security".
I regularly receive crypto scam DMs on Discord and they're seemingly unable to block those kinds of accounts, but they sure as hell are good at bullying legitimate users like me.
Fines are generally made small enough not to be overturned, so the agency doesn't have to waste money litigating them, while also typically being large enough to change future behavior. They're not really retributive so much as cold calculations meant to get companies to do what they're supposed to do.
OK, but Twitter is a special case. After the breach the FTC brought charges against Twitter and mandated that the company implement a more robust security program.
I've vehemently avoided Facebook for several years, but recently had to create an account for a work thing, and seconds after creating it I had to supply my mobile number because of something-something suspicious. So this is me contributing that anecdata.
Same here, although after a year of "come back come back!" emails almost daily (that went straight to spam for some reason) I tried again and got my account working.
Seriously, Twitter: go suck an egg. With so much money, how can you betray the trust of your users? I get daily calls from car warranty scams because of stuff like this.
This is an interesting part to me: "[T]he new order[0] adds more provisions to protect consumers in the future: ... Twitter must provide multi-factor authentication options that don’t require people to provide a phone number."
I would like to see this be a more broad-based rule. No, I am not moved by "SMS is easy" or "getting a number that can receive SMS is harder for scammers to do in bulk." If you must, give users the choice but not the obligation to hand over a mobile number.
To further expand on this. 2FA should not rely on SMS at all. It should be an option but not the default one. An Authenticator app should be the default. I know we assume everyone has a cell phone but that’s not the case.
Authenticator apps aren't much better. Look at their privacy policies. Installing Microsoft Authenticator means giving them your location data 24/7 and allows them to collect even more data on you than giving Twitter your phone number did. Do you really think they aren't going to use that data for anything else? I don't believe that any more than I believed Twitter.
Personally, I'd rather deal with the hassle of carrying around multiple hardware tokens than give companies a continuous stream of data about my personal life to use against me.
Afaik, TOTP is standardized, so you should be able to use any authenticator app for 2FA. Idk about Microsoft, but I haven't encountered any service that doesn't allow you to bring your own TOTP app.
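The standardized algorithm (RFC 6238) is small enough to sketch. Here's a minimal, illustrative Python implementation - not production code - checked against the test secret published in the RFC:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second time steps."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Base32 encoding of the RFC 6238 test secret "12345678901234567890"
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, for_time=59))  # "287082", matching the RFC test vector
```

Any app (or script) holding the same seed produces the same codes, which is exactly why the seed, not the app, is what matters.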
Bitwarden (and probably others) store the TOTP secrets and are persistent. This isn't a recommendation per se, I'm not sure how I feel about it being stored in the cloud, but it is a bit friendlier.
Sure, but this still requires a certain level of "awareness" for this technology.
It's sort of the same problem PGP suffered from. It's technically great, but cumbersome for non-technical people to use (particularly in a safe way), so people will avoid it.
2FA needs to be simple and easy to achieve mass adoption.
Making people install special apps for just one service, or find out one day they're permanently locked out of their facebook account (or far worse) is simply going to hurt adoption.
If your grandmother can't make it work on her own, then it's not good enough. I'm not advocating SMS is the best option for 2FA, I'm just pointing out the alternatives are currently not up to snuff.
I think this is good advice but it also shows why using TOTP as a default 2FA mechanism (instead of SMS) is a tough sell. How many people are set up to store a TOTP seed in a location other than their authenticator app? How many people even know what a TOTP seed is? I would wager that the vast majority of non-HN readers think of TOTP as a QR code that you scan into an authenticator app, if they are even familiar with authenticator apps.
SMS, for all its security shortcomings, is at least something that the vast majority of people understand already.
> SMS, for all its security shortcomings, is at least something that the vast majority of people understand already.
But of course SMS suffers from the same problem as naive use of TOTP: lose your phone and you're locked out of every account you have.
So in the worst case, TOTP is as bad as SMS. But, with some awareness/education TOTP is far superior if the user doesn't fall into the trap of attaching the TOTP seed to a phone.
i.e. for the aware user, TOTP is far better. For the naive user, TOTP is no worse than SMS. Thus, always favor TOTP.
Email would be preferred. SMS shouldn’t be the default. If I lost my TOTP tokens, I should be able to go through a tougher path with an email verification step to get in to redo my tokens. What I don’t want is for them to send me an SMS to verify me. What if I’m in a different country? What if I don’t have cell service? What if I don’t have access to my phone and that’s why I’m rotating all my stuff?
No, it's tied to the app because the initial secret is destroyed after you set it up. Every single Authenticator App I've used (which is not all of them admittedly), requires manual backups - typically in some printed form.
All of my other apps automatically back themselves up, or Apple/Google backs things up for me. When I get a new phone or wipe my phone... after logging into all my account I fully expect my Authenticator app to show up on my home screen and have all my codes in there exactly as I left it before.
This is a huge pitfall for the unaware... you will lose all of your codes, and potentially access to whatever services or things they were protecting.
Authy, 1password, bitwarden, and others back themselves up. If not having a cloud backup is a negative for you, pick a TOTP app that does have it - it’s not a failure of TOTP that the few apps you’ve used don’t back up (or you aren’t aware they do).
> No, it's tied to the app because the initial secret is destroyed after you set it up.
TOTP is not tied to any app. When you set it up, save the TOTP seed in a secure place that you control. There is no need to rely on any app, which would be too fragile to consider.
This is good advice, and I will look into Bitwarden for myself personally, but this isn't a great solution for non-techies... which is the problem with anything that is not SMS 2FA.
We all agree SMS 2FA is not as secure as we'd like it to be... but no alternative exists. It's the classic sliding scale between usability and security. The most secure system is one you cannot use... and the most usable system is one with no security. We need something that is very usable, and still secure... perhaps a tall ask but that is indeed what we're after.
Until then... regular people will continue to use SMS for 2FA. We should be happy people are at least comfortable with SMS 2FA instead of not using 2FA at all.
i agree that the authenticator app stuff is fraught for the average user.
> No, it's tied to the app because the initial secret is destroyed after you set it up. Every single Authenticator App I've used (which is not all of them admittedly), requires manual backups - typically in some printed form.
i scan the QR codes with a normal code reader, and then put the information into keepassxc. i can view the secret, generate codes, do whatever, and it's all with decent open source stuff and stored in a file i can back up.
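for reference, the QR code you scan is just an otpauth:// URI, and its secret= parameter is the seed worth backing up. a quick sketch of pulling it out (the account name and secret here are made-up examples, not from any real service):

```python
from urllib.parse import parse_qs, unquote, urlparse

# Made-up example of what a TOTP enrollment QR code typically decodes to
uri = ("otpauth://totp/Example:alice%40example.com"
       "?secret=JBSWY3DPEHPK3PXP&issuer=Example&digits=6&period=30")

parsed = urlparse(uri)
params = parse_qs(parsed.query)

label = unquote(parsed.path.lstrip("/"))  # account label, e.g. "Example:alice@example.com"
seed = params["secret"][0]                # the base32 seed - this is what to back up
print(label, seed)
```

paste that seed into any TOTP-capable manager (keepassxc included) and it will generate the same codes as the vendor's app.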
Your authenticator apps won't be backed up. They require you to export them to a QR code or some other printed format... then "restore" them once you set up your new phone.
Probably a security policy thing more than a technology thing... but the result is the same. TOTP is dangerous for the wrong user.
For the past several years both macOS and iOS have TOTP built into the password manager. It’s non-obvious how to set it up and doesn’t auto-prompt readily like password management does, but I’ve moved all of my TOTP over and have a backup, it’s synced to all my devices, and I don’t need a dedicated TOTP app.
Steam is another big one. They require you to use the Steam mobile app and it's the only way to do 2FA - no QR code. I've since dropped 2FA for Steam altogether.
Actually, that option has existed forever, since even before the app-based MFA.
However, if you use the "less secure" email MFA then steam places limits on your account that don't exist with the app MFA, like a forced delay on executing trades.
If you've got the steam mobile app for Android for 2fa and want to move to a different app that supports steam 2fa (aegis, winauth, etc), use steam auth on multiple devices, or simply move to a new device without a temporary trade block, version 2.1.4 of the steam app will allow you to perform adb backups; Android backup extractor will allow you to convert the backup to a standard tar file to extract the secrets if you want to.
Unfortunately, Sendgrid and other users of Authy with no alternative 2FA systems in place lock you into the Authy app or SMS as the fallback. There are some very limited workarounds for this, but they still require you to have the Authy app.
———————
On a recent find apparently Authy (the app not the sms fallback) has a weird, uh, “feature?”, where my 2fa, for example, for Sendgrid will unlock all of my Sendgrid accounts, which I personally find mildly concerning.
If you load your Sendgrid Authy 2FA on a rooted android phone, you can extract the TOTP secret that powers it under the hood and put it in Bitwarden like you prefer.
Ultimately with any service you’re only protected by your contract and the PR value of a breach of trust. Unless you’re using an open source app and rolling your own sync, an app where trust is paramount (1Password), or one where a misstep is a huge media hit (Apple), you’re at the mercy of that company.
Microsoft, fwiw, probably uses location to spot fraud and is unlikely to breach user trust imo.
I have. I worked for an enterprise that used OneLogin and could only use the OneLogin Protect app for 2FA. I thought 1Password was broken, but I tried a different app with my phone camera and it said the QR code was invalid.
So don't use Microsoft Authenticator. There are many options without the privacy problems with the MS App (which, IMO are overblown, but whatever). Go run your own if you want to be absolutely private. I'm happy with 1Password for managing it.
Are you using Microsoft Authenticator in a corporate environment/profile? I just checked my personal install (Android) and it does not require any permissions (location is denied).
Location
- precise location (GPS and network-based)
Contacts
- find accounts on the device
Storage
- read the contents of your USB storage
- modify or delete the contents of your USB storage
Camera
- take pictures and videos
Identity
- find accounts on the device
- add or remove accounts
Other
- receive data from Internet
- run at startup
- draw over other apps
- prevent device from sleeping
- create accounts and set passwords
- view network connections
- close other apps
- control vibration
- use accounts on the device
- full network access
As with most (all?) Android apps, those permissions require user consent; Camera, "Files and media" and Location are all set to "Not allowed" on my device. From what I can tell from Microsoft's help page, location may be a requirement of a work/school account, and as far as I can remember I've never been prompted for the location permission - it's possible I denied access if I was, but the app works without it.
From Microsoft's Authenticator help page:
"You will see a prompt from Microsoft Authenticator asking for access to your location if your IT admin has created a policy requiring you to share your GPS location before you are allowed to access specific resources"
There's one big data leak which Android/iOS deliberately don't let you control: internet access. TOTP apps don't need it, and yet.
The Microsoft app does have a mode which uses the internet to push a message saying "Is this you logging in?", which is weaker than TOTP but feeds into their "AI threat detection engine" mantra. It seems to fall back to TOTP if there is no network.
The problem with android is that it's designed to leak your data like a sieve, so permissions are overbroad and all or nothing. Most people will accept any and every prompt for a permission they're told is required in order to use the app, even when the app doesn't always need it to function.
MS is clearly using this to their advantage and asking for everything they can provide even a thin justification for, but even if you're just giving them a fraction of what they're asking for it's still far worse than handing over a cell phone number. My work considered requiring Microsoft Authenticator but after enough people balked at handing over so much data to MS they caved and we got simple little keychains that do nothing but spit out numbers and can't collect our contacts, our location, or start listening using a microphone. It's hard to beat that.
There are free, open-source, and privacy-respecting options for TOTP 2FA that don't require a mobile phone plan.
You can use something like KeepassXC (desktop) or something like KeepassDX or Aegis (on F-Droid on Android) for your OTP authentication app to manage 2FA for Google, Amazon, eBay, Dropbox, etc. and there are other options as well.
Just wanted to add emphasis on Aegis. I've been using Aegis for Google, GitLab, PSN, domain management. No issues.
And it needs zero permissions (aside from camera, which is granted on an as-needed basis for scanning QR codes). It also works fine without ever having an Internet connection.
Vaultwarden has TOTP support built in, and there are like a dozen open source TOTP authentication apps out there. There's no reason you have to use an app that invades your privacy for TOTP.
Sadly, it uses rust nightly, making setup bleeding edge.
And node, meaning it's a security nightmare.
There are likely other options I guess, but for something at this level (keys to your, or a company's, kingdom!), I'd want to see a project with a history as long as your arm, loads of review, etc.
Ah, good point. I'm able to build Vaultwarden with stable Rust, so maybe it's just a requirement for development. Vaultwarden lifts web-vault from Bitwarden, which uses Node, but you aren't required to run it with web-vault.
They do ask on Android it seems. Not sure if this text is common across all apps seeking location permission.
> Optional App functionality, Fraud prevention, security and compliance
However, I'm not surprised such apps keep double standards between iOS and Android. Apple spanks (or spanks harder) the apps that ask for permissions frivolously or block functionality behind permissions unnecessarily just to collect data.
E.g., I use TrueCaller on iOS without giving it the Contacts permission, but on Android the app's features are blocked without it. Not sure about now, but earlier Ola/Uber didn't work on Android without the location permission, while on iOS they did and still do. Many such examples.
> Authenticator apps aren't much better. Look at their privacy policies.
For the most part, "authenticator app" means TOTP, which isn't proprietary.
Which is beautiful because you don't actually need any app for it. Just save the TOTP seed. There are plenty of open-source implementations to compute the one-time code when you need it.
That's not the problem with SMS for 2FA. The problem with SMS for 2FA is that cell phone accounts were never intended to serve as what amounts to a high security authenticator service, and cell phone companies are resistant to this newfound 'responsibility', somewhat understandably so.
By default, someone can call up your cellular provider, claim to be you, pass trivial to no security questions, and request a replacement SIM be mailed out, or that your number be ported to another device. Or they can slip a bit of cash to any employee who works at any cell phone store that sells service for your carrier.
SMS 2FA isn't better than just a password. It's objectively worse, dramatically increasing the attack surface. Compromise someone's cell phone account and you are virtually guaranteed access to their bank/retirement/investment accounts, email, social media, etc. And they are virtually powerless to do anything about it for at least a few hours while they scramble to, say, get phone service working again and rush to contact everyone they can think of. By the time you're able to even get to your bank to talk to a branch manager and show all sorts of proof of identity, your accounts could be long since cleaned out.
Some providers finally are offering secondary passwords for porting/SIM replacement, that sort of thing. Absolutely call them and request your account be locked down as much as possible, ask to specify a secondary password, etc.
> It's objectively worse, dramatically increasing the attack surface
Any 2FA - no matter how weak - should in theory not be weaker than no 2FA. In practice of course these things can often be used as the only factor to "recover" access to an account so yes, weak 2FA like SMS can make things worse.
My car is old enough to just have onstar which is physically disconnected. My phone I avoid adding personal data to (outside of messaging), I limit my browsing, and I've done what little I'm allowed to in terms of locking it down and removing unwanted features. Ultimately though, it's a necessary evil I'm still hoping to find a solution for.
> But I gotta ask. How are they using your personal data against you?
the answer is that they'll use your data against you in any way that they can if it works to their advantage in any way.
Companies don't care about you and your needs; they care about themselves and making money. The reason there is a multi-billion dollar industry around the collecting, buying, and selling of even the most mundane aspects of your life is that companies have seen that all that data can be leveraged against you to give them money and power, and one way or another that usually comes at your expense.
Often they use the data they collect to manipulate you. Maybe they want to get you to buy something you wouldn't otherwise, maybe they want to shape your political opinions. Maybe they just sell your data to others directly, or they use that data to make it easier for others to exploit you.
It doesn't matter if it's Facebook selling your data to Cambridge Analytica so they can try to swing an election, a group of activists buying up lists of people who have visited abortion clinics so they can harass them, or a data broker letting people buy lists of individuals with low IQs and poor education, or of people likely to suffer from dementia, so they can be targeted with scams: it's all using the data you barely noticed you were handing over.
Even when it's not intentional, algorithms are constantly searching for ways to exploit you in the moment you're at your weakest. They can detect when someone who is bi-polar is entering a manic phase and push airline tickets to them, since people in that state tend to make last-minute travel plans. They can detect when you're tired, heartbroken, or under a lot of stress and anxiety and target you aggressively at those times, using one trick after another to find whatever works best (both using what has worked for others like you and tailoring their methods to you individually), and they do it all without ever being explicitly programmed to. The algorithms just maximize for results, and the ends justify the means while giving corporations plausible deniability for even the most egregiously exploitative means their algorithms employ.
In the US, companies like Google, Microsoft, Facebook, your ISP and cell phone company routinely turn data over to the state, with both three-letter agencies and local police departments sucking up all the data they can. It's a huge violation of our rights and a threat to our freedoms.
Even the most well-intentioned company collecting your personal data is likely not doing enough to secure that data, and whatever data they hold onto is just waiting to be abused when a less scrupulous person takes over, or to be handed over when the company is bought or absorbed into another, or to be sold should the company ever go bankrupt or become desperate enough for the money.
One way or another, the data you hand over will be used against you, and worse, you'll have no idea when it happens. Today people are turned down for jobs and denied rental contracts because of the data collected on them. They are charged more for the same products they buy online than what other people are paying. They are told a company's policies are one thing, while other customers are told they are something else. Their insurance premiums are being raised based on this data. Companies have even been shown to use this data for things as trivial as leaving some people on hold longer than others, but nobody is ever told the reason why those things happen. You have most likely already paid more than someone else, had your time wasted, been denied something, been misled, or been rejected based on the personal data you've handed over.
Nobody is using your data to protect you or put more money in your pocket. It is always used to serve someone else regardless of what that does to you.
I wondered the same thing about needing access to the camera and microphone. Turns out they justify it by saying it's for reading QR codes (as if phones had no other way to do this).
It's not just SMS that is the problem. Cell phone accounts were never intended to serve as high security authentication sources. It's laughably easy to take over a cell phone account, and once you do so, you likely crack open the victim's finances like an egg and leave them almost powerless to do anything about it before damage is done.
That's a _feature_ for the telcos. They want you to be able to walk into a store, grab a new phone, and walk out several hundred dollars lighter with your old number already working on your shiny new phone. They want to make it as easy as possible to do that.
They never signed up to be the gatekeepers to your digital life, and there's nobody paying them to do that. Expecting them to make their sales experience worse than their competitors just because some bank or crypto exchange can't work out a decent way to authenticate their customers is insane.
Not really, if I value it and would pay a higher price to use a telco that safeguards my number. Unfortunately, this (like much of security) is invisible to the consumer until it is too late.
It's not just that not everyone has a mobile phone; people who move between different countries change numbers and lose access to their old ones. I've been screwed over by this many times. You can't assume someone has the same phone number for their entire life.
My experience is very much to the contrary. My phone with a Microsoft Authenticator app died and I still cannot access my MS account. In a similar way, it took me some time to get my smartbanking apps running again.
If it was just SMS, I would move the SIM card elsewhere and hey-presto.
This. I regularly live in 2 different countries; Canadian cell plans are exorbitantly expensive when roaming, so I have 2 phone numbers, and it's extremely annoying when 2FA relies on said phone number...
Agreed. TikTok does this as well... I literally get spam calls just as I scroll through content there. Microsoft leaks numbers somehow as well, even on Enterprise service apps, as I used to get spam calls on company phones from authenticating on Azure all the time, when the numbers were totally private and used for business calls only.
It's really a new form of torture to have to remember a really complex password and do a TFA login every time a change needs to be made; it's one of my least likeable parts of modern jobs. It's not even like we had breaches; it just became mandatory, and now a lot of things like cloud config go largely unchecked by admins because it's so tedious to log in so frequently, and they often get locked out the minute they forget their phone at home because email is often another password/TFA hurdle... Stupid wins first these days. Breaches still happen elsewhere all the time despite TFA adoption, hackers keep engineering workarounds, and mobile security is compromised when personal devices are used anyway. A much better approach is segmenting data and retaining only essential/required data, but systems are built to collect far too many details on users and subject matter, overreaching any "need to know" basis, which also dramatically increases the impact of modern data breaches.
TFA used to be based on email, and it was just fine. The only reason phones became mandatory is precisely the illicit use of phone contacts by these platforms. The bootleg calls also probably eat up tons of money in prepaid minutes for people on those types of plans, yet whenever there is a fine, none of the affected people see true justice.
I was overseas and my provider (Cricket) doesn't have roaming so I usually pick up a cheap prepaid SIM locally.
I didn't enable 2FA on Uber but it insisted on sending me a code via SMS (of course, to my inaccessible US number). That was incredibly stupid and shortsighted. Meanwhile, all services that were set up for Authenticator MFA worked just fine over the European carrier's LTE.
I also tend to purchase a prepaid SIM while overseas, and ran into a bunch of issues paying for bills at bars/restaurants using their website or mobile apps, as my credit card was doing a second layer of authentication through Mastercard's Verification and would only send me a code via SMS!
It is crazy that my capitalone mastercard wouldn't allow me to do the validation through my capitalone app!
Next time, use an app like Pushbullet that will forward your text messages to you. It's a huge security risk, since if Pushbullet or whatever gets hacked the hackers would get all of your access, but for a short-term utility like ensuring that you have connectivity it may be well worth it.
This. It drives me crazy that the most critical service I use, Vanguard, still requires SMS for 2FA. It's a pre-requisite for using a yubikey. It makes no sense.
Also, SMS is not nearly reliable enough. You should have alternatives for that reason alone. My cell carrier was blocking many SMS verification messages for a good two months. It caused me all kinds of problems when my credit union merged with another and I had to change account numbers all over the place. Many services had the option of using an email address, but there were quite a few where it was SMS or play find-the-human on the 800 number.
Unrelated to Twitter, but your post reminded me that consumers should also have an independent "account number", for lack of a better word, that belongs to the user, like a telephone number. Electronic payments would come out of this personal account number and be forwarded to whatever institution(s) the person wants. Then changing banks would be as easy as changing phone carriers.
>Twitter must provide multi-factor authentication options that don’t require people to provide a phone number.
Hmm. That might be difficult. I always thought the reason Twitter required phone numbers was to stop spam accounts from being created. So a phone number is basically acting like an expensive captcha. This order seems to be saying Twitter needs to stop requiring phone numbers. That might lead to an increase in spam accounts.
I don’t think it has any applicability to anyone beyond Twitter.
Maybe it’s a precedent that the FTC will tell you to add non-SMS 2-factor if you are misusing the SMS factor for advertising, but that’s a pretty limited precedent!
I hate all the different ways companies target people.
I recently booked flight on American Airlines for my 80+ year-old father. I requested the golf cart to take him between gates.
Immediately I got a call from "American Airlines Health Alert."
They made it sound like there was an issue with the booking... "An important health alert related to your flight." And there was a "Press 1, if you're over 50" option.
Anyway long story short it was some shady marketing company selling me a panic button in case of falls.
The lady was like, "these are very expensive devices"... "we'll give you the device... but you pay a small fee for monitoring every month."
Clearly she'd given the pitch 1,000 times. Didn't give me any time to talk. Finally, I was like, "Hey is there a problem with my Dad's flight, or are you just trying to sell me something?" And she hung up on me.
Fuck American Airlines. Fuck all the airlines really, but it should be illegal to target the elderly just because they asked for help with connecting flights.
Twitter doesn't let me DM people who don't follow me because I haven't provided a cell phone number. I refuse to give it, mostly on principle. I send messages very rarely and am clearly not a bot. When did demanding a phone number become OK to access basic elements of a service? This happens even when I try to DM people whose DMs are open.
The one time I tried to make a Twitter account, it locked me out "due to suspicious activity" and then required me to provide a phone number. I never even made a post or really finished entering my account information. So it seems it's now basically required for an account, period.
I was outraged and agree with you. It also takes on a new cast in light of this FTC action.
Amazon did that with me for reasons I do not understand (again, unspecified "suspicious activity"). When I sent them scans of my driver's license, they concluded they were too blurry. I'm not sure how I could send them a higher-resolution scan over fax, which is what they required (the scans also seemed pretty clear to me and to others I showed them to as a sanity check).
Twitter requires a phone number, they just don't require one at sign-up. This is a known pattern they do with every single account - you have to provide a phone number at some point. If you ever log in with a VPN, they may also require a number to even access your account at all.
Google forced me to give a phone number to verify my 10+ year old account, not because I forgot my password, but because they want all your information (I think that they buy ID info from the phone companies)... and I don't even use any kind of 2FA...
Did the same to me. Refused to give it; luckily I only used it for YouTube. Lost the 100-odd channels I subscribed to and my playlists, and that was about it. I use an RSS reader now to track my favourite channels.
That's the problem I see with most of 2FA, you have to reveal more of yourself instead of less, increasing potential attack surface instead of minimizing it. If anything, recent history has shown that you cannot trust anybody on the internet. Even if they're not outright hostile or abusive, they can still get cracked and their data stolen. For myself, I'd rather rely on strong, well-protected passwords and no 2FA as far as possible, but most people might not know how to do that or find it too inconvenient.
Where is this the case for 2FA other than SMS? It seems like the other common ones are either just verifying a shared key, or some type of one-time pad
>given twitter isn't profitable, i'd imagine its more money than they've ever made
But it's not as much money as they're making in ad sales to those phone numbers. Twitter will just see it as a cost of doing business and there won't be any meaningful change.
The purpose of the fine is not to make the business unprofitable, it is to make that particular endeavour unprofitable. If (profit from violating the law) - (fine for violating the law) > 0, then any business not run on ethics will continue to violate the law, because not doing so is less profitable. This does not change if the business is losing money elsewhere.
I appreciate the security of 2FA, but I don't like the liability and I don't like being required to have my phone at all times. Just one of my gripes with the world.
I propose multiple YubiKeys for this. Unlike TOTP, it's not susceptible to phishing, and you can keep Nano keys inserted in your USB ports that you regularly use. You don't need your phone or anything most of the time.
Not looking forward to Github making it mandatory. I don't want something I can lose to control my access. The insidious part is, as it becomes normalized, more employers will think that they can force their workers to participate in broken security theater with their own private property rather than a proper solution with corporate assets.
Guarantee they're doing the same thing with phone numbers used to verify accounts, as well. I'm not talking about the blue check mark verification, but the verification they impose upon new accounts to prove that you're "real" and not a bot.
Repeat after me: we need FIDO2 in exactly the same physical form factor as your house key. Give ‘em away all over the place, make it the default conference swag. SMS is not good.
I mean... please don't trust a random key, given to you at an event by someone you will never see again and who might not even be affiliated with the company they claim to be from, not to have a copy floating around in a database?
And then there are UI/UX constraints. The Venn diagram of "knows how to use SMS", "knows how to use 2FA" and "knows how to use yubikey etc" does not have a lot of overlap outside a tech audience.
The FIDO2/webauthn ones (the Security Key series) are $25 for USB-A or $29 for USB-C. You can't use them for storing OATH credentials, they're not smart card compatible, they can't store your PGP keys, and they can't create one-time passwords or store a static password securely... but they are cheaper.
It's such a liability that people build corporations with revenues of hundreds of billions and profit margins above 20% out of exploiting it almost mindlessly (more data!!!).
Apple could complement their existing “hide my email” with a “hide my number” feature that makes it easy to create disposable tracking protected phone numbers. This would help counteract the “oh something about your account is suspicious so give us your phone number” dark pattern.
Does the recent 5th Circuit decision[1] related to civil penalties issued by administrative agencies have any relevance here?
The article mentioned that the complaint was "filed by the Department of Justice on behalf of the FTC," which sounds a bit more involved than the FTC saying, "Hey Twitter, here's your sign, now pony up"...I have no idea how the game is actually played though.
> The 2010 complaint cited multiple instances in which Twitter’s actions – and inactions – led to unauthorized access of users’ personal information. To settle that case, the company agreed to an order that became final in 2011 that would impose substantial financial penalties if it further misrepresented “the extent to which [Twitter] maintains and protects the security, privacy, confidentiality, or integrity of any nonpublic consumer information.”
The $150m fine is because twitter violated that settlement agreement.
Twitter itself is still doing it, even if you opt-out of all personalized ads in their app, it'll still advertise stuff to you derived from tracking your browser history.
When you can have Authenticator Chrome extensions [1], what is the point of 2FA? Who decided that making it harder for an average user to log in is worth the added security? I'm not arguing that security is not improved. The question is: who weighed the pros/cons of 2FA and decided the entire industry should adopt it? Can we shine some light on the orgs/individuals responsible for this?
> This article is written like a personal reflection, personal essay, or argumentative essay that states a Wikipedia editor's personal feelings or presents an original argument about a topic.
Wikipedia describes 2FA very matter of factly without any background on its history and its advocates [2].
I may have some idea about this since I was kind of around the space at the time. But to be honest I don't understand your question. Are you asking about the benefit of TOTP as an authentication mechanism when users can install insecure browser-based TOTP implementations?
As far as the history, and "who", I think this has a very long history in the "security-industrial complex", which probably means : NSA. Certainly the idea of 2FA goes back as far as smart cards (early 90s). Then came RSA SecurID which I saw as a hack to give you something similar to smart card security but without the need to roll out a PKI. TOTP seems like it is a generic version of SecurID.
I don't particularly remember any vendor agenda on all of this, more like everyone was looking to fulfill government and bank requirements for security then the techniques employed leaked out into the corporate/enterprise world, and finally (like, around today), have become mainstream in the B2C use case.
My perception has been that all of this was pretty much about "making things better" by some definition of better that depends on reasonable security for reasonable cost, in the context of typical user behavior.
This looks much more like showmanship than actually improving security. Again I'm not saying security is not improved. Now there are people who are happy they set standards for others to follow and IT managers who can show off to their bosses that they're following security standards like ISO27001 and SOC2. SOC2 standard is set by AICPA, the last A stands for Accountants. Of all people.
Certainly there's plenty of hype and herd behavior in this industry, but underlying this is a simple desire: don't allow users to give their passwords to a third party. Or rather, they can do that but the third party won't be able to authenticate because they don't have the smart card or 2FA dongle.
Often there is a requirement in commercial contracts requiring adherence to certain security standards. An example of such a contract is liability insurance.
Apart from the security and privacy implications, phone numbers for 2FA are a major issue when you travel to a country where your number doesn't work. I had to communicate with my health insurance in a very complicated way because of this. Why is that entire practice not banned yet?
Some weeks ago I wanted to deactivate my Twitter account.
I hadn't used it for a while, and it claimed that my account was locked. Nothing was sent from it in many months, so it wasn't clear why/how it would be locked now.
For some reason you cannot deactivate your account when it is locked.
So I contacted Twitter demanding that as EU citizen (which is true) I hereby demand all data about me that Twitter or its subsidiaries might have, including account data, to be deleted under the GDPR... Or alternatively unlock my account so that I would be able to deactivate it.
They were actually pretty responsive. My account was unlocked 30 minutes later and I was able to deactivate it.
This is surprisingly reasonable. I would like to see a decisionmaker do some time for fraud, though. They locked people out of their accounts and demanded phone numbers for "safeguarding," then used them for targeting in direct contravention of a previously negotiated agreement with the FTC. If that doesn't rise to criminality, the fraud statutes need to be updated.
edit: they should also be required to dump the phone numbers (even to be recollected later, without the deception), but I didn't see that in the article. Are they being allowed to keep the proceeds of a crime?
First you have to establish who goes to jail, corporations are able to avoid this by having vague structures of shifting blame so a jury can't decide if any particular individual is actually at fault.
There probably should be laws establishing ultimately responsible people with the unenviable duty of being responsible for illegal things corporations do (sort of like an engineer signing off on the design of a bridge), but doubtful such a thing will happen.
We're left then with personal responsibility being limited to people stupid enough to leave pretty explicit records of nefarious intent to commit crimes.
Meh, that's why the CEO makes the big monies. "The buck stops here" kind of thing. Charge the CEO. Make it the CEO's problem to prove they were not responsible only by giving up the person that was. I do believe sometimes CEOs are not fully aware of what happens below them in the org tree, but they are accountable for their people. If the CEO can't handle that, then they shouldn't be accepting the roles. Clearly, this has to be understood as part of the job description
* Problem: Spammers automate creation of accounts.
Solution: Reuse the MFA infrastructure as some kind of "CAPTCHA". The phone number is not stored.
* Problem: Spammers use a single phone number to unlock 1000's of accounts.
Solution: Store the phone number - so those kinds of misuse can be detected.
* Problem: Ads-Team wants to sell more targeted ads.
Solution: There is possibly a phone number stored in the user profile, use that.
Who is to blame here?
The Ads team that didn't check if the number can be used?
> Solution: Store A HASH OF the phone number - so those kinds of misuse can be detected.
If you don't need to store PII verbatim, don't store it verbatim.
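To make the parent's suggestion concrete, here's a minimal sketch (the function name and key handling are hypothetical, and a real deployment would keep the key in a secrets manager) of fingerprinting a phone number with a keyed hash, so reuse across many accounts is still detectable while the abuse table contains nothing dialable:

```python
import hashlib
import hmac

# Example-only secret; in practice this would come from a secrets manager,
# never from source code.
SERVER_KEY = b"example-only-secret"

def phone_fingerprint(phone_e164: str) -> str:
    """Deterministic fingerprint of a normalized (E.164) phone number.

    The same number always maps to the same fingerprint, so "one number
    unlocking 1000s of accounts" is still detectable. Using an HMAC rather
    than a bare hash means nobody without the key can brute-force the
    (small) phone number space offline to recover real numbers.
    """
    return hmac.new(SERVER_KEY, phone_e164.encode(), hashlib.sha256).hexdigest()
```

The point of the design is that the ads team physically cannot dial or match a fingerprint against a marketing database without the key, which can be access-controlled separately.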
> Who is to blame here? The Ads team that didn't check if the number can be used?
Yes. 100% yes. It's insane that we've normalized the idea that if you can physically get your hands on some data then that means you're allowed to do whatever you want with it. Anyone even remotely responsible working in advertising should be tracking provenance of the data they're using. I've heard all sorts of excuses about why this isn't practical, but with each year that passes I find them less convincing, and I've finally reached the point where I reject those excuses outright. If you don't _know_ you're allowed to use some PII for marketing, then you _can not_ use it for marketing. It's that simple.
The person or team who gave green light to storing phone numbers without giving the data appropriate access controls and protections to avoid it being used for anything that is not strictly related with security and fraud control. A system such that if the Ads team tried to access it, they would get an access denied error, or maybe a red alert warning stating that this field cannot be used for marketing operations.
If a system to provide such protections didn't exist, then that system should have been implemented before agreeing on collecting phone numbers. Again, whoever didn't have that insight, should be the one to blame.
(all this is just wishful thinking on my part, of course)
Problem: Bank wants to make more money. Solution: There is possibly some money stored in the customers bank account, use that
Do banks run like that? No. Do banks sell your details, your address, or sign you up for random subscriptions without your permission? No. Why should Twitter get away with this?
Go to your U.S. bank and get a mortgage, and after a month of emptying buckets of junk from your new mailbox, come back and tell me with a straight face that the bank didn’t sell every scrap of data they had on you.
The team responsible for regulatory compliance is responsible. If the team doesn't exist, then legals should advise management to establish one or provide trainings to every team, and then the teams are responsible.
Ad team, 100%. There are all kinds of laws around advertising. GDPR, CCPA, etc. And all the ad teams I've ever interacted with are well accustomed to consulting with the attorneys before doing something like using a brand new piece of personal data to do a brand new thing.
Doesn't the Sarbanes-Oxley act (Sec 906) mandate that publicly listed companies' CEO&CFO assume at least some personal liability when they sign each required SEC statement (10-Q, annual report, etc)?
Something new for them: """ CEOs are required to personally attest that they are responsible for changes to privacy practices and written policies. """
Somehow they don't have to figure that out with felony murder. Everyone who participates who is aware is liable to the same punishment, then. Why not in crimes of bureaucracy? Why make sure people who are just following orders are free from punishment?
Because killing someone is usually pretty explicit in the obviousness of a crime being committed.
Filling out forms, designing product features, and implementing them can have each individual contributor mostly ignorant that anything could possibly be wrong with the request and the few people who do have some idea only have a small one which is plausibly deniable. The person who does get caught in those circumstances is usually just a scapegoat anyway.
That's why you investigate. But awareness should be enough. And if we start to have trouble proving awareness (maybe employees aren't aware of a settlement), just require in settlements that employees are informed.
For felony murder, you don't have to know that the person you're with is armed, intends to kill, or if you're driving the getaway car you don't even have to know that they have killed anyone at all. You're participating in something that you're expected to know is wrong, and you're punished for anything that results from the entire event. If this were like felony murder, the engineers that implemented it (assuming awareness) would be as liable for the $150 million as anyone else involved.
> Because killing someone is usually pretty explicit in the obviousness of a crime being committed.
> Filling out forms, designing product features, and implementing them can have each individual contributor mostly ignorant that anything could possibly be wrong with the request
That's got nothing to do with felony murder. Felony murder occurs when you participate in a crime with someone else and they accidentally kill someone while you can't see them.
Because felony murder is an outrageous US-specific injustice. The fact that you're guilty of murder in the US if you rob a store and the store owner shoots your friend defies common sense!
Because I know it matters to some people: this rule is used mostly against young black men.
That sounds kinda like the Mafia or Yakuza. Take the fall, do time, protect the organization, get respect, get promoted. Many people would gladly do a few years in minimum security prison in exchange for million dollar salaries, etc.
While I do think that would be better than nothing, it could create its own set of bad incentives.
So jail everyone at Twitter who isn't an individual contributor and is even tangentially involved?
This is how you escape consequences as an organization: obfuscation. Make what you've done complex enough that it's too difficult for a jury to decide who is responsible for what, prosecutors won't be convinced they have a case and will decline to pursue the matter.
Yeah, some personal risk might justify some of the salary package. Sure, you're going to make $50M a year, but you might have to sit in jail for 20 years. Sounds fair.
I don't think it has to be that hard; you just need to require that communication is preserved for companies over a certain size, e.g. meeting minutes, emails, etc. This is already the case for financial records and some employment records, and the case with politics ("but her emails!")
This way, a record can be subpoenaed if needed.
Don't keep records or don't have records of this particular decision? The person responsible for making sure the records are kept for that department will be in trouble.
There is some administrative "red tape" here, but it's not that bad, and much of these records already exist (or existed).
The problem is the political will to enact such a law; I agree that's not likely to happen.
"You have to keep a record of what happened during all employee interactions so that we can prosecute you some day" isn't exactly a likely-to-succeed plan. Already prevalent are coaching employees not to leave records of certain legally contentious topics.
I worked for a certain rainforest company and they specifically coached us on not leaving records or discussing certain subjects in any form of written communication.
Early in my career I worked at a company that stored customer credit card numbers in clear text in the database. When I learned of PCI compliance, I naively emailed my manager some links on the topic and a note that we might not be in compliance.
Yes, this was addressed with "don't have records of this particular decision? The person responsible for making sure the records are kept for that department will be in trouble."
Why not increase the punishment by having random audits, like the government does for drug checks? And make the company pay for them; it would be an even bigger deterrent if it's not just a fine...
>They locked people out of their accounts and demanded phone numbers for "safeguarding," then used them for targeting in direct contravention of a previously negotiated agreement with the FTC. If that doesn't rise to criminality, the fraud statutes need to be updated.
It's bad, I agree.
But jail? That should be reserved for the most heinous crimes and criminals.
I’m basically a prison abolitionist, but I don’t really see corporate fines as any real kind of justice either, for big or small things. This is just putting a price tag on the behaviour.
White collar malice, even more so in tech, has an enormous blast radius. It affects a giant amount of people. Sometimes in small ways, but small suffering multiplied by a huge number is a large negative impact on society.
If one could trace down the single person (most) responsible for the offense, I would fully support jail time. It doesn't have to be long. Maybe 3 months for a case like this. And a note on the criminal record.
So that they can FEEL it. Right now they hide behind a corporate shield and suffer no personal damage nor reputational damage.
"Doing time" in this type of crime needn't be actual imprisonment. The accused individual(s) may for example face being barred from holding positions of responsibility for a set amount of time, be given a suspended sentence, or both.
Why is smoking weed worthy of jailtime but swindling millions of people isnt? We live in some crazy era of corporate decision makers being above the law.
At first I thought the fine sounded excessive, but after thinking about it, it seems far too low. I'd like to know the people that were specifically responsible for this scam.
There are a lot of details I don't see about this, even in the order itself. How did the FTC know twitter was abusing this data? Was there a whistleblower who notified them, or did they break down the doors and start scanning twitter's internal documents? Were they authorized to dig into twitters internal processes as part of the initial security investigation?
I can't even fathom how it's possible to use $5B/yr to run Twitter.
So, I co-architected the Opera Mini infrastructure. It peaked at a similar number of users (250-300M monthly active users). Sure, Twitter is much more DB-intensive, but transcoding web pages is pretty CPU intensive too, and typically we transcoded every single web page for them. Opera Mini was their only browser.
Twitter is spending $5B/300M =~ $17/user per year
I believe that from public sources, it's now possible to deduce that we spent less than a 1/100th of that per user/year, almost a decade ago.
Since we didn't have crazy money, we optimized things at every step. Or, well, mostly avoided doing stupid stuff.
That re-implementation of solved problems in engineers' favorite language/framework does not happen for free. It's pretty clear from the outside that a company like Twitter is jerked around by its engineers, who have far too much autonomy to implement things their way instead of the high-performance C/C++ that any company at Twitter's scale should use.
Twitter engineers endlessly tweet about having a JVM perf team, presumably for their low-performance Scala codebase.
Here [1] is an example where Twitter's supposedly Silicon Valley calibre engineers couldn't write a half-decent log management system and ended up using a vendor that otherwise only boring enterprises would use.
It reminds me of the ignoramuses at Yahoo who would claim that Google's frugality was useless and that they could do better by just buying from Sun/IBM/HP and so on.
That Splunk post is a painful read. In my world, Splunk is well-known for being an overly expensive service aimed at sucking money out of enterprise/pre-software-style companies.
According to their financial disclosures for 2021:
On income of ~$5.08B, they spent
- $1.8B as "cost of revenue" (costs incurred to run their business),
- $1.25B on "research and development",
- $1.2B on "Sales and Marketing", and
- ~$1.3B together on "general and administrative" (overhead) and "litigation settlement".
Then there are a bunch of small monies related to interest expense and income, etc etc.
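A quick back-of-envelope check on the figures quoted above (all values in $B, rounded as given), showing the line items already exceed income before the smaller interest entries:

```python
# Figures as quoted from the 2021 disclosures above (approximate, $B).
income = 5.08
costs = {
    "cost of revenue": 1.8,
    "research and development": 1.25,
    "sales and marketing": 1.2,
    "G&A + litigation settlement": 1.3,
}

# Rough operating result: income minus the four big buckets.
operating_result = income - sum(costs.values())
print(round(operating_result, 2))  # roughly -0.47, i.e. about half a billion in the red
```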
They're spending huge amounts of money and could be profitable if they really wanted. I can't imagine what Twitter is doing with 1.25B in research. Elon could make Twitter profitable simply by cutting their research department.
Well, presumably that 1.25B is mostly engineering salaries, since it's "research and development" (for tax purposes, where they probably get huge write-offs).
Anyway, they need that org to get their 1.8B in "cost of revenue" down, which is presumably mostly the cost of massive server farms storing what are mostly text messages. Although these days, with all the machine learning to sell ads, it probably "needs" all that hardware to run its models and can't just be optimized down to a higher-perf system painting web pages.
If the salaries are towards specific operational roles they'd be listed under "cost of revenue". If the salaries were for general duties not specific to an operational role they'd be under "general and administrative".
Research and development needs to be for research and development. It could be for engineering salaries but those salaries would be towards research and development rather than operational duties.
General engineering usually falls under the "development" category. The details may be state/country dependent. That said, even in stricter jurisdictions I know for sure that companies tend to group their engineering resources in such a way that they aren't justifying individual engineers (except maybe WRT timesheet filing) rather a bunch of people working on a "project" which meets the definition all get lumped together for purposes of R&D.
Random google search, talking about how broad this can be:
I'm reminded of the reddit articles from a few years back about moving to AWS and having to batch database requests to maintain database performance. Apparently, at the time, they were literally sending tens or hundreds of thousands of database queries for each page load for a logged-in user, because each comment was collecting all of its up/down votes and adding them up, rather than having a per-comment karma total stored with the comment.
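For illustration, a toy sketch of the denormalized-counter fix (the schema and names are invented for this example, not reddit's actual design): record each vote, but bump a cached per-comment total at write time, so rendering a page is one read instead of a vote-summing query per comment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE comments (id INTEGER PRIMARY KEY, karma INTEGER DEFAULT 0);
    CREATE TABLE votes (comment_id INTEGER, delta INTEGER);
""")

def cast_vote(comment_id: int, delta: int) -> None:
    # Record the individual vote AND bump the cached total, atomically.
    with conn:
        conn.execute("INSERT INTO votes VALUES (?, ?)", (comment_id, delta))
        conn.execute("UPDATE comments SET karma = karma + ? WHERE id = ?",
                     (delta, comment_id))

conn.execute("INSERT INTO comments (id) VALUES (1)")
cast_vote(1, +1)
cast_vote(1, +1)
cast_vote(1, -1)

# Page render is now a single read of the cached column, not a SUM over votes.
karma = conn.execute("SELECT karma FROM comments WHERE id = 1").fetchone()[0]
```

The tradeoff is classic: you pay a second write on every vote to make the (far more frequent) reads O(1) per comment.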
This is what happens when you hire a whole bunch of recent grads that frankly have no idea how to write code, and think they are the smartest people on the planet when it comes to distributed systems.
It's always premature to optimize until your company is failing, because if you aren't failing yet it means it's worked so far. You should always wait until your company is falling apart to do a full rewrite of your core product.
People. Salaries are expensive, especially SV salaries. Then you need well-paid management for the extra heads, and group managers for them. IIRC (feel free to correct me, I'm not going to dig it out again) they spent >$1B on R&D itself... which is pretty much just a couple hundred engineers who (judging from the service changes recently) mostly did nothing.
if your first thought is that the fine isn’t enough, often these fines go along with agreements to change business practices. in this case:
>In addition to imposing a $150 million civil penalty for violating the 2011 order, the new order adds more provisions to protect consumers in the future:
>Twitter is prohibited from using the phone numbers and email addresses it illegally collected to serve ads.
>Twitter must notify users about its improper use of phone numbers and email addresses, tell them about the FTC law enforcement action, and explain how they can turn off personalized ads and review their multi-factor authentication settings.
>Twitter must provide multi-factor authentication options that don’t require people to provide a phone number.
>Twitter must implement an enhanced privacy program and a beefed-up information security program that includes multiple new provisions spelled out in the order, get privacy and security assessments by an independent third party approved by the FTC, and report privacy or security incidents to the FTC within 30 days.
It's OK, they are going to get $1 billion from Musk, so they can afford it.
Jokes apart, I am glad they got fined. These kinds of transgressions need to be dealt with publicly, and Twitter is a big enough entity to send a message that this is serious. Of course, I am sure every company that got phone numbers this way already abused them :)
Here's hoping they get to Microsoft's fishing for phone numbers from Minecraft players via threats and blackmail (alleging unauthorized account access that didn't actually happen).
Sigh. Yep. Don't ever give a company your phone number for 2FA. It's insecure anyways due to SIM swapping. Stick to FIDO (e.g. yubikey) or TOTP (e.g. google authenticator)
Yep, it's almost more accurate to describe using phone numbers for 2FA as being anti-secure, not just insecure. That's because it's effectively no better than having no 2FA and it's possibly even harder to detect when your account has been compromised by a SIM swap. And many companies that use phone numbers for 2FA also allow resetting one's password via that phone number. It's really just a tragedy that companies do this, rather like when login screens prevent copy/paste.
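For anyone curious what TOTP actually is under the hood, here is a stdlib-only sketch of RFC 6238 (the algorithm apps like Google Authenticator implement), assuming the common defaults of HMAC-SHA1 and 30-second time steps. Note the secret here is raw bytes; real provisioning URIs carry it base32-encoded.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII key "12345678901234567890",
# t=59 seconds, 8 digits -> "94287082".
assert totp(b"12345678901234567890", at=59, digits=8) == "94287082"
```

The key point for the SIM-swap discussion: the shared secret never leaves your device, and codes are derived locally from it plus the clock, so there is no carrier in the loop to be socially engineered.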
If you're ever prompted to add a phone number to your account on some web service for "extra security", just click "remind me later" or "skip" as many times as possible.
The tradeoff is usability though. I can have a TOTP code stored on two separate phones in two separate locations versus needing a yubikey always present.
To me, I'm too forgetful and dumb to not lose a yubikey, but I manage to not lose my phone.
Yeah, I'm extra upset about this because I would have chosen TOTP if I had been given the option, but SMS was the only 2FA method available for the longest time (until account takeovers, including Jack's I believe, became such a big issue that they had no choice but to change it).
Or just get a phone and service provider who doesn't allow SIM swapping (e.g. Google Fi, etc.), since many more services only do 2FA with SMS than allow hardware authentication.
There's actually a very good reason not to use Google Authenticator.
They don't offer any backups (at least on iOS), and as a result, if you lose your phone, you are hosed. Google Authenticator also doesn't back up its files to iCloud like other apps do. At this point I just assume no one owns that app and that it'll never get backups, because that's how Google operates.
I've seen multiple people lose their TOTP codes this way and get locked out of their accounts. Or even the simpler case: they buy a new phone, restore from backup, assume everything is peachy, send their old phone back, and don't realize the codes are gone until they open the app for the first time.
I suppose you're talking about automatic backups, but at least on Android you can manually "transfer accounts" to export to another device over QR codes.
On iOS, I do see this feature, but it claims it will move the account, as opposed to "copy" it, and so, if it is a backup mechanism, they are explicitly pretending it isn't one.
Although it is primarily designed and labeled as an export mechanism, I can verify that it does work as a backup mechanism. I regularly use it to sync up new 2FA accounts to a backup phone. Simply choose to keep the accounts after exporting.
I got scared as hell a few years ago when an update bricked the app, so launching it caused it to immediately crash. Fortunately, reinstalling the app fixed it without losing any data.
But since then I started actually backing up my recovery codes, and whenever I create a new account somewhere, I set up 2FA on three separate apps on my phone just in case.
They offer one-time backup codes that can be used if a device is lost. I'm not sure if this is Google or the site, but for every login where I've set up Google Authenticator I have copied the backup codes to my password manager for that account. I'd agree that a lot of people might not do that however.
Luckily, services make it clear that you should print backup codes in case you lose 2FA.
As for Google Authenticator on Android, there is an export function that generates a QR code for all tokens. You can't screenshot it, but you can take a photo with a different device and print it.
There is a clear reason not to use Authy, which is that making your data portable is extremely annoying. No export function. I ended up writing a 3rd party Authy client just to get my TOTP keys out.
Yes. This is one of several similar cases the FTC has pursued against social media companies over the last few years. I believe Facebook had a bug that inadvertently outed their misuse of the 2FA phone numbers for advertising and that was what initially put the practice on the FTC’s radar. Around the start of the pandemic the FTC actually did a study[1] looking at the “secondary uses” of security data. They registered for 2FA with a bunch of websites and then tracked how many non-2FA related calls and text messages the phone numbers received. While the experiment was a great idea I think the way they structured it leaves much to be desired.
Twitter wanted my phone number once, going so far as to lock my account and demand my phone number to unlock it. It felt like blackmail, so I threatened Twitter with a GDPR request, asking not only for my data but also for the algorithms used for automated decision making.
As soon as my account got restored I let their DPO know that I don't insist on the fulfillment of the GDPR request any more. And that I will follow through if Twitter pulls off this kind of blackmail again on me. Haven't had this issue any more.
> [Twitter] agreed to an order that became final in 2011 that would impose substantial financial penalties if it further misrepresented “the extent to which [Twitter] maintains and protects the security, privacy, confidentiality, or integrity of any nonpublic consumer information.”
They violated that order and that's what the fine is for.
I was wondering what kind of authority the FTC has to impose fines over what I, as a European, would consider a GDPR violation (in the USA, the California privacy act sounds like the nearest thing, but that's not federal, so that couldn't be it). But what was this order about? Clicking the reference in the article:
> The FTC’s complaint against Twitter charges that serious lapses in the company’s data security allowed hackers to obtain unauthorized administrative control of Twitter, including access to non-public user information, tweets that consumers had designated private, and the ability to send out phony tweets from any account including those belonging to then-President-elect Barack Obama and Fox News, among others.
> Under the terms of the settlement, Twitter will be barred for 20 years from misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information
So this wasn't about privacy initially, the FTC's attention came from allowing some public figures' accounts to be hacked, after which it imposed some broad set of requirements, which are broad enough to now include this privacy issue. Not a bad outcome, but interesting turn of events to get the FTC to act as data protection authority.
It was almost certainly a fuckup where the phone # was mistakenly stored in a shared schema, and someone on the ads side saw it and decided to use it for targeting, knowing nothing about 2FA or how it got there. This probably only affects a tiny fraction of their users.
>It was almost certainly a fuckup where the phone # was mistakenly stored in a shared schema, and someone on the ads side saw it and decided to use it
How is this an "almost certainly"? Do you have additional information you'd care to share on why you think so? If this were the case, it would point to insanely sloppy policies, procedures, and implementations.
> This probably only affects a tiny fraction of their users.
Because most users provided their phone number when signing up, not just when setting up 2FA. Twitter has always been phone-centric (the first app was literally just sending SMS messages).
Personally I think the CEO would have known about this happening and turned a blind eye until it became an issue. I have nothing to back this up though. Just a pessimistic take on corporate culture.
Clearly anyone with the ethical chops to consider resigning over this would be a net-loss to Twitter if in fact they resigned. Is this some sort of named paradox?