One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok, your name, phone number, email, etc. are all out in the crowd.
And I buy this stuff. Every time I need customer service and I'm getting stonewalled, I just go onto a marketplace, find an exec, buy their details for pennies, and call them up on their cellphone. (This is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans.)
<< find an exec and buy their details for pennies and call them up on their cellphone. (this is usually successful, but can backfire badly -- CashApp terminated my account for this shenanigans)
Honestly, kudos. The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider the current setup.
<< As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
And people laughed at Red Reddington when he said he had no email.
There was a post a long time ago from someone whose email address and name were similar to Mark Cuban's, but not quite. He got quite a few cold emails meant for Cuban. A lot of them were quite sad (people asking for money for medical procedures and such).
As we all know, some of the "consent" pop-ups have a first page of general settings, and then a "vendors" page to further deselect all the "legitimate interests".
I recently noticed that a fraction of the "vendors" allow deselecting the "legitimate interest" but have the "consent" tick box marked and unmodifiable.
The following vendors have un-deselectable "consent" tickboxes:
Skimbit Ltd
Confiant Inc.
Lumen Research Limited
Visarity Technologies GmbH
DoubleVerify Inc.
Revcontent, LLC
Adssets AB
Integral Ad Science (incorporating ADmantX)
Mirando GmbH & Co KG
Polar Mobile Group Inc.
Rockabox Media Ltd
Telecoming S.A.
Seenthis AB
HUMAN
Papirfly AS
NEXD
One Tech Group GmbH
illuma technology limited
CHEQ AI TECHNOLOGIES
Adjust Digital A/S
VRTCAL Markets Inc
Cavai AS
Kiosked Ltd
Protected Media LTD
Oracle Data Cloud - Moat
Bannernow, Inc.
Jetpack Digital LLC
GeoEdge
Ensighten
IVO Media Ltd
Online Media Solutions LTD
Mobkoi Ltd
Redbranch, Inc dba Fraudlogix
Alphalyr SAS
Silverbullet Data Services Group
Stream Eye OOD
adbalancer Werbeagentur GmbH
Somplo Ltd
Velocity Made Good LLC
Vyde Ltd.
Adelaide Metrics Inc
Sqreem Technologies Private Limited
TMT Digital Inc
dpa-infocom GmbH
Brandhouse/Subsero A/S
streaMonkey GmbH
Alkimi
Zeit Agency ApS
Sitewit, Corp
AccountInsight Ltd
Aderize, Inc.
fraud0 GmbH
Channel99, Inc.
Videobot Ltd
Appstock LTD.
Dando online LTD
EMBRACE Systems GmbH
Hiili SL
YIELDBIRD SP. Z O.O.
Volentio JSD Limited
BEAPUP SOLUTIONS LTD
Public Good Software Inc.
Kidoz Inc.
DataDome SA
Sarigato Sp. z o.o.
Gesher Software LTD dba bridgeupp
Playdigo Inc
Sipo Inc
EliteAppgrade
SpinX Pte Ltd
Creatopy INC
Codevelop Technologies GmbH
Adgrid Media, LLC
ProgrammaticX LTD
Nitrouppi LTD
9 Dots Media Ltd
Vudoo Pty Ltd
Mobavenue Media Pvt Ltd
Carbonatix LTD
1) What is up with these?
2) Are these even legal under GDPR rules?
3) Does this not nullify arguments by certain 3 letter agencies that users "consent" to their tracking?
4) Who is behind these companies? Any idea on how to approach this from an investigative journalism angle? Can we figure out the headquarters, employee counts, CEOs of these companies?
5) If "undeselectable consent tickboxes" qualify as legally valid consent, doesn't this set a precedent to foist off myriads of types of lack of consent as "consent"? Will this enable legalizing rape? Where does this Pandora's box end?
6) As far as I understand, an illegal contract is void. If the forms submitted by users contained undeselectable "consent" tickboxes, then those forms no longer constitute legal contracts. Observe that this holds regardless of the preferences on all the other tickboxes: even if users were too lazy to deselect all the deselectable tickboxes, the mere presence of undeselectable tickboxes voids these forms as contracts. This means that none of the other vendors received any consent either, since the specific submitted form-as-a-contract is void, even if the majority of the vendors had consent tickboxes that could be deselected. It would seem prudent for such companies to insist that the forms don't contain undeselectable tickboxes for any company, since their presence would nullify the consent they hope to receive.
As a European resident, I have put in a complaint to the company for you. Should it be dismissed out-of-hand, I will forward a complaint to my national Information Commissioner's office. I will post any results.
out of curiosity: which company did you put in a complaint to?
About posting any results: I assume you are aware that after some time it is no longer possible to add comments to an HN discussion; I assume you will post any progress as an HN submission?
> The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider current setup.
Unless your problem is with the company doing the privacy violations, this doesn’t make any sense.
Where I live, which is not in the USA, I'm confident my doctor's office doesn't sell their contact list - or at least, not without statistical anonymisation and aggregation for research purposes.
They probably outsource processing the data and storing it to other entities, but that will be under contracts which govern how the data may be used and handled. I assume that's not what "sell the data" means in this conversation.
It would be such an egregious violation of local data protection law to sell patient personal details for unrestricted commercial use, including their contact info, and it would make the political news where I live if they were found out.
Also "not in the USA": I actually work on a medical-ish application these days (not the in-production version, mind, but a fork with new features that's entirely separate at the moment).
I have access to ... zero patient data. Our entire test database is synthetic records.
HIPAA is pretty much the only halfway effective privacy regulation the US has. It imposes strong regulatory, licensure, and even criminal censure for violations.
It's formulated so that they can give those contacts away rather than sell them, but only to the rest of the medical goods & services supply chain that's involved in your care, who are also bound by HIPAA.
The worst dark pattern this has generated so far seems to be pharmaceutical company drug reps bribing your doctor to change what they would prescribe you.
The worst that's likely to happen without regulation, as far as I can tell, involves an associated provider just leaking UnitedHealthcare's full database of every patient and every condition.
Exactly this was tried by the likes of John Oliver and journalists/comedians of that caliber, running ads and gathering data on politicians in Washington.
>One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
>As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
Fortunately this is changing with iOS 18 with "limited contacts" sharing.
The interface also seems specifically designed to push people to allow only a subset of contacts, rather than blindly clicking "allow all".
The far bigger issue is the contact info you share with online retailers. Scraping contact info through apps is very visible, drawing flak from the media and consumers. Most of the time all you get is a name (could be a nickname), and maybe some combination of phone/email/address, depending on how diligent the person is in filling out all the fields. On the other hand, placing any sort of order online requires you to provide your full name, address, phone number, and email address. You can also be reasonably certain that they're all accurate, because they're plausibly required for delivery/billing purposes. Such data can also be surreptitiously fed to data brokers behind the scenes, without an obvious "TikTok would like access to your contacts" modal.
On android you can choose whether to grant access to contacts. And most apps work fine without.
GrapheneOS, which I use, also has contact scopes, so troublesome apps that refuse to work without access will think they have full access. You can allow them to see no contacts or a small subset.
There's also multiple user profiles, a "private space", and a work profile (shelter) that you can install an app into, which can be completely isolated from your main profile, so no contacts.
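Conceptually, a contact-scope layer just sits between the app and the real address book. A toy sketch of the idea (all names here are invented for illustration, not GrapheneOS APIs):

```python
# Hypothetical model of "contact scopes": the OS hands the app a filtered
# view of the address book instead of an all-or-nothing grant.

FULL_CONTACTS = {
    "alice": {"phone": "+15550001", "email": "alice@example.com"},
    "bob":   {"phone": "+15550002", "email": "bob@example.com"},
    "carol": {"phone": "+15550003", "email": "carol@example.com"},
}

def scoped_contacts(allowed: set) -> dict:
    """Return only the contacts the user placed in this app's scope.
    The app can't tell the difference between this and full access."""
    return {name: info for name, info in FULL_CONTACTS.items() if name in allowed}

# A troublesome app that demands contacts access thinks it has full
# access, but only ever sees the one entry the user scoped to it.
view = scoped_contacts({"bob"})
```

The key point is that the empty scope is indistinguishable from an empty address book, so apps that refuse to run without the permission still work.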
It surprises me how far behind iOS is with this stuff. Recently I wanted to install a second instance of an app on my wife's iPhone so she could use multiple logins simultaneously, there didn't really seem to be a way to do it.
The point is that it doesn't matter whether YOU grant access to your contacts. As long as anyone who has you in THEIR contacts decides to just press "share contacts" with any app, you are doxxed and SkyNet is able to identify you for all practical purposes.
You have two different points in your comment. Firstly, iOS has not been behind on having apps work if they don't get access to a specific sensor or data. It's on Android that apps refuse to work if they're not given contacts access or location access and so on. Comparing the same apps on iOS and Android, I have found that Apple's requirement for apps not to break when a permission is not granted is well respected and implemented in iOS apps. The same apps on Android just refuse to work until all the permissions they ask for are granted. YMMV.
I do agree that iOS is behind by not providing profiles and multiple isolated installations of apps, and it would be great if it did.
I think it's not properly appreciated that Apple fully endorses all of this. For two reasons: (1) the provision of the output of billions of dollars of developer time to their users for no up front cost (made back via ads) is super valuable to their platform; and (2) they uniquely could stop this (at the price of devastating their app store), but choose not to.
In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
>I think it's not properly appreciated that Apple fully endorses all of this. [...] they uniquely could stop this (at the price of devastating their app store), but choose not to.
A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps. Just look at f-droid. Most (all?) of the apps there might be privacy respecting, but good luck getting any of the popular apps (eg. facebook, tiktok, google maps) on there.
>In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
What would make you think Apple's pro-privacy changes aren't "about stealing $10B a year or so from Facebook"? At least some people are willing to pay for more privacy, and pro-privacy changes hurt advertisers, so basically any pro-privacy change can be construed as "less about meaningful privacy and far more about stealing".
F-Droid will never have popular apps because it requires them to be open source. In fact F-Droid does the build for you, generating reproducible builds and avoiding the risk of adding trackers to the binary that aren't actually in the source code. With F-Droid the code you see is what you get.
> A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps.
40 years ago apps were sold on floppy disks. 30 years ago they were sold on CD-ROMs. 20 years ago, DVDs.
Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing. Apps being blocked or banned from stores for choosing to not respect your privacy is a good thing.
>Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing.
I'm not sure what you're trying to say. I specifically acknowledged the existence of f-droid as a "privacy respecting app store" in the quoted comment.
>Apps choosing to not respect your privacy, and being blocked or banned from stores, is a good thing.
"a good thing" doesn't mean much when most people haven't even heard of your app store, and are missing out on all the popular apps that people want. Idealism doesn't mean much when nobody is using it. Apple might not be the paragon of privacy, but they had a greater impact on user privacy than f-droid ever will. To reiterate OP's point: what's the point of having a perfectly private OS and app store, when there's no apps for it, and your normie friends/relatives are going to sell you out anyways by uploading their entire contact list and photos (both with you in it) to google and meta?
Or because they were tricked, e.g. LinkedIn's "Connect with your contacts" onboarding step, which sounds like it'll check your contacts against existing LinkedIn users but actually spam-invites anyone on your contact list who doesn't have an account.
Wasn't this also how some services would connect e.g. your bank accounts? They'd ask for your credentials and log into your bank to scrape its contents.
And I kinda get it, some services external to your bank can help you manage your finances etc. But it's why banks should offer APIs where the user can set limited and timed access to these services. In Europe this is PSD2 (Revised Payment Services Directive).
I think the key point is that they would take your Linkedin password and try to use that on your email without asking you, in case you reused passwords.
The linked wikipedia article below says that they asked you for your email password specifically -- is there any evidence that they would try to use your linkedin password itself?
This is how a load of emails were sent out from my Hotmail account to anyone I had ever contacted (including random websites) asking if I wanted to connect with them on Facebook. The onboarding seemed to imply it would just check to see if any of my contacts were already using Facebook.
God damn this feature. About ten years ago I inadvertently did something in LinkedIn and ended up spamming everyone I knew with LinkedIn invites. It annoyed a lot of people.
GrapheneOS lets you pick this for apps before they even launch. You can revoke their network access, as well as define storage scopes for apps at a folder level, so if an app needs access to photos, you can define a folder, and that is the only folder it can scan for photos.
I used that when submitting parental leave at work. I didn't want to provide full access to all my photos and files for work, so all they got was a folder with a pic of a birth certificate.
A big problem with GrapheneOS is the fact it only officially supports Google phones. Google is apparently incapable of selling those things globally, limiting availability.
There's also the fact hardware remote attestation is creeping into the Android ecosystem. There's absolutely no way to daily drive something like GrapheneOS if essential services such as banks and messaging services start discriminating against you on the basis of it. Aw shucks looks like your phone has been tampered with so we're just gonna deny you access to your account, try again later on a corporation owned phone.
GrapheneOS is amazing from a security and privacy perspective but it doesn't matter. The corporations will not tolerate it because it works against their interests. They will ban you from their services for using it. Unlike Google and Apple, they have no leverage with which to force the corporations to accept terms unfavorable to them.
iOS and Mac also let you do this, for photos, contacts and files.
Apple is also pushing developers toward using native picker components. That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
> That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
This is an interesting contrast with the earlier philosophy of phone OSes that the file system is confusing to users and they should never be allowed to see it.
From a user's perspective, photos aren't files. Music isn't files. Contacts aren't files. Apps aren't files. App data isn't files.
The only things that "walk like a file and quack like a file" are documents, downloads, contents of external storage, network drives and cloud drives, and some Airdrop transfers.
Yes, it's technically possible to use the files app to store photos, music etc, but if you do that, "you're holding it wrong."
Until the app's devs get wise to this and don't allow the app to function without network access. It could be as simple as a non-closable full-screen message saying the app requires network access, with a button to the proper setting to correct the issue.
Such "go away" screens are in violation of Apple's AppStore rules. You cannot make a permission a condition of using the app, and stop the user from using it if they don't grant that permission. The app should gracefully do as much as it possibly can without the permission.
Try signing in in any Google app without allowing data sharing with Safari. It's not possible. They don't let you.
It's kind of weird that Apple introduced this big fat tracking consent popup, but they don't really do anything to actually prevent cross-app tracking...
This holds for every app and every permission? Because I'm quite sure I recently used an app that closed when I didn't allow a permission. Maybe I'm misremembering...
5.1.1 (iv) Access: Apps must respect the user’s permission settings and not attempt to manipulate, trick, or force people to consent to unnecessary data access. For example, apps that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos. Where possible, provide alternative solutions for users who don’t grant consent. For example, if a user declines to share Location, offer the ability to manually enter an address.
This wording is actually a lot weaker than I remember it back when I wrote iOS apps. The developer also was not allowed to exit the app or close it against the user’s intent, however I can’t find that rule anymore.
I agree with these guidelines (although they could be improved), although I think that some things could be done by the implementation in the system, too.
> For example, if a user declines to share Location, offer the ability to manually enter an address.
This is a reasonable ability, but I think that the operating system should handle it anyway. When it asks for permission for your location, in addition to "allow" and "deny", you could select "manually enter location" or "custom". The "custom" option would allow the user to specify their own program for handling access to that specific permission (or for simulating error conditions such as no signal); possibly the settings menu could require "show advanced options" before "custom" is displayed, if you think it would otherwise make things too complicated.
> that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos
This is reasonable, that apps should not be allowed to require microphone access for such a thing.
However, sometimes it makes sense to show a warning but allow the action anyway even if permission is not granted; e.g. a video recording program might display a message like "Warning: microphone permission is not allowed for this app; if you proceed without enabling the microphone permission, audio will not be recorded." Something similar would apply if you denied camera permission but allowed microphone permission; in that case, only audio would be recorded. It might refuse to work if both permissions are denied, though.
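The degrade-gracefully pattern the guideline asks for boils down to something like this (a minimal sketch, not Apple's API; the function and return strings are invented):

```python
# Sketch of guideline 5.1.1(iv): only refuse when *nothing* useful can be
# done without the denied permissions; otherwise warn and do less.

def start_recording(camera_granted: bool, mic_granted: bool) -> str:
    if not camera_granted and not mic_granted:
        # Both permissions denied: the one case where refusing is fair.
        return "error: nothing to record"
    if camera_granted and not mic_granted:
        # Warn, then proceed with a silent video instead of blocking.
        return "warn: recording video without audio"
    if mic_granted and not camera_granted:
        return "warn: recording audio only"
    return "recording audio and video"
```

The anti-pattern the rule forbids is the inverse: returning the error branch whenever *any* permission is missing.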
Yeah, "unnecessary" is the word that may as well render the whole section moot unless it's actually properly enforced. If I can remember I'll test it today and see how it goes.
Yeah, like the ChatGPT app that doesn't work without a Google account. I have Google Play on my phone, just no account logged in. I do have Google Play services like Firebase push, which many apps legitimately need. But ChatGPT just opens the login screen in the Play Store and exits itself.
I'm always wondering why these idiots force the creation of an account with their direct competitor. It's the only app I have that does this. But anyway I don't use their app for that reason, only use them a bit through API.
>Its not. Apple still owns your stuff. There is no difference between Apple and other 3p retailers.
That could be taken to mean anywhere between "Apple controls the software on your iPhone, therefore they control your contacts" and "Apple gives out your data like the data brokers mentioned in the OP". The former wouldn't be surprising at all, and most people would be happy with, and the latter would be scandalous if proven. What specifically are you arguing for?
The "vulnerability" part doesn't seem to be substantiated. From wikipedia:
>The images were initially believed to have been obtained via a breach of Apple's cloud services suite iCloud,[1][2] or a security issue in the iCloud API which allowed them to make unlimited attempts at guessing victims' passwords.[3][4] Apple claimed in a press release that access was gained via spear phishing attacks.[5][6]
Regardless of their security practices, it's a stretch to equate getting hacked with knowingly making data available. Moreover, you can opt out of iCloud backup, unlike with whatever is happening with the apps mentioned in the OP.
Useless without limiting the kind of data I want to share per contact. iOS asks for relationships for example. You can set up your spouse, your kids, have your address or any address associated with contacts. If I want to restrict app access to contacts, I also want to restrict app access to specific contact details.
> (this is usually successful, but can backfire badly -- CashApp terminated my account for this shenanigans)
When I was at a medium-sized consumer-facing company whose name you'd recognize if you're in the tech space (intentionally vague), we had some customers try this. They'd find product managers or directors on LinkedIn, then start trying to contact them via phone numbers found on the internet, personal email addresses, or even by finding photos their family members posted and complaining in the comments.
We had to start warning them not to do it again, then following up with more drastic actions on the second violation. I remember several cases where we had to get corporate counsel involved right away and there was talk of getting law enforcement involved because some people thought implied threats would get them what they wanted.
So I can see why companies are quick to lock out customers who try these games.
I wonder if it ever prompted a dive into exactly what happened to leave these customers thinking this was the most likely avenue for success. Hopefully in at least some cases their calls with CSRs were reviewed, and in the most optimistic of best cases additional training or policies were put into place to avoid the hopelessness that provokes such drastic actions.
That would require empathy from someone who is, right now, bragging about how they sicced their lawyers and the cops on customers they were fucking over.
I'm going to guess that the answer would be "nope, didn't care." That Cirrus isn't going to pay for itself, friend...and you can't retire at 40 without breaking a few eggs.
I remember when Google was locking accounts because people had the audacity to issue a chargeback after spending hours trying to resolve Google not delivering a working, undamaged phone they'd paid well over half a grand for. Nobody at Google cared, but when the money (that Google never fucking deserved in the first place) was forcibly and legally taken back, the corporation acted with narcissistic rage...
> That would require empathy from someone who is, right now, bragging about how they sicced their lawyers and the cops on customers they were fucking over.
How do we know they were fucking them over?
There's always going to be a subset of people who take any perceived slight as an attack on their honor, and will over-react. (I've had death threats for deleting a reddit post, fwiw.)
> So I can see why companies are quick to lock out customers who try these games.
Most of the companies who customers try these "games" against are places like Google and Meta that literally do not provide a way for the average customer to reach a human. None.
Those have got it coming; the megacorps' stance on this is despicable, and far worse than customers directly reaching the execs who could instantly change this but don't, because it would cut into their $72 billion per year net profit.
This is a case where laws simply did not catch up to the digital era. In the brick and mortar era it was by definition possible to reach humans.
I get that your company was smaller and probably did allow for a way to reach a human but that's not generalizable.
Long ago, when Google tried to launch its very first phone somewhere in Europe, I distinctly remember that it initially wasn't allowed to because of some regulation that mandated that a company selling telephones have customer service.
Can't remember if they eventually found a loophole or if the regulations were changed.
For most situations, Walmart and CVS have fine customer service compared to most companies. You just need to show up in person. Granted, their business model might make that a little annoying, but it's not even difficult for most people.
The challenge isn't getting customer service. It's getting someone who isn't reading from a decision tree that conveniently doesn't include any paths where the corporation has fucked up.
My only connection to Amazon support has been for AWS.
Perhaps, though, this should be an example of good customer service where talking to a human is easy, and not lumped in with the likes of Google where it's impossible.
Perhaps your experience with the online shop is different, but frankly they're in my "good" column, not my "bad" column.
This is not like for like: AWS assigns reps because the dollar amounts are significant compared to your monthly cell phone bill or a purchase at Amazon. It's not surprising that buying a car gets you better customer service than renting one.
Two companies so gigantic that they account for a large percentage of the "company interactions" the average Westerner has on a daily basis.
Anyway, I don't think it contradicts my point? Your company exists, mom-and-pops exist, and there's a whole spectrum between them, so it's not generalizable.
I think this sort of desperate emailing works better when you reach out to execs and VPs, not PMs and managers. Some founders had well-known emails, and it was common to hear stories about escalating (e.g. jeff@amazon). It's a well-documented technique that many people have had great success with.
I’m not an exec, but I work on a major product in a major company. A significant portion of Americans use my work. My corporation has a reputation for poor customer support ATM. If I started getting personal emails or phone calls, I’d contact corporate security or lawyers just out of fear and confusion. That said, I’d be peeved on behalf of my customers if that same treatment was applied to messages directed at our household-name-CEO.
Honestly, not condoning people crossing the line into threats/abusive behavior, but it sounds like you worked at one of those companies that make it impossible to get hold of someone, don't respond to customers, or have other poor customer service practices, and then are surprised people resort to this.
What's funny is that the exec I got on the phone was super supportive and helpful and was genuinely amused to hear from me and hear what was happening. He put me in touch with their "Executive Support Team" and it was after this that I guess someone realized they didn't like the route I had taken.
I feel somewhat vindicated after this announcement (though it does nothing to bring my account back):
> Accessing any kind of customer service for Cash App was a challenge, too, according to the CFPB. Block included a customer service number on Cash App cards and in the app's Terms of Service, but calling it would ultimately lead users to "a pre-recorded message directing consumers to contact customer support through the app."
As a result of sales drones getting hold of my number, I have to put my phone on silent and never pick up unless I recognize the number. Very unfortunate. What if there is an emergency with my kids?
If you're using iOS you can set certain contacts to bypass silent mode so that you still hear their notifications/calls. I know it doesn't help with unknown numbers, but just saying in case you're not aware. I'd be surprised if you can't do the same on Android.
Yes, thanks, I've configured that for kids and other loved ones. But I can't pick up anything else, even sales people from India manage to use a number that appears local (in The Netherlands for me), so I might miss a call from the kid's school.
I've just added the numbers of my kids' school onto the list and it's been fine for me. I've never had them contact me from anything other than the school's number, but I'm in the UK and I would be very surprised if a teacher tried to call me from their mobile phone or something.
Oh wow, I knew this was a rampant problem in the US, but I didn't realise we had that at that scale here in the Netherlands as well. Hoping I can dodge the bullet a little longer...
> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies
The article author claims that you can't get this stuff for under $10k. Where do you find it for pennies?
As a test I downloaded it and got my wife’s full email and cell phone number easily from their free trial. And the full price would be on the order of pennies per contact.
The thing is...contact details aren't really private information, basically by definition.
The distinction is that contact-details privacy is based on the desire not to be interrupted by people you didn't agree to be interrupted by - i.e. it's a spam problem - and realistically solving this requires a total revamp of our communications systems (long overdue).
The basic level of this would be forcing businesses to positively identify themselves when contacting people - i.e. we need TLS certificates on voice calls, tied to government-issued business identifiers. That would have the highest immediate impact, because we could retrain people not to talk to anyone claiming to be a business if their phone doesn't show a certificate - we already teach this for email, so the skill is becoming more widespread.
A more advanced version of this might be to get rid of the notion of fixed phone numbers entirely: i.e. sharing contacts is now just a cryptographic key exchange where I sign their public certificate which the cellphone infrastructure validates to agree to route a call to my device from their device (with some provisioning for chain of trust so a corporate entity can sign legally recognized bodies, but not say, transfer details around).
This would solve a pile of problems, including just business decommissioning - i.e. once a company shuts down, even if you scraped their database you wouldn't be able to use any of the contact information unless you had the hardware call origination gear + the telecom company still recognized the key.
Add an escrow system on top of this so "phone numbers" can still work - i.e. you can get a random number to give to people that will do a "trust on first use" thing, or "trust till revoked" thing (i.e. no one needs to give a fake number anymore, convention would be they're all fake numbers, but blocking the number would also not actually block anyone you still want to talk to).
EDIT: I've sort of inverted the technical vs practical details here I realize - i.e. if I were implementing this, the public marketing campaign would be "you can have as many phone numbers as you want" but your friends don't have to update if you change it. The UI ideally would be "block this contact and revoke this number?" on a phone which would be nice and unambiguous - possibly with a "send a new number to your friends?" option (in fact this could be 150 new numbers, one per friend since under the hood it would all be public key cryptography). I think people would understand this.
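A toy sketch of the per-friend number idea above (stdlib only; HMAC stands in for real public-key signatures, and every name here is made up for illustration):

```python
import hmac
import hashlib
import secrets

class Line:
    """One person's 'phone line': issues a distinct token ("number") per
    contact and only routes calls whose token verifies and isn't revoked."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # stands in for a private key
        self._revoked = set()

    def number_for(self, friend_id: str) -> str:
        # Deterministic per-friend "number" - blocking one friend's number
        # doesn't affect anyone else's
        return hmac.new(self._key, friend_id.encode(),
                        hashlib.sha256).hexdigest()[:10]

    def revoke(self, friend_id: str) -> None:
        self._revoked.add(friend_id)

    def accepts(self, friend_id: str, token: str) -> bool:
        if friend_id in self._revoked:
            return False
        return hmac.compare_digest(token, self.number_for(friend_id))

alice = Line()
num = alice.number_for("bob")
print(alice.accepts("bob", num))   # True: Bob's number routes
alice.revoke("bob")
print(alice.accepts("bob", num))   # False: blocked; other friends unaffected
```

In a real deployment the tokens would be signed certificates validated by the carrier, as the comment describes; the point of the sketch is just that "150 numbers, one per friend" falls out naturally once numbers are derived rather than allocated.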
What definition of contact details makes them not private?
Contact details (your phone number, email or address) are definitively private information, you should be the one that decides who gets them and who doesn't.
But it's not meant to be shared widely, for most people it's meant to be shared with consideration and/or permission.
Also, it's not just about "a desire not to be interrupted by people you didn't agree to be interrupted by", it's about them not having the data in the first place, for any reason, including tracking of any sort.
Pre-internet/cell phone nearly everyone had their name/phone number/address in phone books. Libraries had tons of phone books. And you could pay for the operator to find/connect you to people as well.
Contact info being private is a relatively recent concept.
I think this could be one of the more legitimate uses of blockchain - distributed communications, contacts, and a refundable pay-per-call system to make spam calling uneconomical. Communication in general does desperately need an overhaul, phones are effectively useless as phones nowadays.
>> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies and call them up on their cellphone.
I find it funny how easy it is to find scammy websites which promise to remove your data (right...), but how hard it is to find the actual marketplaces where people trade this data. It also makes you think about what other systems have similar asymmetric interfaces for the public and the ones in the know (yes, I know there are plenty).
Assuming these marketplaces operate within the bounds of the law, would it break HN’s ToS to post them? I’d be interested in pursuing the same strategy.
And the combination of contacts is also unique enough to identify you, even though it changes over time. Some fuzzy matching, take in another few bits of fingerprint like device type and country, and voila - no advertiser ID required.
PS: smart idea to use it for that purpose. If I failed to get proper service I'd just review bomb the company everywhere and soon enough I'd get a call fixing my problem and asking me to remove them :)
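The re-identification claim above can be sketched with plain set overlap: hash each contact entry, then compare two scraped address books by Jaccard similarity. All numbers and thresholds below are invented for illustration:

```python
import hashlib

def fingerprint(contacts):
    # Hash each normalized contact so raw data never needs to be compared
    return {hashlib.sha256(c.strip().lower().encode()).hexdigest()
            for c in contacts}

def jaccard(a, b):
    # Overlap of two contact sets: 1.0 = identical, 0.0 = disjoint
    return len(a & b) / len(a | b) if a | b else 0.0

# Two scrapes of the "same" person, a year apart: some contacts churned
scrape_2022 = fingerprint(["+15550001", "+15550002", "+15550003", "+15550004"])
scrape_2023 = fingerprint(["+15550002", "+15550003", "+15550004", "+15550099"])
stranger    = fingerprint(["+15557777", "+15558888"])

print(jaccard(scrape_2022, scrape_2023))  # 0.6 - plausibly the same person
print(jaccard(scrape_2022, stranger))     # 0.0 - unrelated
```

Even with yearly number changes, enough of the contact graph survives for a broker to stitch profiles back together, which is the comment's point.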
> there is no sane way to protect your contact details from being sold
I can think of one: make it illegal to buy, sell, or trade customer data. All transfer of data to another party must have a record of being initiated by the individual.
Yeah, I wonder if it might help to create a little newsletter for politicians and regulators. Send emails telling them exactly where they are, what apps they use, and so on. And send them the same information about their children.
Eh, California protects politicians from having their real estate holdings posted online by the government, and afaik, most county recorders have decided it's easier not to put any of it online than to figure out who is a politician and only restrict their information.
Of course, much of it is public information so businesses can go in person, get all the info and then list it.
To use a line sometimes attributed to Beria, “give me the man and I will give you the case against him”. By which I mean that I’m sure they will find some means of making you sorry.
I mostly connect through Signal. I do technically have a phone number that my close friends and family have, but it's a random VoIP number that I usually change every year or so. Surprisingly no one has really cared; I send out a text that I got a new number and that's that.
How? Most of the services I use, from Walgreens to banks to retirement accounts, require a phone number either for 2FA or just to verify that you’re you when signing up. After changing my phone number this year and having to go through the rigamarole for each service, I decided never again.
I've had limited luck feigning ignorance with a bank recently. "I don't know why I'm not getting a code" "No, I don't have another phone number" "I still can't log in to the web portal". They dropped the phone number requirement in favor of sending the OTP to email in the end, but it took way more effort than is reasonable. I tend to ask the CS person to pass along a request for TOTP/authenticator apps, but given that the request for a phone number is likely intentional, I doubt the feedback is getting far. In my naive mind, if enough people do the same, maybe they'll get the message.
Yeah, companies are not dumb, and they know when you have VoIP number vs a full account with an "accepted" company.
I can kind of see the case for not allowing 2FA to a number that could be easier to lose, but that's a weak argument. Of course they don't want someone from .ru to get a US number with all of the baggage that would entail.
There are flaws in their methodology. For half the companies, to change your number from A to B, you first must verify a NONCE with A, then verify a NONCE with B. This just means you have to possess two phone numbers for a period of time — weeks, or in reality months — while you change the long list of services over to the new phone number.
There is a simpler/better way: verify you control your email address before allowing the NONCE with B.
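The flow in these two comments — challenge a channel you already control (old number or, better, email) before anything is sent to the new number — might look like the following sketch (all state, codes, and addresses invented):

```python
import secrets

class NumberChange:
    """Two-step phone-number change: a one-time code on an existing channel
    (old number or email) must verify before the new number gets one."""

    def __init__(self):
        self._pending = {}  # channel -> expected one-time code

    def send_code(self, channel: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
        self._pending[channel] = code
        return code  # in reality delivered via SMS/email, never returned

    def verify(self, channel: str, code: str) -> bool:
        # pop() makes every code single-use
        return self._pending.pop(channel, None) == code

flow = NumberChange()
# Step 1: prove control of the email on file (no second phone required)
c1 = flow.send_code("alice@example.com")
assert flow.verify("alice@example.com", c1)
# Step 2: only now challenge the new number
c2 = flow.send_code("+1-555-0123")
assert flow.verify("+1-555-0123", c2)
```

Gating on email removes the weeks-long window where you must hold both numbers at once.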
Changing your number every year could mitigate this, as it would introduce entropy and stale data into the system. When done at scale, data lifetime would behave similarly to the automatic deletion of messages on WhatsApp - somewhat mimicking an in-person verbal conversation, where only people's memories retain what was said, and even their version changes every time memory recall takes place. Systems that protect privacy already exist in real life; we just do a poor job of reproducing them with tech.
Changing your telephone number every year could be an artificial holiday like Valentine's Day or Halloween. It can be done if people deem it important.
I already do this. It gives me an opportunity to trim the contact list to people I actually talk to regularly, which is who I send the new number to. It also shows me my footprint online, since I have to update the number. I only change it for places I actually use regularly or that are important.
I just block all unscheduled calls and calls from unknown numbers. If there isn't a calendar event and it isn't coming from a known family member or close friend, the call doesn't go through.
I also have multiple cell and virtual numbers and give different ones out to businesses, banks, friends, and family. Businesses that don't need to ship me stuff also get a different address than ones that do.
I don't register to vote anymore because they leak my residential info. When they can agree to stop leaking it, I will participate again.
I have done this as well. I once got a travel insurance claim rejected by some outsourced handler and found out who the CEO of the insurance startup was. I emailed him and magically it got resolved.
If you're US based, there's tons of data broker sites, and you can glue together the information for free as various brokers leak various bits (E.g. Some leak the address, others leak emails, others leak phone numbers). And that's by design for SEO reasons, they want you to be able to google someone with the information you have, so they can sell you the information you don't have.
Some straight up list it all, and instead of selling people's information to other people, they sell removals to the information's owner. Presumably this is a loophole to whatever legislation made most sites have a "Do Not Sell My Info" opt out.
What you do is look up a data broker opt out guide, and that gives you a handy list of data brokers to search. E.g.
Where are you buying this? Might be handy for a job search. Zoominfo basically doesn't have a b2c offering and I am not paying several thousand for an experiment in improving my career
You are very lucky. In China, virtually all websites are required by law to use your phone number (verified by SMS) to register and/or to use. And all numbers must be linked to your ID.
I can make calls from my phone/laptop, using VOIP.
I could receive as well if I wanted to, but I rarely need to be called, so I do not normally keep a number, and I could not be called when out and about anyway, because wifi-only, but you do get an answerphone, so people can leave a message.
I can relatively easily skip trace people, but where are you buying specific people's information? Do you mean you're skip tracing, or buying directly from data brokers?
You may be committing some type of violation of privacy laws if you're contacting them via phone and they're on the do not call list. Because they work at a company does not mean that you and the employee have a business relationship.