FWIW, I'm not necessarily saying that TSA's argument works out in practice. I'm just saying that they can at least try to make it.
Versus, even if we give adtech a similar benefit of the doubt, and accept for the sake of argument that they really could more effectively convince me to buy the world's greatest fidget cube, there's still just no way that such an outcome is a great enough social good to justify the means they used to achieve it.
(Except perhaps from the perspective of the adtech and fidget cube people. And, even then, only if we assume that they have a slightly deranged opinion of the social utility of their fidget toy relative to all other fidget toys. Or that they're solipsists.)
So, this is a weird one. I mean, all of this stuff is pretty much automated; there isn't a team of people at adtech companies going "hmmm, user 621163091 is looking for PCs, serve him an ad for Lenovo." It's a series of complicated ML models with so many inputs and outputs that no human understands them at a level where they could predict which ad a given user gets.
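To make that concrete, the serving path looks less like a human decision and more like a scoring function over feature vectors. Here's a deliberately toy sketch in Python — the feature split, the synthetic data, and the `rank_ads` helper are all hypothetical illustrations, not any real company's stack:

```python
# Toy illustration of automated ad ranking: a click-through-rate model
# scores candidate ads and the highest-scoring one gets served.
# Real systems chain many such models; the shape is the same either way:
# features in, a score out, no human in the loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 20 synthetic binary features: columns 0-14 stand in for user/context
# signals, columns 15-19 for ad attributes. All made up for illustration.
X = rng.integers(0, 2, size=(10_000, 20))
y = rng.integers(0, 2, size=10_000)  # synthetic "clicked" labels

ctr_model = LogisticRegression(max_iter=1000).fit(X, y)

def rank_ads(user_features, candidate_ads):
    """Score each candidate ad for this user and return them best-first.

    user_features: length-15 array; each ad: length-5 array (hypothetical split).
    """
    scores = [
        ctr_model.predict_proba(
            np.concatenate([user_features, ad]).reshape(1, -1)
        )[0, 1]
        for ad in candidate_ads
    ]
    return [ad for _, ad in sorted(zip(scores, candidate_ads), key=lambda t: -t[0])]
```

The point isn't the model class — it's that "which ad user 621163091 sees" falls out of learned weights nobody inspects one user at a time.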
In the above scenario, I'm not sure that I see a violation of privacy, but I suspect you would. If you could enlighten me as to why, that would be super helpful.
Data has a way of being used in ways other than the one for which it was ostensibly originally collected. Especially once you let data scientists get in there and start having fun with it. (Source: I should know, I am one.) Or maybe you've got an unscrupulous employee who uses their access to the data to dox people. (Source: I used to work at a company where that happened.) And since, in a country like the USA, these datasets aren't subject to any particularly effective data protection laws, they're also really easy to sell to just about anyone, or liquidate at a bankruptcy auction, or whatever. The buyer may or may not intend to use the data for better or worse purposes than the original collector did. There's no real way of knowing.
There's also the security question. Data breaches are real and happen all the time. I suspect the crackers' perspective on this, to misappropriate the famous IRA statement, is "remember we only have to be lucky once. You will have to be lucky always."
In short, the mere existence of these pools of data is a threat, not only to people's privacy, but to their personal security. Even when you can't demonstrate that a specific harm has occurred yet. It's like hazardous waste: given enough time, it will leak out and cause damage.
So, I totally agree with everything you've said (I am also a data scientist, so I appreciate that part especially).
OTOH, ads pay for a lot of stuff, so if you could delete the raw data after some short time, and only retain the model used for serving/prediction (a rough sketch of what I mean is below), then I think a lot of those concerns go away, and the whole infrastructure around data is made an awful lot safer.
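One hedged sketch of what that "train, then forget" retention policy could look like — the paths, the 30-day window, and the `train_model` callback are all assumptions for illustration; a real pipeline would need audit logging and verified deletion on top:

```python
# Sketch of a retention policy where the model is the only long-lived
# artifact: retrain on recent raw data, persist the model, then purge
# every raw log file older than the retention window.
import pickle
import time
from pathlib import Path

RAW_LOG_DIR = Path("/data/raw_events")      # hypothetical location of raw event logs
MODEL_PATH = Path("/models/ctr_model.pkl")  # hypothetical model artifact
RETENTION_SECONDS = 30 * 24 * 3600          # e.g. 30 days; pick the shortest window that works


def retrain_and_purge(train_model):
    """Fit a fresh model on the recent raw data, keep only the model,
    and delete raw logs that have aged out of the retention window."""
    model = train_model(RAW_LOG_DIR)             # caller supplies the actual training code
    MODEL_PATH.write_bytes(pickle.dumps(model))  # the model is what we retain

    cutoff = time.time() - RETENTION_SECONDS
    for log_file in RAW_LOG_DIR.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()                    # raw data past the window goes away
```

The attack surface shrinks accordingly: a breach or an unscrupulous employee gets model weights and at most a few weeks of raw events, not a years-deep archive.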
Lots of your other points only really apply to the US (the cavalier attitude towards data and privacy, especially), so I wonder if this is something that would work better in the EU, because you do have much stricter laws around the use of personal data.