An open letter to FB, Twitter, Instagram regarding algorithms and my son's birth (twitter.com/gbrockell)
790 points by waffle_ss on Dec 12, 2018 | 337 comments



This seems to happen a lot when one's experience is statistically unlikely, or when it doesn't neatly mesh with a commercial opportunity.

For instance, I am a transgender woman and I get THE WEIRDEST ads. Makes sense, since an algorithm meshing together my engagement histories from ten years of social platforms must see something quite strange.

The only appropriately targeted advertisements I get are from academics interested in studying me. Like: "are you a trans woman? Take this study and win a gift card."

When platforms attempt to monetize me, they wind up pushing the Brooks Brothers dress-shirt deals and so on that I used to gobble up in my prior life.

I've taken my business local as a result, but the algorithmic rejection of my reality on these platforms takes a pretty consistent toll. I don't know what the solution is, but wanted to underscore that this is a broader problem.


I am coming to the conclusion that the only way to beat the system is to not play in the system. Or play as little as possible. I no longer go on FB, Instagram or Amazon. I do my best to just be anonymous. I don't "need" to hang out on social media. I lived quite happily before these services even existed, and will continue without them. We have control. We can resist their shiny toys. If we are waiting for THEM (or politicians) to fix these problems, we will die of old age waiting. It's antithetical to their business and self-interests. The only way to make them less powerful is to not give them your data. That is the source of their power. Information.

  Greetings Professor Falken

  Hello

  A strange game.
  The only winning move
  is not to play.
  (WarGames, 1983)


This would be a solution if the problem were social media. The problem here is the intrusiveness of advertisement. Personally targeted advertisement exists on the non-social web, on the phone, in your mailbox, at your door. Group-targeted[#] and still intrusive advertisement exists on public transit, on the radio, in newspapers, in the streets, on most "things" in your house (as branding), on your clothes. I think by now it's clear that we cannot realistically avoid substantial amounts of ads simply by changing our personal behavior. Just as with ecology, it is a common mistake to depoliticize the problem by advocating only individual actions. Defending oneself against ads has to be done in whatever community you are part of, using the available levers of power (which were put there for exactly that purpose). This can be at the state level, but also as a union or in a neighborhood. Advertisement should be (more) socially controlled and regulated.

Advertisement is inherently intrusive; its goal is to make you listen to some message you didn't ask for. It is strictly one-way communication, and as such it's deeply authoritarian: tptb have the right to talk, you are coerced to listen (it goes against fair access to public speech, which goes against the ability to exercise your power on society as a citizen). It is based on ethically questionable methods: psychological manipulation and information gathering; for some perspective, when performed by an individual rather than a corporation, we call this behavior harassment or psychological abuse. Yeah sure, you might be stalked by a nice guy with whom you could be friends, just like you might see an ad for an interesting product, but no, stalking is oppressive even if done by a nice guy, and harassment is a tiny fraction of the possibilities you have to communicate. You want to get a new fridge? Ads will not help you, while an independent comparison or buyer's guide will. You cycle a lot and would like to keep up to date with related products? Subscribe to the newsletter of some cycling community. We don't need ads for anything; they only solve some problems as byproducts, and we can always solve those problems more efficiently and less intrusively.

F ALL ADS

Note: I didn't properly define "ads" here, but broadening from pure "commercials", we could make mostly the same arguments against all sorts of "public relations", like corporate communication or modern vote-based politics.

[#] there is no such thing as "non-targeted advertisement"


In that case the solution would seem to be more trivial: block the ads. Ad blocking tools are readily available.

About the only issue you may still run into is shopping sites, where they may offer "related" products. Amazon is particularly brutal with this.


The entire modern economy seems to exist for, because of, and due to ads.


I think you're taking Joshua's quote out of context: The WOPR learns about futility in game theory, not anonymity through exclusion.


I think OP is just reapplying it to another context where it works just as well.


I'd say the only non-losing move is not to play.


Misconstruing the original context is how we got into this fake news problem in the first place.


What "fake news problem?" And who is "we" here?


It's from a movie.


While I don't engage much on social media, I've found it better to randomly click an ad. I'm ruining their profile one click at a time.

The ads I see across platforms and websites are so hilariously dumb and irrelevant now. You are going to be tracked, might as well ruin the analytics.


That might be a fun thing to do. Make Google and the others pay through the nose by actually clicking as many ads as you can instead of avoiding them. Just have everybody be their own click farm. We could have an international "click to support business day" (hah) where for 24 hours everybody around the globe just clicks ads. That's what they want us to do, isn't it? Maybe we should give it to them.


If you like screwing with advertisers, I recommend the browser extension AdNauseam

https://adnauseam.io/

Basically, it hides every ad and silently clicks it in the background. It even has an archive that shows you the ads it has hidden and clicked, so you can check in to see what advertisers think you are interested in, and it tries to eyeball what those clicks cost advertisers. Lots of fun to load the worst ad-ridden offenders' websites and get a smug sense of satisfaction that you've wasted advertisers' money.


You probably aren't. That's highly suspicious behavior that isn't hard for ad networks like Google to detect, and they'll just refund any costs as fraudulent activity.


Yeah, I wondered why it wouldn't just choose a random number of currently visible ads to click, including zero, at random times. Clicking all ads all the time would definitely be conspicuous.

Then I found this on their FAQ (https://github.com/dhowe/AdNauseam/wiki/FAQ#what-is-the-clic...):

> What is the "click-probability" setting? This setting lets you control the likelihood that each discovered Ad will actually be clicked by AdNauseam. 'Always' means that every Ad discovered will be clicked, while 'Rarely' means that very few ads will be clicked(10%).
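As a rough sketch of what a setting like that implies (purely hypothetical Python, not AdNauseam's actual code; the names and the 10% figure just mirror the FAQ quote above), the per-ad decision is essentially a weighted coin flip:

  import random

  CLICK_PROBABILITY = 0.10  # "Rarely" per the FAQ quoted above

  def should_click(ad_id: str) -> bool:
      # Independent weighted coin flip per discovered ad. Note there is no
      # timing jitter here, which is one reason uniform clicking patterns
      # can still look robotic to an ad network.
      return random.random() < CLICK_PROBABILITY

  discovered_ads = ["ad-123", "ad-456", "ad-789"]  # placeholder IDs
  clicked = [ad for ad in discovered_ads if should_click(ad)]

Even at 10%, clicking with a fixed probability and no variation in timing is exactly the kind of regular signal that fraud detection can pick up on.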


I wasn't meaning to use a plugin, or to necessarily target Google specifically. Everybody manually click on ALL ads they see on ALL platforms. Make it so the "ad and tracking system" model just doesn't work and becomes too expensive for them to justify. They claim all of this capturing of our data is to "provide better, more relevant, personalized targeted ads." They charge companies for those ads. We can basically make them ineffective by overwhelming them. All of them. I see nothing wrong, legally or morally, with everybody clicking on all of the links that they are putting before us. That's what they are there for, so let's utilize them en masse. If everybody did that on ALL platforms, what would happen? Where would they go? Advertisers would quit spending as much on these platforms, which is basically the entire revenue model of the platforms. The platforms would be forced to shift their behavior once it becomes literally ineffective for them to do business the way they have been and profits start to shrink, wouldn't they?

Another thing we can do: when you do a search, perform another search for the opposite. Or perform two irrelevant, random searches on things you literally don't care about or that have nothing to do with your life. If we give them more crap than actual data, their algorithms would probably become ineffective.


Maybe not really click on all the ads, though. Some of them can be shady as hell, or they open a new site with even more ads, and you'll never be done! Clicking on ads opened by other ads, I find that the "deeper" you go, the worse it gets. By which I mean my personal sense of how likely whatever's being offered is to be a scam, but also how much they mess with my browser: pop-unders and other weird behaviour.

And most of the time I have uBlock enabled. I'm not entirely sure whether messing with the tracking system is worth having to browse without uBlock.


Anecdote: I've hopped on a pot-committed, 5+ device, aggressively tuned AdNauseam bandwagon several times now, in one-to-two-month spans spread over the past several years. I usually forget/ignore the increased background traffic for a short time, then notice it during a moment of inconvenient network congestion and flip back to ad-block, Pi-hole, Blokada for mobile VPN, etc. I have repeatedly observed a very steep increase in spam phone calls for myself (I take great effort to minimize the footprint of my personal emails across random DBs) and email/social media spam for both other household members and office colleagues, starting shortly after these heavy AdNauseam binges begin. Lots of uncommon or specialized, high-margin, high-intent customer markets and service spam floods in. Never understood how they can stay so persistent. They often seem completely unfazed, as if Google's prompt and voluntary fraud detection, disclosure and refund issuance can go entirely unnoticed. /s


It gets you banned rather quickly as a bot from most ad networks. Win-win.


But I suppose getting banned doesn't mean they stop serving you ads, right? It's more like a shadow-ban in that sense. There's still a win in that they'll have to invalidate a whole trail of tracking data about you.


Doesn't clicking on ads reward google, instead of punishing them?


The budget from advertisers has already gone to Google - if such a tactic became popular, advertisers would get lower returns (bounce rates would be way higher than now) and would move their budgets elsewhere, harming Google's business.


Here in the real world where that’s not gonna happen, they do benefit though. Adblockers getting popular makes sense, but some sort of ‘screw with google’ plugin (which already exists) is too niche.


> move their budgets elsewhere

This assumes Google isn't effectively a monopoly. Is that assumption valid?


Google doesn't bill the campaign upfront. It bills every time a threshold is crossed - for example, whenever you spend $500.


This is fascinating. It seems like it would be pretty easy to write a chrome app that runs a window in the background that interacts with every. single. advertisement.

Unfortunately, the ones paying the toll would be the businesses running CPC campaigns, not Google.

You'd have to run it at such a massive scale that Google wouldn't be able to collect, because advertisers could show that something was wrong with Google's model.


Um. Google would make money if you did that. At least short term. I guess if enough people joined in then advertisers would move to FB and google would lose money.


I don't care, I ruined their analytics and more importantly my online profile. The cross platform tracking exists, I'm going to fuck it over regarding my identity. Plus at an incredibly small scale I make it less valuable for advertisers.


Or when you're in a hurry, just click the least relevant ones.


I'm sure that by now the ad networks must have figured out that I compulsively lie to them and take that into consideration. Youtube just can't believe that I've mysteriously never heard of a single product they ask about in their surveys... "Wow, this guy has never heard of Starbucks either?!"

Lying to CAPTCHA is getting tougher these days - it takes me a few minutes now before they let me through even though I've identified a plain piece of road as a sign. It's a pain to sit through until they let me go, but I'M NOT YOUR FREE TRAINING DATA!


That is part of why I switched away from Google & containerized YouTube in Firefox; it's really easy to end up in CAPTCHA hell, it seems. Qwant and DuckDuckGo are sufficient, and actually better when searching specific item names or part numbers. It's pretty impressive how shit Google's results have become of late for this use case, IMO.


The only reason nowadays to use Google search is verbatim error messages for exotic software products.

There it still seems to outsmart DuckDuckGo. Other than that, I have zero reason to look back. As a matter of fact, that little box containing the basics for questions like "shell date operations" is usually sufficient, and if not you have a link to (usually) Stack Overflow. I really like the concept.


On a side note, why does Google offer a CAPTCHA service to the public, then use an incredibly shitty one for their own services? It takes me 4-5 tries to decipher the random squiggles on their sites. I don't mind decoding a bit of text or an address, but lately even their public CAPTCHA has gone rapidly downhill.


I was recently thinking that I'm not entirely sure they would still use those photo CAPTCHAs for ML training data any more. The classification "puzzle" that is offered seems like something deep learning is already capable of today. It seems like such "toy data", considering the images and the task given.

I could be completely wrong though. Is there anyone up to date on modern ML capabilities who can comment on whether this data is useful, and what for? I used to think it made sense, especially with reCAPTCHA (digitizing books), but it just doesn't seem that valuable any more.


Getting the CAPTCHA right on Google is actually difficult. For some reason they use a different one than they provide to third parties.

As for surveys/reviews, I routinely one-star apps that nag me to leave a review. Some apps have two options: "review now" or "remind me later". Those apps get reviewed: "App is great, but won't stop fucking asking me for a review even though I already have."


I’ve been doing the same thing. Originally blindly, now I can actually see my ad interests buried in the settings under account data and all the way at the bottom of the list... but it’s there. And oh how it’s funny to see what changes the list. What’s terrifying though, is the things on the list that I have not engaged in online that only could have been added as result of verbal conversations with people who have not set up even a modicum of privacy on their devices. Specifically flag words I use in conversations for just such a purpose. As a result, I’m collecting evidence for a massive privacy lawsuit that undoubtly will occur sometime in the future which I’m eager to take part and assist in.

That aside and in the interim, a small thing we can all do if you’re concerned about privacy is to inform people when discussing such matters where to go on their devices to learn about what is known about them.

No other conversation has scared indivuals I’ve met more than showing them their ad interests.


Also, don't feed the machine. (Or feed it just the basics.)

I just post irrelevant stuff to social media, keep likes under control, run an ad blocker/tracking blocker, and I'm wary of what I type into search boxes (Google or others; sometimes DDG doesn't give a good answer, so I go back to Google).


Do you use a smartphone, or a dumbphone? Do you use credit cards (including debit cards and every other form of electronic payment)? Do you use a browser, even one loaded with anti-spying plugins?

If any of the answers is yes then you are being watched and profiled. And of course those companies you actively avoid would simply buy raw data or "analytics" from the companies you still use.


There is a big difference between letting these companies control your mental well-being, as in the letter, and letting them know intimate details about you. The first is well within your control: don't use Facebook/Instagram/etc., and their algorithms will not affect your mood, because you won't interact with them at all.


Of course I know that it isn't monetized and I _believe_ it isn't tracked, but you do know that HN is a form of social media, right?

My point being that it's harder to escape than it looks.


In the case of the letter author, I suspect the problem isn't just that stillbirths are rare, but that you can't really monetize a stillbirth. Better to assume the baby was born successfully than miss out on the opportunity to advertise to a new mom.

I'm sure no one in the room of marketing execs has considered personal consequences like this one.

This type of thing is why I do my best to enable privacy settings and disable personalized ads. I don't actually care whether Google knows what I ate for dinner last night, but I don't want to constantly see Google's fuzzy judgements of my humanity as I browse the web.


Given the amount of ads for fertility doctors I saw after we lost our son, and the ridiculous claims they were making, I think it's safe to say that stillbirths are both lucrative and certainly advertised to.

Honestly, I'm not sure there's anything to do about it. As awful as losing our son was, I'm not sure taking away others' rights is really an appropriate response. While Facebook would be wise to take these things into account to build social trust, advertisers themselves are always going to want to advertise their products.


Sorry to hear of your loss, it is a horrible thing to contemplate. Hope you (will) recover(ed) in time; all the best.


>>> I'm sure no one in the room of marketing execs has considered personal consequences like this one.

Your first sentence was more correct (I suspect): they thought about it and purposely forgot about it 'cos there was no money to be made there.


Are people that callous?

I work for a large corporation, and I can honestly say that I think no one would make such a decision. Perhaps I am naive.


I doubt people hear this story and say 'Meh, fuck em'.

However, what about (in thought, not out loud): 'False positives are hard to prevent, and special cases like these are very rare. Instrumenting our platform with exceptions like these is a massive undertaking for which I don't have the political capital. Let's not take action now.'

From outside the company, those are nearly the same reaction. From inside the head of the thinker, they are very different.


This is suspiciously reminiscent of the adage, "All it takes for evil to triumph is that good people do nothing."


They'd just think, "all measurement has false positives", and wouldn't think about it beyond checking that there aren't too many of them.


I can definitely imagine certain execs being single-minded enough to think "Well if a parent has suffered some tragedy, they're not going to click our ad anyway, so it's costing us nothing to advertise to them"


The execs in question are most probably all male and have never thought about stillbirths.


Men lose babies too.


> but that you can't really monetize a stillbirth

Serving up ads that are guaranteed not to apply is a negative, because it means you're giving up the opportunity to serve ads that might actually make money.


Are advertisers going to notice that 1% of people in the 'Just had a baby' group actually had stillbirths? I'd guess not, especially because advertisers don't get to know who they advertised to.

Is FB or Google going to change this to improve their conversion rate by ~1%? Probably not, 1% is not a lot. In the end, as long as the advertising platform gets a decent conversion rate on 'Just had a baby' for the advertisers, everyone is happy.

Heck, advertisers might prefer 10% precision and 90% recall over 50% precision and 80% recall. If pushing that recall up a bit yields a few more customers, the extra cost of showing the ad to a lot more non-viable people might just work out.
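To make that trade-off concrete with invented numbers (a hypothetical back-of-the-envelope sketch, not real ad economics):

  # Suppose 1,000 people actually just had a baby, a conversion is worth $50,
  # 2% of real new parents who see the ad convert, and an impression costs $0.01.
  new_parents = 1000
  value_per_conversion = 50.0
  conversion_rate = 0.02
  cost_per_impression = 0.01

  def expected_profit(precision: float, recall: float) -> float:
      reached = new_parents * recall          # real new parents who see the ad
      impressions = reached / precision       # total people who see the ad
      revenue = reached * conversion_rate * value_per_conversion
      return revenue - impressions * cost_per_impression

  print(round(expected_profit(precision=0.10, recall=0.90), 2))  # 810.0
  print(round(expected_profit(precision=0.50, recall=0.80), 2))  # 784.0

With those made-up numbers, the sloppier, higher-recall targeting wins even though 90% of the people it hits are not in the segment at all.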


Serving up ads that are known to be offensive to the user isn't just a lost opportunity to make money, it's also going to encourage that user to start blocking ads, which the advertisers and Google/Facebook don't want.


Stillbirths aren't rare. That makes this all that much worse.

> Stillbirth affects about 1% of all pregnancies, and each year about 24,000 babies are stillborn in the United States. That is about the same number of babies that die during the first year of life and it is more than 10 times as many deaths as the number that occur from Sudden Infant Death Syndrome (SIDS)

https://www.cdc.gov/ncbddd/stillbirth/facts.html


1% is in the same ballpark as the percentage of transgender people in the US (~0.6%). So if we're saying one is rare, so is the other.

I consider 1% to be a pretty low percentage, myself.


1% is nowhere near "rare". It's a low percentage, but still a frequent enough occurrence.


>I suspect the problem isn't just that stillbirths are rare, but that you can't really monetize a stillbirth.

And you can monetize transgenderism, so unless the cost of this letter starts to get very expensive, OP's ad targeting will get better well before that of mothers of stillborn children.


Can confirm. I actually went and specifically informed a couple of ad-targeting networks that I'm a woman, in the cases where they allow that, around when I was updating many other people and institutions. That helped on some fronts.

So now some of the ad targeting networks have me targeted as "woman who's a successful professional and knows what she's doing in life", which, uh, is correct up until that last part.

The robots are just not expecting a woman in her thirties to still be baffled and overwhelmed by fashion and looking for the basics. I get my most useful recommendations by word of mouth, and yay, that sounds very nice and authentic, but it's a slow process.

So this is a different problem than the original article: we could be targeted _better_ and both we and the advertisers would be happy, for at least a moment.

But. We're trusting the advertisers to use that information responsibly. What if the kind of people who make those anti-trans reply videos on YouTube start taking out deliberately divisive ads, targeted at the trans community? On balance, I think it might be better for the ad networks to not quite understand.


It's really shocking how terrible the selection is for professional clothing for women, it's almost like clothing manufacturers have gotten together to ensure everything fits poorly and doesn't coordinate with anything in order to drive up sales. It seems they can only get it right when it comes to designing yoga pants.


This happens all the time to me. And I'm the most generic type you can find, if you are attempting to classify humans: middle class tech savvy healthy white male 30-39.

All it takes is a really really simple wrench in their system: my wife uses my computer (oh my god can you imagine that?!)


All it takes is a really really simple wrench in their system: my wife uses my computer (oh my god can you imagine that?!)

Yep. Remarketing is a great way to ruin Christmas shopping surprises on a shared IP address.


It doesn't work by IP address, but by cookie.


Very interesting perspective. I think stories like yours are why I believe in the ideas behind the diversity effort going on in tech right now. As a black male I've definitely interacted with these platforms and just have been puzzled by the outcomes.

On the other hand I wonder, would you be okay with these ad networks being able to identify you as transgender? And then tailoring ads towards you? I'm sure there are things that are still interesting to you post-transition.


I wonder if it makes sense for ad platforms to enable a "clear history" option just like our browsers do? That way you can say very clearly "the current model is wrong, please restart and try building a new one for me". That might also help this particular case: rebuild the model from scratch including only content going forward rather than backward. Should eliminate the baby ads effectively, right?


Google at least lets you specifically go into ad settings and check/uncheck specific categories of things you get ads about (that's an oversimplification of the process behind it).

I'm not sure how, but at one point I started getting a few ads in Spanish about toothpaste. I had also just found out about the ad settings page, and after taking a minute to remove the handful of wrong interests, the incorrectly targeted ads stopped.

Edit: Facebook allows it as well, here are the links for anyone that wants to do it now.

https://adssettings.google.com/authenticated

https://www.facebook.com/ads/preferences/


> Google at least lets you specifically go into ad settings and ...

... improve the tracking accuracy of your data in exchange for reducing your annoyance with ads


I mean right at the top of the page is a single switch that lets you turn off all targeted advertising if that's what you want.

But personally I like targeted advertising. Ads let me use services for no monetary payment while still providing income for the content creators, and I like seeing ads that are targeted towards my interests more than ads that aren't. And while it's not ideal that you might have to go in and change what your preferences are in cases where they get it wrong, I much prefer it over the alternative of all ads being "wrong" for me.


That'd work in this case, but would destroy lots of value when people like me would go in every few months and reset all my ad profiles. Then again, the total percentage of people who would care enough to go in and do that is probably pretty small.


I'm going to start sounding like I work for them, but Google already allows you to disable targeted advertising entirely, and Facebook at least lets you disable targeting for the vast majority of things (the major exceptions being age, gender, location, and the content of the page, if I recall correctly).


I would immediately try to figure out a way to automatically clear my advertising history as often as possible. So would other people; at least one of us would build an easy-to-use tool for others to do the same.


Nice. And I could click it every 60 seconds.


"when one's experience is statistically unlikely"

It's important to note that this is not an especially unlikely event. Of my 10 closest female friends who eventually had children, I know that at least 5 had a miscarriage at some point in their lives. I have a friend who had several miscarriages before a pregnancy was successfully carried to term.

Admittedly, if a woman makes it past the first few weeks, her chances improve:

"Once a pregnancy makes it to 6 weeks and has confirmed viability with a heartbeat, the risk of having a miscarriage drops to 10 percent."

https://www.healthline.com/health/pregnancy/miscarriage-rate...


> the algorithmic rejection of my reality on these platforms takes a pretty consistent toll

That sucks. :/

https://someonewhocares.org/hosts/ and https://github.com/StevenBlack/hosts/blob/10bba14590738c445c...


uBlock Origin is a much safer and more effective way to deal with filtering than hosts files; it also supports the hosts format and has its own very large lists built in.
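For anyone unfamiliar with the format those lists use, each entry is just a null-route address followed by a hostname; the domains below are made up for illustration, not taken from either list:

  # /etc/hosts-style blocklist entries (0.0.0.0 points the name at nothing routable)
  0.0.0.0 ads.example.com
  0.0.0.0 tracker.example.net

uBlock Origin can import lists in this hosts format directly, alongside its own filter syntax.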


USA stillbirths occur in about 1% of pregnancies. It may be unlikely for any particular woman. But it is more common than, say, people in the market for a truck.

There's no reason at all, except laziness on the part of marketers, not to handle this case. Marketers who hope for a long-term relationship with young families might be really smart to avoid alienating them by pitching snuggly carriers to women who've just suffered a miscarriage.


Posting in case this may be of use. I've been able to make all the social networks serve me more or less irrelevant stuff, or not serve me stuff at all. Here's how:

* I use Facebook in Safari on iOS with ad blockers enabled

* I use the YouTube app logged out

* On Twitter I generally visit people's profiles directly, or use a third-party app

Instagram is the only one I can't fix, as I use the app. Maybe if I did more in the web browser, but then I'd lose stories.

Anyway, Instagram consistently serves up ads that are in the ballpark of relevant to me. But other than that, the ads I see are random generic ads for stuff like cars and laundry detergent. The kind of mass market ads you'd see on TV. I get ads in other languages when I travel.

It's nice, I can mostly ignore them. Of course, key to this is that I prefer to ignore algorithms on things like youtube. This method won't work if you need that.

But if you don't mind algorithms not being personalized, you can make the ads irrelevant to you too.


Instagram is view only in the browser as far as I can tell. No way to post images.


Yeah, that's the one I use as an app. And that's why I actually see somewhat targeted ads there.

I might actually switch to viewing it in browser more. But I think you lose the messaging. And the ability to post as well, but I don't post as much as I watch.


If you use the Chrome inspector to change your user agent to a phone, you can use it to post.


> This seems to happen a lot when one's experience is statistically unlikely,

Aren't stillbirths something like 1%? That's not super common, but "unlikely" isn't quite the right word either. You're unlikely to die in an airplane crash.


There are over 3 million miscarriages a year in the U.S. Often it's the worst event in your life.

Stillbirths are about 1 in 45 births.

This reminds me of when engineers were designing the first airbags. They didn't think about the size difference between women and men. Diversity matters because it gives you different perspectives on life. I suspect that if the ratio of men to women in the tech industry were reversed, this wouldn't be an issue, because it would have come up in discussions.


Last time I checked in on my men's stillbirth and miscarriage group, 100 percent of stillborn children had a father.

Trying to frame child loss as something that only happens to women is not only insulting to the equal number of men who have lost children, but also furthers the notion that men don't care, when most research in the area shows they care pretty much the same as the mother. Comments that further this notion are what lead to the disproportionately high alcoholism and drug abuse rate among dads of loss.


[flagged]


Where did she say that her suboptimal experience was exclusive to her?


She said it is a "rejection of her reality", which is complete hyperbole. I can't find the original comment to check the exact words she used.


Yes, her reality is that she is no longer the person whom the ads continue to target her as. She claims that the level of misperception she's seeing from her ads is higher than normal, and provides examples of such. She's not claiming that non-transgendered persons never see mistargeted ads.


How does she know what is "normal"? That was my point - nobody is "rejecting her reality". There are simply some ads that don't appeal to her, as happens to everybody else, too.

How can she blame the algorithms for rejecting her, if her past actually contains things that the algorithm happens to pick up on?

Since everybody gets ads that don't appeal to them, clearly it is just a case of algorithms not being smart enough. It's like saying the sky rejects her because it isn't pink. It is blue for technical reasons, not to spite her identity.


There is no malice on the part of the algorithms, but I'm sure you can see how it would appear that way and make someone feel bad. People unintentionally hurt others' feelings all the time, but their not having malice does not invalidate the way they made the other person feel.

Would you say the same about the situation in the linked article?


"It would appear that way" - that was my whole point, that the perception is presumably wrong here. There is no rejection, and nobody built anti-trans parameters into their ad server by intent. Sure, the OP feels that way - but feelings can be changed, in theory. They should be amendable by rational thought. Also I suspect it can become a habit to frame everything in terms of "rejection of my identity". I am not blaming that person, just pointing out that it may not be a healthy habit.

Maybe, to sum it up in a more friendly way, this quote is a good summary: "never attribute to malice what you can attribute to incompetence" (don't know who said it).

As for the miscarriage, I likewise wouldn't blame the algorithms or companies. However, I would think that most of them would be happy to learn and improve their algorithms. So pointing out such a flaw should hopefully be received well.


You can click "parent" until you arrive at the comment you're looking for.


We're working on a chrome extension that blocks the existing ads on a page, and replaces them with privacy-safe ads that reward you with our crypto token for each ad you see.

Chrome Extension: https://chrome.google.com/webstore/detail/clearcoin-the-ad-b...

Code: https://github.com/clearcoin/extension


There are methods to hide ads.

This is just an ad?


Instagram is not a person. Don't ask it to be decent.

Those ads are not your friends. Don't expect them to treat you like they care.

You're not sharing your information with a human. You're sharing it with a business that is tuned to maximize profit. The ROI on dealing with stillbirths in advertising is probably negative.

They don't care about you. You've made the decision to let a business deep into your personal life, a space once reserved only for loved ones. These are the consequences.


This is mostly very true, but I think the author understands this better than you're implying. The point of this tweet is to influence public opinion. This:

>The ROI on dealing with stillbirths in advertising is probably negative.

would hopefully no longer be true: due to negative PR, companies would take action out of pure self-interest, and that would stop this situation from happening to people in the future.

There's of course still the broader problem of companies ravenously and irresponsibly gathering as much data on individuals as possible. To me any increase in a company's liability for the information they're hoarding, even in the form of righteous outrage that on its surface seems to anthropomorphize them, is a step in the right direction.


>This is mostly very true, but I think the author understands this better than you're implying. The point of this tweet is to influence public opinion.

I suspect your parent understands this, and you're not understanding his/her perspective, which is that the general public needs to have a better understanding of how the Internet and monetization works. And that it is problematic to expect almost anything of value from a company you are not paying money to.

The key takeaway in the comment is something everyone needs to understand:

>You've made the decision to let a business deep into your personal life, a space once reserved only for loved ones. These are the consequences.

Yes, the tweeter may be able to influence the company's decision in some minor way, restricted to stillbirths. And it will solve only one small problem. You'll still get plenty of these problematic ads in bad circumstances.


I don't think the author of this tweet is trying to educate the general public. I simply think she is trying to engage companies so that their ad targeting accounts for her case and someone else in the same situation doesn't face what she faced.


> would hopefully no longer be true: due to negative PR, companies would take action out of pure self-interest, and that would stop this situation from happening to people in the future.

Company X spends Y months advertising baby items to you; then at month Z, through no fault of your own, a stillbirth occurs. Company X continues to advertise baby items for some time frame A until the algorithm realizes you are not interested. Unless there's a market for advertising services/products for stillbirths (they'd need better AdWords), there's no loss to the company being paid for the advertisement.

Although extremely sad and very common (not so much in a first-world country), company X has no reason to stop. It's the advertisers who are "at fault", and even then, who are they to care about someone else's feelings? Maybe people using these "free" services need better coping skills.


Any person can ask others to change their behavior.

And if they get the attention of the company, the company may change its behaviors.

Whether out of sympathy or self-interest is kind of immaterial.

As users, we'd like the companies to change their behavior.

Companies rely on users' good will. Once they depend on users for revenue, they need to act to a certain degree as though they care about those users. Or users may leave. These are the consequences.


They don't rely on goodwill at all. Companies rely on you spending money. The odds of any one person having an attention span long enough to hold a grudge that might have an impact on their wallet are so vanishingly small that they really only need to care about milking everyone for all they are worth. I prefer this explanation, because you can see that it meshes very well with what happens in practice.


> They don't rely on goodwill at all.

Wow, of course they do.

> The odds of any one person having an attention span long enough to hold a grudge

So... that's why people draw attention to issues like this. To try to organize users just enough to encourage a company to change its behavior.

Or, failing that, to get government to lean on companies to get them to change their behavior.

> I prefer this explanation, because you can see that it meshes very well with what happens in practice.

Companies have been successfully boycotted before. Advertisers leave when consumers get mad enough. Maybe not all of them. But at some point, the platform does actually make some changes, sometimes.


> Those ads are not your friends. Don't expect them to treat you like they care.

Why? Why don't they care? Is it technically impossible, or what? You know, they could try to care and try to become a friend; maybe ads would work better in that case. What do you think?

What you are doing in your comment is just stating facts that everyone already knows. For what reason are you doing it? What are you trying to achieve? Are you showing us how cynical you are, that you've grown enough to have lost your naivety completely? Is that the case, or are there other reasons behind your message?

I can say what Gillian is trying to achieve, and I believe she is succeeding: she will make the system more human-centered, more friendly to humans. I can understand her, but I cannot understand you.


She will not succeed in making the system more human centered or friendly. She will succeed in changing the ROI of dealing with advertising to mothers of stillborn children, as another poster stated, by creating negative PR. Those are not equivalent.

You are exactly the person I'm trying to influence by reminding you that these systems care about revenue, not you or your well being.

These facts may be well known, but you are certainly not considering them when choosing your language. You are letting an amoral entity into your very personal life for dubious reasons and expecting it to be "friendly to humans," when it is designed to be friendly to shareholders' wallets. It will be exactly as "friendly" as it needs to be, no more. And if shareholders decide the cost of being friendly is greater than the benefit, then the "friendly" feature will be turned off.


These systems are sociopathic. Full stop. It is in their nature.


It'd be depressing if they tried to sell you a funeral service after you let them know of your misfortune... Just avoiding this would be a big win.


They did end up showing her an ad for adoption.


> You are exactly the person I'm trying to influence by reminding you that these systems care about revenue, not you or your well being.

There is no need to remind me of that. I know it. But I know something more. These systems are part of the more complex systems of society and of humanity as a whole, and these systems are adaptive and reflective. All of them are adaptive, the small ones as well as the large ones. They probably would not change just because some woman said they should, but such a system will change itself if there are prospects of greater earnings or threats of losses. PR is an important thing that influences earnings and losses, so it is improbable that they will just ignore that woman. It is more likely that they ignore all the other similar cases and react just to stillbirths, but in the worst scenario they would need a few more open letters, and then they'll search for all cases like that preemptively, not waiting for an open letter.

Look at Google: it learned to pay attention to AI biases preemptively. Yes, we can argue that Google doesn't respect anything except money, but that would be arguing about the hidden states of a system like Google. At the same time, "paying attention to AI biases" is behaviour you can observe. Shouldn't we prefer to speak about observable phenomena, not about hidden ones?

> These facts may be well known, but you are certainly not considering them when choosing your language. You are letting an amoral entity into your very personal life for dubious reasons and expecting it do be "friendly to humans,"...

Please stop making ungrounded assumptions about me. I use ad blockers aggressively and see no ads. If some ads slip through my defences, I go to the trouble of creating custom filters to block them anyway. I block even more than just ads, annoying GIFs for example (I hate them for moving and disturbing my attention). I clear my cookies immediately after a tab closes. I'm not using proxies or a VPN, but I'm behind NAT and, I believe, there are at least a few hundred others with whom I share an IP. Moreover, this IP changes from time to time. I use ~5 browser profiles with different sets of addons. I'm paranoid about tracking, and any ad I can see makes me nervous. So I beg you, do not try to diagnose me from the single fact that I used the word "friend" while speaking about an advertising company.

And yes, I'm not considering my language, because I just do not look at reality through the lens of morality. Morality is not some kind of shiny fundamental law; morality is a dirty tool in the hands of society. Or in the hands of some groups. It is a really dirty, disgusting tool; each time you hear the word "morality" you should be alert, because the most common reason to speak about morality is to convince others that something amoral is moral, that black is white. So I do not think about morality at all, and when I'm speaking about someone I'm speaking about behaviour, or about the hidden reasons that lead to observed behaviour. When I'm speaking about "good" or "evil" I always point out for whom it is good and for whom it is evil, because there is no abstract "good" or "evil"; it is always dependent on point of view. And this case is not an exception. And yes, "friendly" doesn't mean "moral" to me, though it doesn't mean "immoral" either.

> It will be exactly as "friendly" as it needs to be, no more. And, if shareholders decide the cost of being friendly is greater than the benefit, then the "friendly" feature will be turned off.

That is precisely what I meant when I spoke about a corporation trying to be friendly. I just omitted the obvious part about money.


>And yes, I'm not considering my language

Then we're done here.


>she will make the system more human-centered, more friendly to humans. I can understand her, but I cannot understand you.

Any marginal increase in human friendliness this may cause is negligible, because advertising in general is an inherently "thing"-oriented industry worth hundreds of billions of dollars.

It is at odds with human satisfaction and survival.

I am reminded of an MLK quote:

"We must rapidly begin the shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, extreme materialism and militarism are incapable of being conquered".

Advertising is the driving force behind the current state of insatiable materialism/consumerism that is leading to the destruction of our ecosystem.

With all that being said, "advertising" can indeed be used for "good", human oriented goals, but as it stands it's mostly being used for exploitation.


>What you are doing in your comment is just stating facts that everyone already knows.

For the HN crowd, yes. For the general public: Still no. In my experience, most people do not understand this. A lot of people, particularly young people, think that they are entitled to free services like email, instant messaging, and pretty much any service an ad-driven company provides (Facebook, Instagram, Twitter, Snapchat, etc).

>I can understand her, but I cannot understand you.

Read the last line of his comment. (S)he is trying to say that there are negative consequences to revealing so much about yourself to strangers.


I don't disagree with this, but this argument fails to acknowledge the responsibility of social media companies to communicate what information they gather about you and how they choose to use it. On both fronts, they're still entirely too opaque.

It might not be a fair comparison, but I feel like this argument is the equivalent to smoking for 40 years without realizing that it causes cancer because the hypothetical tobacco companies didn't have to disclose that information. When you go to the doctor and the doctor blames you for your failing health, is that fair for the doctor to do? You did something that was enjoyable and seemed novel without understanding the cost to your health.

Informed consent in social media is still laughable in 2018, and we don't really discuss that aspect of the issue enough IMO.


Social media companies are incredibly clear about what information they gather and how they will use it: everything and however they want. Is anyone actually unaware of this? The individuals may not consider the consequences, but twitter has no responsibility to tell you that they'll use your pregnancy post to advertise to you, and what the ramifications of that might be. Advertising algorithms are amoral tools put into this world by amoral entities.

The woman's situation is a use case with an ROI attached to it. It's a PBI for Twitter.

Elsewhere in this thread there's a transgender individual. Their use case is another PBI.

The transgender person's use case will be solved first because the transgender community represents more advertising dollars than a grieving mother.


> Social media companies are incredibly clear about what information they gather and how they will use it: everything and however they want.

OK, but...the thing is, that's not actually true, however much their less ethical executives might prefer it to be.

They don't, for instance, collect data on what games I play on my iPhone, because that information is not available to them. But while I have enough technical knowledge to understand that they don't have access to that, a) not everyone does, b) while it's true on iPhones, I don't know if it's true on Android—and there may be different answers for "can Facebook track that on Android" and "can Google track that on Android", and c) there are probably things they can do that, despite my relatively good understanding of the subject, I'm either unclear on or at least suspect they can't do.

Similarly, there are many things they don't do with it, because they can't, due either to technical or legal restrictions, or because it's just not profitable. But I'm much less clear on what these things are, and I would very much like to know.

Of course, they don't want us to know either of these things. Right now, most people generally fall into one of two categories: those who, like you, have just thrown up their hands and said, "They collect everything, and there's nothing we can do about it!", and those who don't really know/understand much of anything about it.

Being forced to be clear and open about what data they collect and how they use it would draw attention to it, and make it possible for people who don't want to live like digital doomsday preppers in bunkers to take some reasonable precautions to safeguard their data. It might also get (more) people to say, "Hey, the way you're doing that is wrong, and we should make laws to make that illegal."


Should have said, "Everything they can that's potentially profitable," but overall I'm aiming for pith over nuance.

I'm not throwing up my hands, I'm here saying: consider the things you're letting into your life and their motivations. There are plenty of things we can do about it. Not share every single detail about ourselves on social media being a good start.

This isn't private data we're talking about with the authors post. It's data she made very public, by choice.


But I think that her post highlights some other obvious holes in what we know about what information these companies collect and what they do about it.

Did they notice the posts she made and the behaviour she exhibited after she lost her baby? Did their algorithms capture it in any way, and they just aren't tuned to do anything about it because it's unprofitable?

Or are their algorithms dumber and less all-consuming than we give them credit for?

The whole point is we don't know, and we really, really should.


>Or are their algorithms dumber and less all-consuming than we give them credit for?

How do you mean? Advertisers can tell exactly how well these algorithms work, they see how much they spend and what they get in return.

It's not necessarily a significant problem if your ad is only relevant to 1% of the people who see it, as long as it's profitable.


Sure, but there are two separate questions here:

1) Did they successfully determine that she had a stillbirth/miscarriage?

2) If so, did they use that information to improve their ads?

The answer to 2 is clearly "no". Thus, the remaining question is not whether they found it profitable to use the information, but whether they were able to gather it in the first place. Particularly given NorthOf33rd's claim that they're gathering "everything."


> They don't, for instance, collect data on what games I play on my iPhone, because that information is not available to them.

They would like to reconstruct that information from what they've got, if they could, and they'd probably be more successful than most people would believe given a zeroth-order explanation like "they can't get data outside of their app".


Technology professionals are well-aware of what information is gathered and how it's used, but I don't think the broad public understands at all. Terms and conditions are still bogged-down piles of text that nobody reads, and the leaders of the big tech firms still dodge many questions directed at them by members of congress, the media, and the public at large.

> Advertising algorithms are amoral tools put into this world by amoral entities.

Are you saying that social media companies aren't subject to moral and ethical scrutiny? I'm really confused by this statement.


It's possible to interpret your comment as a direct response addressing the mother who put out the tweet, which seems very inappropriate. The point you're making might be true, but maybe it's better to rephrase it.


I also read it as a direct response to the mother who put out the tweet and it seems pretty appropriate to me? What part of the parent comment is inaccurate?


Whether or not it's accurate matters a lot less than whether or not it's kind.


She put out a public letter on the internet. We're here discussing it. If she stumbles across this thread, I'm sure she'd be aware that I'm discussing the letter, not responding directly to her.

If I posted this point to her Twitter feed, I would soften the language. But I would do my best not to change my point.

I would soften the language because I have a moral imperative, unlike Twitter and Instagram.


Kind doesn't really make for healthy discussions when your definition of kind includes leaving out pertinent information and points of view.


Kindness and empathy, or lack thereof, are the point of this open letter. She's asking for more empathy from tech companies. The comment we're discussing lacks empathy altogether.


If you publish an open letter you invite open responses. Doubly so if you do it on the Internet. Triply so if you work in the media.


It'd be real nice if people stopped using the public aspect of discourse (and especially virtual discourse) as some sort of general-purpose excuse for acting like an asshole. While such behavior is often framed as telling hard truths or suchlike, more often than not it turns out to be trite cliche deployed for sadistic effect.


This is flawed on so many levels.

1. Just because a thing isn't a person, doesn't mean it can't/shouldn't be decent.

2. That thing was designed by humans who, inferring from your post, you can ask to be decent; ergo, you can ask their thing to also be decent.

3. It would make a lot more sense for the business to care in the long term, rather than the short term, that's where maximum profit is.

4. I highly doubt she actively made "the decision to let a business deep into her personal life", since most of these services opt you in by default.

There's really no need to be so narrow-minded and out of touch with reality...


You've made the decision to let a business deep into your personal life, a space once reserved only for loved ones. These are the consequences.

Nice victim blaming there, let's hold individuals responsible for the bad experiences they have with the vast data collection and ad targeting infrastructure they're virtually forced to interact with. If only they had been studying the blade instead.


Victim? She's not a victim. She's upset she saw an advertisement. Aggrieved, offended, dismayed, rightfully very upset about the loss of her pregnancy, but she's not a victim.


I didn't read that as victim blaming. Maybe it's a little callous but I don't think the commenter is trying to shift the blame or responsibility from the companies to the author; instead I read it as trying to change expectations about the level of empathy and humanity the author will receive.

> If only they had been studying the blade instead.

Uh, what?


It's a meme that pokes fun at callous personas. https://knowyourmeme.com/memes/i-studied-the-blade


> Those ads are not your friends. Don't expect them to treat you like they care.

> You're not sharing your information with a human. You're sharing it with a business that is tuned to maximize profit. The ROI on dealing with stillbirths in advertising is probably negative.

Considering only the profit aspect (because the ethics, social responsibility, and public opinion aspects have already been addressed elsewhere in this thread and on twitter):

Surely it's a better ROI to have a user than to lose a user. Hence, it makes more sense for e.g. "stillborn" to be a trigger word that stops/blocks pregnancy-related ads for X time than for the user to be so brokenhearted by the platform itself that they leave.

Surely it's a better ROI to show relevant ads than offending ads. Hence, it makes more sense to show e-counseling ads to grieving eyes than to show them tragedy-reminder ads.

Honestly, this seems like a no-brainer from any perspective.


> Hence, it makes more sense for e.g. "stillborn" to be a trigger word that stops/blocks pregnancy-related ads for X time than for the user to be so brokenhearted by the platform itself that they leave.

Only iff you can be confident that the occurrence of this trigger word means that a person indeed experienced stillbirth, and wasn't just researching this, reading up on it because of fear, a non-native speaker looking up the definition of the word, etc. Metrics like this are really only weakly correlated with what (you'd think) they're trying to determine. Even targeted advertising is essentially a shotgun approach, as we don't even know what actually makes a person click on an ad.

Also: as long as (probability of losing a user X their expected lifetime value) is lower than (probability of the ad hitting a gullible person X revenue from the ad), it's more profitable to just display the ad. People all over the thread seem to think it's only about losing vs. not losing a user. But there's an opportunity cost to not losing a user, and frequently it's judged not worth it.
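To make the tradeoff concrete, a toy calculation with entirely made-up numbers:

  p_user_leaves  = 0.001   # chance this ad is the final straw
  lifetime_value = 200.0   # expected future revenue from keeping the user
  p_conversion   = 0.01    # chance the ad results in a sale
  ad_revenue     = 30.0    # payout if it does

  expected_loss = p_user_leaves * lifetime_value   # 0.20
  expected_gain = p_conversion * ad_revenue        # 0.30
  # expected_gain > expected_loss, so the "rational" move is to keep showing the ad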

(It's good the author is raising a stink over this, it shifts the ROI calculations a little bit towards being more humane.)


> Only if you can be confident that the occurrence of this trigger word means that a person indeed experienced stillbirth

As the original tweet pointed out, if there are only searches for "stillborn" and "baby not moving", followed by days of silence (especially with no "vacation" trigger), then an algorithm should be able to figure it out. Especially if friends and family are all commenting with crying emoji and the user has said pregnancy ads "aren't relevant to me".

But of course, I haven't created a multi-billion dollar social network, while Jack Dorsey and Mark Zuckerberg have, so I clearly don't understand how to maximize advertising ROI.


And not only are hashtags stupid, meta-hashtags are vastly more stupid. You're intentionally labeling a thing that no end-user will ever use for its alleged purpose. Nobody has ever wondered what else they will see when they search #justarandomcommentimadeintoahashtagfornorealreason


For integrity of the darkness, I guess they would advertise some sort of a positive psychology product?


This is a pretty insensitive comment


I think it's things like this that will ultimately bring the downfall of the tech giants - the fact that even with "machine learning", the quality of the experience is still so low. They built an addictive product in the beginning, but now all of them are starting to make us feel nauseous, which is one of the fastest ways to break an addiction.

It's true here for social media (in spades), but it's true with Amazon (sorting through all the junky products, never knowing if something is genuine), Netflix (all the junky content, barely curated), and many of the others as well.

I think ultimately we will (re) learn that constant naive appeals to the lowest common denominator, while a fast way to make a buck, might not be the best long-term strategy.


I remain skeptical that it'll make a dent, honestly.

People will still use Twitter, Facebook, Google, Netflix, Amazon, etc. because they provide enough convenience and positive experiences that outweigh the perceived negatives.

I mean people know about the negatives of companies that use sweat shops, child labour, or who employ people in horrific working conditions, and yet these corporations are still very much the giants in their field.

You have companies who source ingredients from companies that don't have great living conditions for livestock, and they're still in business.

The tech giants will remain tech giants because as much as people will complain about it, they won't abstain. They won't stop using their phones, their websites, their apps, their search engines, their e-mail... because it's all too hard to give up.


> The tech giants will remain tech giants because as much as people will complain about it, they won't abstain. They won't stop using their phones, their websites, their apps, their search engines, their e-mail... because it's all too hard to give up.

That's one reason why they like network effects so much. It's one of the ways to turn an "addiction" into actual necessity, by creating a coordination problem. You can easily ditch Instagram or Facebook up to the point when it would suddenly seriously handicap your social life. You could still do it if all your friends agreed to switch to something else at the same time, but those friends have friends too, and good luck coordinating all that.

There are other ways to turn a choice into a necessity. In the western world, if you don't want or can't afford an iPhone, you generally can't ditch your Android phone, because there's nothing else available. And you can't just stop using smartphones either, as that makes functioning in the modern society much more difficult.


I think that's pretty optimistic. Society seems to constantly drone on about "garbage" reality television and yet the networks keep pumping them out at an ever-increasing rate with more ridiculous plots all the time. For every person you hear complain, there are many more behind the scenes that you don't hear from who are the exact demographic that these companies are appealing to.

These companies wouldn't be doing it if it wasn't working.


A filter bubble. Or "dark matter people" - all evidence points to their existence, and yet you don't seem to meet any one of them.

I had a huge realization about this when one day, I counted up how many distinct people I ever interacted with in meatspace, however briefly (say, by saying "hello", or even noticing their existence and thinking about them). I've added that up to _maybe_ 10 000 people. Which amounts to 1% of the population of my home town.

It's less surprising to see the market (or politics) producing weird outcomes if you realize that your direct experience isn't even giving you a good statistical sample of the population.


> I think ultimately we will (re) learn that constant naive appeals to the lowest common denominator, while a fast way to make a buck, might not be the best long-term strategy.

That's the best case. The worst case is we've broken our attention spans and ability to interact without heavily optimized feel-good hits and get to keep both pieces. At least in politics we seem to have gone quite far down that path and I don't yet see a way out of re-learning proper discussion methods. Bright ideas are definitely needed here.


Facebook in particular can easily fix this. Just use their friends to get the information.


I felt immense pity for the mother in this scenario, and after reading the article I agreed that FB ought to have some sort of way to indicate that such advertisements are offensive and should not be shown to the user. Ideally, I'd like to see a paid tier that gets rid of advertising altogether.

Then I read some of the comments here and on other sites. To put it bluntly, I don't understand the tone of entitlement people have when they demand FB/Twitter/etc fix their individual problems. I mean, I obviously understand it - such services are very popular and play a big part in people's social circles and lives. What I don't understand is why they think they have a right to demand that a platform they aren't even paying for should cater to them.

If everyone who disagreed with Facebook's revenue model were to leave Facebook, another platform would inevitably pop up that would cater to that new market niche. This seems to be the most fair solution to all parties, but instead it seems like people demand that someone compel FB to change based on their wants. I disagree with this not only because it's immoral and unfair, but also because it just means that those future FB competitors will have all sorts of obscure and unique regulatory hoops to jump through.

Am I the only one who thinks this way?


I don't think the OP is asking for regulation.

I think the OP is begging for consideration.

As a user, we have every right to state our opinions.

It's not "immoral" or "unfair" to say, "You're making money off of me, I'd really like you to change your behavior."

I don't know why or how you jumped all the way to "regulatory hoops."

Have you seen people advocating laws in this discussion? I haven't.


Yeah, and I found asking for that consideration to be personally enlightening. I'd never thought about a case like hers before, and it seems excruciating.


Facebook and Google would garner a lot more goodwill with me if they just offered some sort of option to pay a subscription to opt out of ads on their networks.


Consideration from whom?

Some advertisers have bought the right to display their ads to people the platform has probably identified as “women having babies soon”.

Does the OP want consideration from the platform or the advertisers?

I feel the OP is venting the feeling that they’ve had a huge personal tragedy yet the world has just kept on going like nothing happened.


If an advertiser saw this and were horrified, I'm sure they'd be asking the platform to try to do something about it.

If the platform saw this and were horrified, I bet advertisers would appreciate or at least not mind, if they fixed situations like this.

If you and I saw this and were horrified, we might amplify her voice.


What if the ads for baby products were on advertising outside (or inside) the hospital. Should there be NO advertising for baby products because 1% of pregnancies end in stillbirth?


I'm pretty sure OP is saying, "Hey, tech companies, if you're going to mine my data to figure out I'm pregnant, maybe you should give me some way to tell you that the pregnancy ended in sorrow, and I don't want to see any ads about baby needs."

It's not calling for a ban on baby product ads.

It's saying that force-feeding ads on baby products to someone who has lost their child is awful, and it doesn't seem like a big stretch to let a user say, "I don't have a baby!" in a way that tech companies / advertisers can use, to be more sensitive.


> To put it bluntly, I don't understand the tone of entitlement people have when they demand FB/Twitter/etc fix their individual problems.

Individual problems caused by the platform. If something or someone is causing a problem for me one method to rectify it is to simply ask them to help. It's a pretty standard thing in society.

> What I don't understand is why they think they have a right to demand that a platform they aren't even paying for should cater to them.

For the most part, they do have a right to demand the platform do whatever they want. And depending upon the request, the platform has the right to say "no" to those. The users then have the right to continue using the platform or not.


> If something or someone is causing a problem for me one method to rectify it is to simply ask them to help.

I'd go further: as someone who makes products used by people, I'd be thrilled if a user shared their experience with my product in such a fashion! Even if the experience is bad, at least I'd know about it and have the chance to fix it. The worst case is people having bad experiences with my product and me being clueless.

I understand that in this case this particular issue is probably known and ignored, but still, inferring "entitlement" from people who share feedback seems like a step in the wrong direction.


> What I don't understand is why they think they have a right to demand that a platform they aren't even paying for should cater to them.

Because those platforms consistently and repeatedly lie to their users: they claim that they are here to serve you, or to help connect you, or help you find things you love.

If all you ever hear is "we're here to help you!", is it any wonder people expect help with their issues?

Maybe if they changed their tagline to something like "Facebook: Here to sell you to advertisers" then people's expectations would be different.


Individual "entitlement" multiplied by enough people becomes a market signal for the company to adjust their product. While it's stupid to expect a platform to change just because you complained, the act of complaining is a proper, kosher way of influencing a company - even if rarely effective, for the same reason that people just packing up and leaving for a different provider is a rare occurrence.


What I don't understand is why they think they have a right to demand that a platform they aren't even paying for should cater to them.

They're providing content that they're not even being paid for. Stop assuming that the person who owns the hardware is automatically the rightful owner of the ecosystem there. Property relations are imaginary and arbitrary and can be rewritten.


I don't demand that anyone fix my individual problems when it comes to advertisements, I simply use an ad blocker and call it done. I don't need that intrusion into my life.


No, I felt the same way. The platforms are just doing what they are programmed to do, it's not meant to be offensive. The fact that it is offensive is not something a computer could detect, exactly, so in my mind the platforms have done nothing wrong.


No, with power comes responsibility. If I make an automated car that runs over pedestrians but otherwise runs great, I can’t ethically write it off as “well, it’s just doing what it was programmed to do. There aren’t many pedestrians compared to cars anyways.” These ad companies that presume too much deserve the backlash when they get it wrong. They are responsible for their creations’ actions, good and bad.



But at the end of the day, humans are still in charge, not algorithms. If they can't adequately control the algorithms, perhaps they're not ready to be deployed.

To see this a little more clearly, if a military deployed autonomous killer robots that kept accidentally killing civilian children, would you then argue the military isn't responsible?


false equivalency, no thank you.


Ummm, you're the one whose position makes them equivalent. Either humans are responsible for the algorithms they deploy, or they're not. To argue unintended effects can be ignored in one case but not the other is inconsistent.


"It's not meant to be offensive" ≠ "It's not offensive".

No one (afaik) is saying Facebook should be boycotted or punished over this issue specifically, so intent is irrelevant. A woman was offended—unintentionally—and is expressing her feelings, in the hope the companies will make an intentional change.


I feel like this is similar to Facebook "making it rain" whenever you send money through their Messenger. It does not matter why you send the money, they always celebrate. Very distasteful, and it takes no consideration of the real reason. They presented this feature at MobileHCI'17 as part of their "Delight" mantra. They don't care about 100% accuracy; as long as it works for the majority, they will not cater to the minority.


I've been really interested in a 'digital agent' model for a while now: some profile, and/or grand set of settings, that applies to all my interactions online. I interface with way too many platforms online to keep track of, and this can lead to unwanted and annoying targeting similar (though obviously on a much lesser scale) to what this mother experienced.

I recently bought a shaving kit for my future brother-in-law for Christmas. I looked up some reviews on Google and YouTube, and now every YouTube video and ad is about this particular shaving product. This product is now following me around the web. If I were able to simply tell my 'digital agent' "hey, I'm not interested in shaving products anymore" and have my agent then broadcast this message to YouTube, Instagram, etc., I think that'd be helpful.
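No such cross-platform preference API exists today, so the following is only a sketch of the shape the broadcast could take; the endpoint path, field names and platform URLs are all invented:

  import json

  # hypothetical preference document a "digital agent" might push out
  preferences = {
      "not_interested": [
          {"topic": "shaving-products", "reason": "one-off gift purchase"},
      ],
  }

  def broadcast(prefs, platforms):
      """Send the same opt-out document to every platform the agent knows about."""
      payload = json.dumps(prefs)
      for platform in platforms:
          # stand-in for a real API call each platform would have to offer
          print(f"POST {platform}/ad-preferences {payload}")

  broadcast(preferences, ["https://instagram.example", "https://youtube.example"])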


You know things are broken when after you buy things you begin to get ads for the very thing you just bought. Now, you could forgive one ad network not knowing, but when you get served up ads by the people who know (have a record of the transaction) you bought something, you know something’s not working the way it should, optimally.


There was a thread about this about a month ago:

https://news.ycombinator.com/item?id=18535748

"This one is funny, and has a couple of obvious solutions that have been prevented due to internal politics. The short answer is that they have a couple of different recommender systems, all competing against each other internally for sales lift. One is purely based off of pageviews. When you get recommendations for something you already bought, many times it is because you looked at it, but they don't know nor care if you already bought it. In their words, it works really well and accounting for sales brings in a lot of needless complexity.

Another is based off of sales. They also don't care if you already bought it because according to them, it works well. I remember trying to point out to them that for some types of products (specifically consumable products) this would work really well, but durables not so much. They claimed otherwise, that although they couldn't explain it, it was entirely common for people to rebuy things like vacuum cleaners and TVs and kitchen knives. I did a tiny bit of research to show them why they thought that, and proved with a small segment (vacuum cleaners, I believe), that after you filter for returns and replacements, that the probability of sequentially buying two of the same vacuum cleaner was effectively zero. They asked me to do it for the rest of their products, but I didn't have limitless time to spend on helping another team, especially one with a PM who was a complete dick to me for having the audacity to make a suggestion that he hadn't thought of.

In all, I believe there are a dozen or so recommender services, each with their own widget. There are tons of people that think all of the recommenders have merits in some areas and drawbacks in others, and the customer would be better off if they merged concepts into a single recommender system. But they all compete for sales lift, they all think their system is better than the other systems, and they refuse to merge concepts or incorporate outside ideas because they all believe they are fundamentally superior to the other recommenders. Just a small anecdotal glimpse at the hilariously counterproductive internal politics at Amazon."


>after you filter for returns and replacements, that the probability of sequentially buying two of the same vacuum cleaner was effectively zero.

But isn't the returns segment significant? "I bought this vacuum cleaner, returned it because it didn't suck (!); oh and look, this advert said this one has the best suction."?

Also, it seems common amongst some sectors to rebuy: like parents might buy a coat, find it's good, rebuy for the other children. Landlords might update their properties, rebuying items that work well and are robust enough, etc..

?


As a different poster than the parent: people are not goldfish. In all those repeat-purchase scenarios the buyers know what they bought and can buy again if they so choose (in the near future). If it's six months down the road, I can see how maybe this would be useful.


>they all compete

this is amazon's answer when an employee asks about work culture. it's working for now but the wheels seem to be falling off as the months pass by


This works the same way in Spotify’s recommender system. It keeps recommending you records you not only have already listened to a couple of times, but also added to your library and even made available offline. This is so fixable.


Spotify has several ways of recommending new music.

Discover Weekly is a brand new playlist of new music that you haven't listened to before.

They have their Daily Mixes which seem to be based on music you listen to regularly, with a few new, but similar, things mixed in.

I have found that it generally recommends the same type of music, so I can feel a little stuck in a few genres, but they just gave me a playlist of new music that's outside the genres I normally listen to.

I've been very happy with Spotify and their music discovery is a reason I don't see myself switching to Apple Music or similar anytime soon.


Note that I generally agree about Discover Weekly, but it can recommend (and occasionally has recommended, for me) music I have heard before. I suspect this might be due to another suspicion of mine, that Discover Weekly puts songs into the playlist that people you follow are listening to. I've seen this happen many times with Discover Weekly, but I can never know for sure if it's a coincidence or not.


This is a WONTFIX bug.

The ad networks don't care. They're just serving ads paid for by a bidder who very much has decided that you're the target audience. They win when the bidder keeps bidding for your attention.

The bidder, meanwhile, probably has some very good numbers delivered to them quarterly from their ad agency showing that there is a very strong correlation between how they are targeting their ads, and actual purchase intent, and lo and behold, purchases! When the ad is delivered relative to your purchase is an afterthought, or perhaps just not worth paying their adtech people to fix. (After all, adtech engineers are expensive, and whatever they're doing seems to be working, so why change?)

And, the agency who set up the campaign doesn't have any problem cashing the checks for their successful work.

I'm not saying it's not broken, I'm saying the incentive structure currently is not set up in a way that will ever change it.


Nah, this is definitely not evidence of brokenness. People rebuy things they've bought recently all the time, certainly more often than the general population does. Maybe they want a 2nd one. Maybe the 1st one broke. Maybe they liked it so much they want a 2nd one to give as a gift. All kinds of reasons.

patio11 has a great tweetstream on this issue.

https://twitter.com/patio11/status/982208307057246209


Most of these things are not things like candy or a latte. They're durable items I'm unlikely to buy in the near future. If I buy an 8oz hammer, don't recommend another 8oz hammer. I can see perhaps recommending nails, or maybe a ball-peen hammer, maybe a chisel or punch, but not the exact same item. If it's defective, I'll return it and perhaps get an exchange.

Or I bought socks. "Oh, they must have forgotten they actually wanted 12 pairs, not 8". Ok, remind me in a few months, I might be ready to buy again, but not right after the purchase.


Statistically speaking, a person who just bought an 8oz hammer is much more likely to buy another 8oz hammer than some other random person from the population of people who has never shown any interest in hammers at all.


Sure, but I'm going to guess that statistically they are more likely to buy something other than a hammer. Maybe "people who bought a hammer also bought...", or: you bought a hammer and nails, here is a cordless drill. I mean, why would I buy a hammer twice in a row, and after that unlikelihood, a third hammer?

If I were going to buy two hammers, I’d have put “2” in the basket in the first place. That’s the more likely scenario. I mean they are not operating blind in a vacuum. They know what I just bought. We’re not trying to guess against an unknown person with unknown purchasing history.


If you're advertising hammers, and you can isolate a cohort of users statistically more likely to buy hammers than users at large, it would be irrational not to target hammer ads at those users.
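A toy illustration of that arithmetic, with invented numbers:

  base_rate   = 0.001   # chance a random user buys a hammer this month
  cohort_rate = 0.01    # chance a recent hammer buyer buys another one
  lift = cohort_rate / base_rate   # 10x lift

Which is the whole disconnect: a 10x lift looks great in the targeting dashboard even though, in absolute terms, 99% of that cohort will never rebuy and the ad feels absurd to the person seeing it.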


I once heard that that is at least partially intentional in order to reassure the customer that it was a wise purchase. To prevent post-purchase regret and cancellation.


But every ad network I've ever worked with already allows you to define categories of people you specifically don't want to show your ads to.

It's on the shoulders of the ad creator to set that up, though. I did read a comment one time that said the companies might be doing it on purpose, to "reaffirm" you made the right choice going with their product. Not sure how true that is, but it might factor in.


Honestly, it’s a turn-off for me, or rather a small annoyance. But if true, it must work for most people, though I’m having a hard time imagining people not being slightly annoyed at this ad behavior.


Unfortunately being annoying and working aren't mutually exclusive.

And if that is really the case, then I'd argue it's on ad networks like Google and Facebook to find a way to realign incentives here so that what works best or makes the most money isn't annoying to the user (similar to how they are attempting to ban overly obnoxious ads).


This happens all the time. And I don't understand the reason for it. Companies are incentivized to understand that I have already bought something and then NOT have to spend on advertising the same thing to me. Yet they do. No idea why.


I see this sentiment a lot, but I think it probably does make financial sense to show you ads for a thing you already bought. There are often a few whales out there who are buying one of a thing to sample and then might later buy hundreds or thousands.


If I were buying a sample of something, with a view to buying in bulk, I don't think I'd base that bulk-buying decision on ads that follow me around after the sample purchase. I've already found the product and bought a sample; any decision to buy a thousand of them will be based on my evaluation of that sample. I'm not likely to forget about the whole project until reminded by an ad that stalks me around the web...


Doesn't matter how low it is if the payout is high enough. Also, it's not like they've got any better ad to show you; CTR is something like one in a thousand for display.


Remarketing converts multiple times higher than prospecting.

For every 1 person annoyed by seeing an ad for a product they bought, 10 others buy.


I recently viewed the Amazon page for a 55 gallon drum of personal lubricant and you would not believe the weird ads I get now.


Personalised recommendations can be a very slippery problem to solve.


Or we could all just start googling and searching for things that are completely irrelevant to us. This would fuck up the advertising models and marketers would be forced to change their practices.

So I say to you all: go and google some random shit and do some deep dives into it. Be predictably unpredictable.


Because I don't have better things to do than playing whack-a-mole with some stupid advertisement algorithm.

I have a simpler option: don't use social media, use DDG and uBlock, and pay for the products you use.


In theory, I don't mind advertising or machine-learning-based recommendations that are a nice mix of "here's more of the same" and "hey, how about this from left field". I think Spotify, for instance, has a good mix of this. My biggest issue, aside from privacy concerns, is the efficacy of these systems. If I could get some nice gift ideas before a purchase, that would be great; and if these systems could realize that I made a purchase as a gift and don't want it added to the model they have on me, that would be great too. These systems just really seem hit or miss right now.


That’s a system that’s impossible to build. You’re describing something that can perfectly predict human behavior. If we built a system like this, it would not be used to sell me Tide Pods.


Of course it would be used to sell you Tide Pods. It would sell you cars, too. It would also be used to make you anxious and fearful, but not so depressed that you kill yourself. It would be used to make you vote for certain people, buy particular clothes, and encourage you to maximize your lifetime consumption.


Maybe this is close to your idea: https://cs.nyu.edu/trackmenot/


Yes, but I think that this is too shallow. You have to do deep dives (as if you were actually interested in the product/service) for it to really work. Like the OP said, you have to go to review sites, watch youtube videos, etc.


Google really hates this. They've got some heavy anti-spidering watchers going on to prevent this. There was a Github project that was designed to randomize the search queries under your account. I'm not sure if that got shut down by google, or what happened. Suffice to say, not so good things happened.


There are people working around the clock to determine this information, and there's no indication that they couldn't just peg you as "open-minded" and start selling products associated with that.


Right! But I will not click on those ads because I am not part of that demographic. The whole point of doing this is to shift you to a demographic where the ads become irrelevant and you aren't tempted to click them.


The fact that you aren't clicking the ads is powerful evidence in itself, much stronger evidence than what you Google.


I run a comment reply bot on my reddit account on saturday afternoons/nights when I'm out. It just posts some highly-voted comment from a CSV. My reddit history makes no freaking sense.


https://adnauseam.io

Here you go!

Google actually banned this from the Chrome store so it must be hurting them.


For good reason: It's a fraudulent click.


A what? Fraudulent click? Lol. There’s no such thing.


Where's the fraud?

I'm not making money off of it, nor does anyone I know or have a business relationship with stand to benefit from it.

I'm merely authorizing an application to use my local system to access unsolicited hyperlinks provided to me by a third party while viewing another party's Web site.


I'm saddened that the discussion here focuses only on what the advertiser (/ tech companies) should have done and leaves out how we have failed to bring the obvious solution to the harmed individual: install an ad blocker at the first sign of trouble. Why did she feel so obliged to watch any ads in the first place instead of engaging in basic self-defense?

Or if the answer is technological: How come we accept the tyranny of advertisers on our most personal devices? Pi-Hole is a nice gimmick for those who can use it. If we can't bring the choice not to watch advertisements to the masses, we have essentially failed.


I'm slowly becoming convinced adblocking is the most noble form of software development.


You can't put an ad blocker in native phone apps, like Instagram. Sure, you can do it on the mobile website, but then you have to switch to the mobile website.


Apps that change the /etc/hosts file work great. Nearly all social media apps pull ads from separate domains, which can be easily curated into a list. Requires rooting your phone.
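For anyone unfamiliar with the technique: the hosts file just maps ad-serving hostnames to a dead address. A couple of illustrative entries (real community-maintained blocklists run to thousands of lines and enumerate the subdomains as well):

  # /etc/hosts: resolve known ad-serving hosts to nowhere
  0.0.0.0  doubleclick.net
  0.0.0.0  googlesyndication.com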


Precisely. Whenever I help a friend with their computer issues I am appalled by the user experience on the internet without an ad blocker, so it's not just a matter of privacy.

Asking companies to change their algorithms is quite honestly petty. And for some reason this has rubbed me the wrong way; it gives off an attention-seeking vibe. Just my two cents.


> Or if the answer is technological: How come we accept the tyranny of advertisers on our most personal devices?

On the side of the user: because we don't like paying for services with money.

On the side of the company: hockey-stick growth gets big payouts, and growth can happen much more quickly if you give a service away and pay for it with advertising.


> Or if the answer is technological: How come we accept the tyranny of advertisers on our most personal devices?

This alone deserves an entire novel to be answered properly, but in short: it comes down to social, political and economic reasons. As a programmer I so wish the problems were only technological but alas, they are not.

1. Most people view anything they don't understand and/or are not good at as magic that "is simply there and works this way". Most non-technical people I have known in my life -- not a small number, it has to be north of 300 -- simply had no idea you could block ads. And something like the Pi-Hole they would never even imagine being possible. It'd be spy-movie tech to them.

2. A possibly controversial anecdotal observation from my whole life: most normal folk are very malleable and accepting of the realities around them. Many of my peers call them "sheep" or other derogatory terms, and even though on rare occasions I am pissed enough to agree with them, I still understand and realize that people have jobs and lives to deal with and they don't want to go out of their way to try and make a difference in the world. I used to be mad at that, but nowadays, having suffered years of depression and burnout myself, I understand regular folk all too well...

3. Related to the above: not many want to change society at large. As much as it boggled my mind, I actually heard a chunk of the people I met agreeing to have ads served to them. At a certain point you have to wonder whether you would be the hero of the story that aims to bring down tech giants that harvest personal data, or the villain. I know I would view myself as the hero for sure, but you gotta wonder sometimes.

---

As an even more controversial aside, IMO the current breed of capitalism is ruthless and tries to fill every minute of people's leisure time with a ton of activities -- like doing taxes in huge convoluted procedures, poking you with notifications on your phone to hatch virtual eggs quicker with the diamond currency of your mobile game of choice or whatever, Facebook et al never leaving you alone about somebody posting something, and lots and lots of others.

What I am trying to say is that most people I see around me are way too tired and broken to NOT accept tyranny.


She isn’t obligated to do so, because no one is obligated to use these platforms. They are entirely optional to life and purely a luxury item.

It’s a suffering person, yes, but would I rather have an Instagram that you can opt in to, receiving targeted ads in exchange for its feature set, than no Instagram? Yes. And I believe that if we allowed people to make the free choice, they would choose it too.

This is about the equivalent of going to a high school reunion and having people talk about their kids soon after you had a miscarriage. It’s horrible to have that happen but it’s an honest mistake.

Jumping on the “targeted advertising is evil” wagon just means we fail to solve it another more-likely way: like how you’d imply to your old classmates in that situation that you find it painful to hear about the topic right then.


I don't see tyranny, but I do see a ton of sheep flocking to a pasture.


She describes clicking "I don't want to see this ad" and "It's not relevant to me".


That's not using an Adblocker, though. It's just letting Fb/whoever know that the ad isn't suitable for them.


Maybe she’s on mobile? Also, the author isn’t against seeing ads, period.


The first time I saw an ad that was targeted to me on Facebook based on browsing I had done elsewhere, I deleted Facebook within a month and never used it again.


That reminded me of this speech by Jeff Bezos to Princeton:

https://www.princeton.edu/news/2010/05/30/2010-baccalaureate...

“Jeff, one day you’ll understand that it’s harder to be kind than clever.”

It really had an impact on me.


Video link of the same: https://youtu.be/vBmavNoChZc?t=367


Thanks! I hadn't read that before and it was very thoughtful.


I feel like I’m alone in asking this but does personal responsibility play no part in this?

She starts by admitting that she voluntarily signaled to these platforms that she was pregnant. They are ad platforms. She then goes on to admit that she continued to go back on the platforms after and then blames them for showing ads she herself engaged with at one point!

I don’t think anyone would ever wish someone grieving to be reminded of the source of the grief or say “too bad.” But I think a reasonable person could conclude this is a little unreasonable.


I didn't read that as she "voluntarily signaled to these platforms that she was pregnant". She made social media posts with many clear signals that she was pregnant.

She basically said, "I know your algorithms can detect that I was pregnant. Can't your algorithms detect that my baby is no longer alive?" That seems like a pretty fair statement.


So now society is frustrated because its data isn’t being analyzed and used to serve ads effectively enough? We’ve officially come full circle.

How about this- if you are uncomfortable with the prospect of ads potentially shown to you that may upset you based on recent history that’s now changed, lay off the platform for a while.


This is precisely the hazardous morality we begin to enter when we become as passively engaged as social media allows. The platform becomes an antidote to sadness; social media has reached a place in society where it's casually being used as a means for self-medication.

People who are on social media appear to be deluding themselves into believing that social media can offer some sort of support and minimize the pain, but this is probably the biggest misconception we should be trying to dispel.


>She basically said, "I know your algorithms can detect that I was pregnant. Can't your algorithms detect that my baby is no longer alive?" That seems like a pretty fair statement.

Maybe a fair statement, but a fundamental misunderstanding of how ad targeting works.


She didn't voluntarily choose "Send me pregnancy-related ads"; the system inferred that from her history/context. She's asking why the system couldn't also infer that her pregnancy failed, after she produced a considerable number of, if not more and stronger, signals for the latter.


These ads are served by models. She's saying, "Hey model-builders, add in the stillbirth option. You can detect that using machine learning too -- throw in some sentiment analysis and call it good." As someone who is not building these models herself, that's as much personal responsibility as she can take. And it's a good point. Brands that want goodwill and sales if she has a kid later should want to not be seared in her memory as a pain-point after stillbirth.

It's a bit much to say "ads she herself engaged with at one point" -- Facebook and email marketers serve you a lot of ads for companies and sites you may never have engaged with directly.


but she did say that she tried to stop the ads by selecting "Not interested", without much success


You aren't alone. I had the same thoughts. Why would she share that type of information on a social networking site to begin with?

It's terrible what happened to her and her family, but this post basically reads "I put my foot in a fire. Fire, why did you burn my foot?"


Are you actually suggesting that a woman should hide her pregnancy from her friends and family?


I'm horrified that anyone shares anything about their child, unborn or otherwise, on a social media platform. Shouldn't that child have the right to choose for themselves if they want literally their entire lives archived by advertisers?

Yes, she shouldn't have posted about her pregnancy on social media sites.

*Edit to add - There are a lot of other ways to share with friends and family about her pregnancy that aren't social media.


I saw this post the other day, but this just seems like it is going to invite even more outrage at FB when the algorithm presents something incorrectly. Getting ads for "You just lost a child, would you like to call a helpline?" is going to outrage just as many people, which is why I expect they just stay away from that type of subject matter altogether.


It's an ad platform, not a member of this person's inner circle. Just shut the damn ads off for anything having to do with babies. At the end of the day, you are creating a mechanism to connect a person with a company so they can buy products. If something would be inappropriate for a salesperson to say to a person, then make damn sure your ad network doesn't say that. It would be wholly inappropriate for a salesperson to suggest a hotline. Just stop. The ad network is business; this just got deeply personal.


I think this is the overarching issue with the original post, as they're looking for Facebook's ad network to comfort them or give them some kind of advice, and that is totally not what it is designed to do.


No, I don't think the story is looking for comfort from Facebook ads. I am pretty sure she is asking Facebook not to add to the pain and to just stay out of any conversation she is having with those who support her. That's the point of the letter.

Frankly, if you are an advertiser and know your ads are being served to someone who, through tragedy, cannot use your product and instead are now part of a thousand cuts treatment, you should be pretty offended yourself.


> Frankly, if you are an advertiser and know your ads are being served to someone who, through tragedy, cannot use your product and instead are now part of a thousand cuts treatment, you should be pretty offended yourself.

How, exactly, are they supposed to know that? The bigger issue here is everyone expecting Facebook to somehow be omniscient. They're not. They are a sharing platform that makes money by selling targeted ads. They're not all-knowing.


> How, exactly, are they supposed to know that?

Because situations have more than one possible outcome. Lumping everyone into a single cohort based on a majority rule (e.g. people who buy pregnancy-related products often buy baby-related products later) is a kind of insensitivity that would be frowned on if applied in person.

> The bigger issue here is everyone expecting Facebook to somehow be omniscient.

I'd argue that Facebook (or its advertising algorithm in this case) is operating _as if it is_ omniscient when really there are lots of shortcomings that can and do cause harm. It should be reined in instead of forgiven.


The same way they know to present the ads in the first place? If they can do one, they can do the other, right?

It just happens infrequently enough that they haven't had to care about it. Yet.


I would imagine that in this case, some of their social media folks are seeing the problem. Target got word that they were accidentally telling some parents their kid was pregnant. Enough retweets and articles should reach any social team worth the name.


If I employ a human too stupid to not insult my customers, I would probably fire him.


I read:

> Or maybe, just maybe, not at all.

As "please don't".


I think they are asking to stop being prodded with baby ads altogether, not to be shown ads for loss support hotlines.


The correct response is not to try to advertise targeting the traumatic event, but to pivot and avoid it and related subjects entirely. I doubt people will complain if ads about babies and parenting were to disappear after a still birth or miscarriage.


>The correct response is not to try to advertise...

Never going to happen, they say marketers ruin everything for a reason.


I'm not saying "don't advertise at all", but rather "advertise something very different". Does that matter, though? Maybe not.


How about just stopping the pregnancy ads? Why does it suddenly have to be serving ads about stillbirth?! Maybe just go back to serving pre-pregnancy ads, or ads about something else....or maybe just no ads at all, for a welcome respite?


It wouldn’t outrage people who need the help.


There is a choice on Google/FB/Amazon to turn off personalized ads, I would presume? How well those choices are respected, or how long they take to take effect, is another issue.


You should try it.

I switched interest based ads off on Google a few times, and also tried using YouTube while logged out. It’s pure trash.

When you opt out of interest-based ads you actually opt in to their inventory of the worst content imaginable. There’s no middle ground.


On an emotional level, this deeply troubles me—but to address the problem analytically:

It strikes me that an ad algorithm that forces grieving people to opt out of social media at a time when their friends and family are most interested in checking in on their well-being may encourage a broader pattern of falling back on non-ad-supported communication channels in the weeks or months to come.

It seems like there's a strong case to be made here that some ad-supported communication channels should consider either an ad-free grief mode when users are in distress, or at least sell a giftable ability to use the site without ads, in which case friends of the grieving could chip in to avoid this kind of distress.


Interesting idea, but I'm bothered by the idea that social media platforms would be effectively monetizing grief in this solution.

This sort of thing, to me, points at the deficiencies of appealing to purely statistical patterns without any guard rails placed on top. After all, statistically, the most likely relevant categories after "pre-natal" are going to be surrounding children who _were_ born safely. So if you appeal just to data alone, you won't likely solve this problem because it is relatively rare.

The biggest failure, to me, is the response of the system when the user goes out of her way to say that the "pre-natal" ads are not relevant. Statistically it assumes successful birth, but that doesn't reflect her actual intent. A simple dialog tree would maybe suffice: "not relevant" -> "this specific thing is irrelevant" vs "suppress this and all related content". One signal says maybe don't show that particular brand again, and the other says to pivot the relevance model significantly away from that topical section of the ad space entirely.
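A rough sketch of that two-signal version; the profile fields and category taxonomy are invented, this is just the branching being described:

  def handle_not_relevant(profile, ad, scope):
      """scope is either 'this_ad' or 'whole_topic'."""
      if scope == "this_ad":
          # weak signal: stop showing this particular creative/brand
          profile.setdefault("blocked_ads", set()).add(ad["id"])
      elif scope == "whole_topic":
          # strong signal: pivot the relevance model away from the topic entirely
          profile.setdefault("blocked_topics", set()).add(ad["category"])

  def ad_allowed(profile, ad):
      return (ad["id"] not in profile.get("blocked_ads", set())
              and ad["category"] not in profile.get("blocked_topics", set()))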


They already monetize joy. Is monetizing grief worse? Why? What about monetizing neither? What's the signal to pivot the relevance model significantly away from a world full of robots constantly self-optimizing to exploit emotions they cannot understand or share?

(Or if we can't manage that, maybe at least we could somewhat curtail the misbehavior of the ML systems they foist on the rest of us.)


If they monetize grief, they’re incentivized to maximize it


I totally see what you mean, but the concept of gifting temporary "ad-free" internet media to grieving people feels dystopian to me.


I'd prefer the systems avoided showing users upsetting ads for free, I'm just not sure how that system would work or how to prevent users from gaming it.


Yes, the proper solution here is definitely for people to give Facebook money to not insult them. The disruptor hivemind delivers again.


Personally, my method for avoiding being insulted by Facebook has been to not use their website or services for the last five or so years.

I was challenging myself to imagine what tactics they might try to discourage others from abandoning their platform in cases of extreme grief.


Your solution sounds diplomatic and I applaud you for that. You seem like a kind-hearted person.

That being said, I would never agree with it. Let's not pay companies to solve problems they created in the first place.


I don't mind the ads... I just wish we could manually update our inferred advertising profiles ourselves!

Like, "I already bought that product on Amazon so you don't need to advertise it to me anymore." Like, "I already shop at store X so you don't need to keep showing it." Or the obvious case in this article.

Sure, draw your inferences... but please let us correct them! If I'm gonna see ads, I want them to be relevant.



Fascinating! Thanks, I'd never heard of this.


You are assuming they optimize for a single sale, which we all know is not true. They want you to buy all the time.


Experienced this with a miscarriage in our family, it's the worst. Months and months later this stuff still follows me around the web like a taunting ghost.

The worst part of targeted advertising is when it becomes targeted haunting.


I know it's not ideal, but if you go to the below URLs for Google and Facebook respectively, you can choose which categories you get ads about, and can even specifically "turn off" some categories of ads for you (it looks like facebook even allows disabling some categories for a specific period of time!)

Again, it's obviously not perfect, but at the very least it can help the situation.

https://adssettings.google.com/authenticated

https://www.facebook.com/ads/preferences/


thanks for this. People responding here with "just use an ad blocker" ... I have uBlock Origin and it's great but it only goes so far.


My wife and I went through something similar a little less than a year ago (our baby died at 6 months from heart failure). Every now and then I still get the occasional diaper ad popping up.

What disturbs me the most are colleagues who said it was my fault since I hadn’t shared my grief on social media, and algorithms could not learn (I work in data « science »). As if targeted advertising was a law of Nature and there was nothing we could do about it. Just deal with it. And blame yourself for not showcasing your pain in front of hundreds of people.

The issue with algorithms is not that these events are statistically unlikely (unfortunately). They just have memory, enough memory to be accurate. So the day the happy messages are replaced with the word « death », they haven’t had the time to learn that the baby is not there anymore. There may be ways to improve that, but I bet you’d be trading accuracy for the remaining 95% of people. Guess which version wins at a business meeting?

I’m not blaming colleagues who write these algorithms. They didn’t know this could happen; they didn’t think about it before this happened to me. We’re all math and CS geeks priding ourselves on an [insert your metric here] score above the 90% mark, and that’s all we optimize for. I once heard a joke that the difference between science majors and humanities majors is that scientists build the atomic bomb, and humanities majors explain to them why it’s not a good idea to use it. Targeted advertising is not as bad as the atomic bomb, of course, but you get the idea.


An error in the inverse direction, if these sites were to add dead-child-detecting algorithms, would be far worse in my opinion. Imagine you're a mother who stops posting for a while because you're staying in the hospital with your newborn who is unwell but still alive. You decide to post a long-overdue update to friends and family on Instagram or whatever and are suddenly served ads for child coffins and funeral services. Definitely the worse of two evils in this case.


Surprised pregnancy doesn’t fall within the ‘no targeting based on health conditions’ policy of most ad platforms.


Target got in hot water in 2012 when a guy found out his 17 year old daughter was pregnant because of a customized ad sent to their home.

In the articles at the time, they mentioned that your whole life gets turned upside down when you have a kid. It's an event that changes your routines, and all retailers would love to become the convenient new store you routinely go to. Huge value for them.

www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/


It has only gotten worse since. While you say Target got in "hot water", the advertisers have only expanded since. "adtech" continues to try to find new ways to invade and analyze your private life without morals or compassion for the purpose of greed.


from a marketer's perspective, it's one of the most lucrative life events, along with weddings and buying a house and falling ill -- huge amounts of money being spent on goods and services and new behaviors.


My life is my life, not a revenue stream for these parasites to capture. Adblocking isn’t good enough. There needs to be a way to punish these people and cost them money.


It looks like the author thought the ads were useful when the pregnancy was progressing normally, even clicking on at least one of them. The reason these ads are lucrative is that she is not the only person who found the ads useful.


Maybe I’m singing, “Don’t Ban The Bomb”[0], but algorithmic advertising literally pays the mortgage and puts food on my table.

You can live your life just fine. No one is interfering with it, any more than getting a flier in the mail asking if you want to drive for Uber is a big disruption. In fact, enough people find some level of advertising useful, and have for literally hundreds of years.

[0] https://youtu.be/lSEtCz61O_8


Useful to whom, the companies employing predatory behaviour and psychological tricks to convince people to buy things they do not need?

There's no nice way to put this, but the world would not be affected at all, or be any worse a place, if your company ceased to exist tomorrow. In fact you could argue many would be better off.

I struggle to believe competent software developers can argue "oh, but I didn't have a choice". You chose to join this company to put food on your table. There are many ethically good industries you could work in to do so instead. Maybe you'd earn slightly less, but you could have a clean conscience.


I didn’t say I didn’t have a choice. I said there’s not a problem with this, and even people who supposedly aren’t dirtying their hands are being paid by advertising.

Let’s examine the naïveté you expressed. I work for Mozilla, which is basically a non-profit, which in turn gets its money from Google, which gets its money from advertising.

Your favorite website? How does it make money? Advertising.

Why? Because it’s about the only way to make money on the Internet. Even if you’re selling a product, how do you think anyone is going to find it if you don’t advertise it? Even nonprofits have to “dirty” themselves by advertising.

So spare me the holier than thou routine. You’re in the muck just like everyone else.


It's way too lucrative a market to ignore.


What she is asking for would probably be even creepier.

Currently these platforms are basically associating keywords with your user profile, which advertisers then select from a pull-down.

Past some specialized categories created out of advertiser demand - such as political leaning, location, age and gender - there isn't much reasoning behind it.

First her user ID got the pregnancy keywords associated; now she also has sad / stillborn words associated. The system isn't smart enough to reason that the two tags should be mutually exclusive in a culturally sensitive way, just as it isn't for millions of other pairs within hundreds of other cultures and languages around the world.

But if it was, then how much worse could it get in other ways?
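The blunt-force version of that reasoning would just be a hand-curated exclusion table between tag pairs (tag names invented here), and the scaling problem is exactly that someone would have to maintain rows like these for millions of pairs across every culture and language:

  # invented tag names; the point is only the shape of the rule
  MUTUALLY_EXCLUSIVE = {
      "pregnancy-loss": {"expecting-parent", "new-baby"},
      "expecting-parent": {"pregnancy-loss"},
  }

  def reconcile(user_tags, new_tag):
      """When a new tag is attached, drop any existing tags it contradicts."""
      user_tags -= MUTUALLY_EXCLUSIVE.get(new_tag, set())
      user_tags.add(new_tag)
      return user_tags

  # reconcile({"expecting-parent"}, "pregnancy-loss")  ->  {"pregnancy-loss"}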


The author describes manually removing herself from being targeted, e.g. clicking "I don't want to see this ad" and "It's not relevant to me". Are we supposed to be content with platforms that are unable to incorporate such manual feedback? I mean, we (the average user on HN) get pretty irate when software ignores or reverts config options we've previously set. I don't see how this is much different.


Banksy on advertising, in 2004:

People are taking the piss out of you everyday. They butt into your life, take a cheap shot at you and then disappear. They leer at you from tall buildings and make you feel small. They make flippant comments from buses that imply you're not sexy enough and that all the fun is happening somewhere else. They are on TV making your girlfriend feel inadequate. They have access to the most sophisticated technology the world has ever seen and they bully you with it. They are The Advertisers and they are laughing at you. You, however, are forbidden to touch them. Trademarks, intellectual property rights and copyright law mean advertisers can say what they like wherever they like with total impunity. Fuck that. Any advert in a public space that gives you no choice whether you see it or not is yours. It's yours to take, re-arrange and re-use. You can do whatever you like with it. Asking for permission is like asking to keep a rock someone just threw at your head. You owe the companies nothing. Less than nothing, you especially don't owe them any courtesy. They owe you. They have re-arranged the world to put themselves in front of you. They never asked for your permission, don't even start asking for theirs.

The problem is that you can't graffiti or tear down a digital ad. Activism is nigh impossible in digital walled gardens.


I don't really see the difference between Banksy having himself plastered all over the media for shredding his artwork versus Colgate telling me to buy their toothpaste. Banksy understands and uses advertising better than anyone.


Is he paying for that advertising or are journalists voluntarily giving him publicity?

Big difference here. When I read the news I freely subject myself to what the journalists of my choice considered newsworthy.

I have little say in what kind of ads I'm bombarded with, as OP demonstrated and banksy implied.



That's a false equivalency. Banksy wasn't "having himself plastered all over the media" - he didn't do any media buys or pay anyone to promote him.


Well, that was 14 years ago. It's hard to remain on your outraged and righteous path when you start getting fat checks for it (see also Black Mirror s1e2, "Fifteen Million Merits").


The ending of the Black Mirror episode doesn't take away from the "speech" that comes before that, though:

> You pull a face, and poke it towards the stage, and we lah-di-dah, we sing and dance and tumble around. And all you see up here, it's not people, you don't see people up here, it's all fodder. And the faker the fodder, the more you love it, because fake fodder's the only thing that works any more. It's all that we can stomach. [..] Fuck you all for thinking the one thing I came close to never meant anything. For oozing around it and crushing it into a bone, into a joke. One more ugly joke in a kingdom of millions.

In my mind, the last person to blame is the one who at least made an attempt to be more, who made no "mistake" other than being one human -- facing millions who just watch -- and being oozed around and crushed eventually. Selling out will never suck as much as the smugness of people who never even had the thing another person sold out. It's like they think it vindicates them for not even having tried.

(not that I read your comment as what I'm complaining about, it's a general observation. Freebird!)


You can do better and just install an ad blocker though.

Activism against ads in public space is still a very much IRL activity.


The problem is that Banksy is incredibly self-promoting and making millions. Talk about a huge hypocrite: "I'll cash my check while pretending not to advertise my work. Oh, and don't be an advertising whore. Seriously, where is my check?"


>People are taking the piss out of you everyday.

>in a public space that gives you no choice whether you see it or not is yours.

>what they like wherever they like with total impunity.

>Less than nothing, you especially don't owe them any courtesy

>They never asked for your permission

Sums up vandals like Banksy pretty well too.


I've never been in such a terrible situation, and I hope I never will be, but. Modern online ads are so annoying I can't help but make a serious effort to eliminate them from my online life. Ad blockers, filtering proxies and so on. I believe that's the only way to deal with this monstrosity in the short term.


Does anybody ever see relevant ads?

I guess retargeting kinda works; if I look for info about mortgages I see lots of ads from banks, but for anything else?

We're all human, we're amazingly complex, everyone of us is unique, and nobody likes to be put in a drawer.

But then you have the smartest programmers in the world, working for the richest companies in the world, and all they come up with is "if someone googles for baby show them ads for nappies"?

It's remarkable how stupid these "smart" algorithms are, and I think the problem is based on a simple fallacy: everyone thinks of themselves as sophisticated, but believes others are simpletons.

I'm not sure where to go from here.


Based on a comment wesd linked to above (https://news.ycombinator.com/item?id=18535748), and applying that comment to all the large corporations behind ad-tech (because IMHO large corporate political problems are fairly similar no matter where you go)...

It seems like for these systems, there are often multiple teams developing the algorithms, with multiple algorithms in play behind the recommendation. Each algorithm has its own style of metrics system. Some of them are probably relatively simple, with not very sophisticated AI/ML (or even no AI/ML at all) behind them. And there is a lot of internal political conflict preventing better analysis of said algorithms to refine them.

In other words, you're absolutely right.


They are astonishingly terrible. I'm especially amazed by the spam sent straight from amazon. Like...high-end fashion. Huh? Maybe I bought a pair of men's socks and they think I want pricey handbags? Or...ads for books I've already purchased. Like, umm... They have almost two decades of my buying habits and that's the best they can do?


>Does anybody ever see relevant ads?

Are you serious?

Does anyone ever make sales from advertising? Yes they do, even without retargeting.


Sounds a lot like “A friend of mine died and I didn’t know because of algorithms” https://news.ycombinator.com/item?id=15956811


Two years ago my relationship of 11 years came to an end. While I was going through a very painful breakup, Facebook would constantly inundate me with “memories” full of fake, algorithmically tuned enthusiasm. My ex was constantly part of that. Understandably so. I just wonder what it would be like for someone who has actually lost a child or a dear one. Yes, I know I can deactivate the relationship status and choose not to see that person anymore, but that won’t work with memories that don’t have that person tagged in them, yet still depict something you’ve done together in the past, like the picture from a trip, a piece of furniture you built together, or a pic of her amazing cat (who also tragically died years ago).


I am surprised how many comments here complain about things that would not be a problem if one used an ad blocker. I mean, I have no problem with people wanting to see ads, but I don't really understand - why? I can count on my fingers how many times I've seen an ad that was genuinely useful to me - and I could probably keep typing while I'm counting. But if I use a device without an ad blocker, the ads start to annoy me in minutes, and sometimes make browsing completely useless. Of course, it's hard to get rid of 100% of ads, but it's not complicated to get rid of about 90%.


Tough story...

What’s interesting in the post is that the author is not mad at ads in the first place, she’s angry at the failure of the algorithm to adapt to her situation.

So this is something companies could fix. When they infer something about your state, they could ask you about it and offer a setting to turn those ads on or off.

Like: “hi, our algo tells us you may be pregnant, are you interested in seeing ads for this kind of stuff or not?” And that could be a setting you could remove later.

Maybe it’ll spook some users off...

In any case there should be a button “stop trying to infer shit about me”.

Perhaps it’d be a good way to introduce a pay-$5-a-month-for-total-privacy Facebook.
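Roughly what I mean, as a throwaway sketch (the User record, the topic names and the ask callback are all invented for illustration, not any platform's real API):

  from dataclasses import dataclass, field

  SENSITIVE_TOPICS = {"pregnancy", "bereavement", "divorce"}

  @dataclass
  class User:
      inferred_topics: set
      consent: dict = field(default_factory=dict)  # topic -> True/False; absent = never asked

  def eligible_ad_topics(user, ask):
      topics = set()
      for topic in user.inferred_topics:
          if topic in SENSITIVE_TOPICS:
              if topic not in user.consent:
                  ask(user, topic)   # "our algo thinks you may be pregnant; show related ads?"
                  continue           # suppress the topic until the user answers
              if not user.consent[topic]:
                  continue           # an explicit "no" is a hard filter, not one more weak signal
          topics.add(topic)
      return topics

  # A user who answered "no" to pregnancy ads never gets that topic again:
  u = User(inferred_topics={"pregnancy", "cycling"}, consent={"pregnancy": False})
  print(eligible_ad_topics(u, ask=lambda user, topic: None))  # {'cycling'}

The point of the design is that an unanswered question or an explicit "no" removes the topic outright, instead of being fed back into the model as a hint.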


It always amazes me how stupid these companies' algos are sometimes.

Amazon tells me I've had an account with them for over a decade. It knows I've bought 2 kettles from them. When does it start serving ads for kettles? Just after I've bought one.

On the other hand, I'm not sure most people would want Facebook et al. 'knowing' they've just lost their child, so other than getting rid of targeted ads, I'm not sure what other solution is best.


>It always amazes me how stupid these companies' algos are sometimes.

If they're so stupid, why are they making so much money? Is an algo with a positive RoI really stupid?


They are making money despite their algos.

Amazon just suggests things I've looked at regardless of whether I've bought them. I hopefully won't be buying a new kettle for a few years, so it's just wasted. Why don't they try upselling me related things instead of spamming me?

Their algo has been suggesting things for over 10 years, based on over 10 years of buying data. I've clicked on zero of their ads or emails.


"Ads were my best friends - until they weren't"

I used to love ads. Every time I saw a new ad show up, I'd get a little bit excited, knowing that it would be my friend for the next few months or years. When I saw an old familiar ad, I made sure to greet it, so that it wouldn't get lonely. I took it as part of my social responsibility - the same as picking up after my dog - to buy products when I'd seen enough ads for them.

That's not to say all ads were good - sometimes, I would see an ad for something that I didn't like. But this was always a cause for reflection - had I forgotten to tell the machine about some private detail of my life, the knowledge of which would have let it target me better? On those occasions, I would make a promise to feed it more data, to make sure it was more able to help me. Then, I would dutifully mark the ad "I don't want to see this ad", to inform it of its mistake. The machine would always learn quickly, and go back to showing me happy, friendly ads. I would be extra careful to buy those products, to make sure the machine knew it was doing a good job.

But. There came a time when the machine started making mistakes, and wouldn't stop. I had realized that only two people - me and my husband - simply couldn't buy that many things, and we needed more mouths as tributes to the machine. I had entered a new stage in my life, and was already buying more products than I ever had before. And the machine was cooperative, helpful, bringing me new friends every day. And that would have continued being the case; as my child grew, the machine would have helped it select its college. But it never grew up. Because it was never born. Because it was stillborn. And the machine didn't learn. And when I told it I wanted ads for antidepressants and cabbage, it didn't listen. It kept on showing me ads for products for a baby that I would never have. And I could follow, as the years passed, each stage of my unborn baby's life, as the machine desperately tried to target ads at it.

What once were my friends have become my worst nightmare. What once was my partner in better supporting my consumerism, became my tormentor. What once was a happy member of society became a broken shell. It could happen to you too.

Also:

>there are 26000 stillbirths in the US every year

>when we millions of broken-hearted

thought that would slip by me if you broke it over two pictures, did you?


This might have already been said, but whatever happened to ads based on the content of the page you are on? If I'm reading an article about manufactured homes, an ad linking to a manufactured home seller could be actually useful.

Granted, I haven't turned off my ad blocker in a very long time, so maybe they still exist?
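For what it's worth, the idea is simple enough to sketch in a few lines (the inventory and keywords below are made up for the example; real systems are obviously fancier):

  AD_INVENTORY = {
      "manufactured-home-seller": {"manufactured", "modular", "home", "housing"},
      "mortgage-bank": {"mortgage", "loan", "rate"},
      "bike-shop": {"cycling", "bike", "helmet"},
  }

  def contextual_ads(page_text, top_n=1):
      # Score each ad by keyword overlap with the page being read; no user profile involved.
      words = set(page_text.lower().split())
      scored = [(len(keywords & words), ad) for ad, keywords in AD_INVENTORY.items()]
      return [ad for score, ad in sorted(scored, reverse=True) if score > 0][:top_n]

  print(contextual_ads("A buyer's guide to manufactured and modular home options"))
  # ['manufactured-home-seller']

The only input is the content of the page, which is exactly why it can't dredge up your history at you.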


And now she starts getting adoption ads: https://twitter.com/gbrockell/status/1072992972701138945 .

"We're sorry your baby passed away, how about get a new one from here?"

That's sick.


I really, really think it is unethical to generate a video from my stream to show me my last year or whatever.

Why? Fuck you, why. Do you really think I wanna feel good thanks to a standard algorithm which some software team wrote to engage me more?

No.

And yes it's fucking obvious to me how it works but it's not obvious to enough other people.


Not as dramatic and much simpler to fix, yet still broken: Search for one-time things (say a new phone) on amazon. Purchase said things. Continue getting ads for new phones for weeks.

Surely they must realize that after you purchase a phone, re-targeting will only be effective a year or more later.
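The missing logic seems almost trivial; roughly something like this, with made-up categories and cooldown lengths:

  from datetime import date, timedelta

  COOLDOWN = {"phone": timedelta(days=365), "kettle": timedelta(days=3 * 365)}

  def should_retarget(category, last_purchase, today=None):
      today = today or date.today()
      if last_purchase is None:
          return True                          # never bought one, so an ad might actually land
      return today - last_purchase > COOLDOWN.get(category, timedelta(days=30))

  # Bought a phone eleven days ago: stop showing phone ads.
  print(should_retarget("phone", date(2018, 12, 1), today=date(2018, 12, 12)))  # False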


I'm coming around to the idea that personalized ads should be classified along with biological weapons. The latter attacks your biology while the former attacks your mind.

Consider the story about Russia hacking the 2016 elections through the use of a massive campaign of personalized ads on Facebook and Google to polarize communities through misinformation sites tailored to their personal biases.

Whether you believe the Russia story or not, this set of actions would as surely constitute an attack on a civilian population as a deployment of a bioweapon tailored to cause mental confusion and delirium.

Yet due to a blind spot and the daily delivery of these weapons on behalf of the major personalized ad aggregators, we do not recognize the severity of what's going on. Long-term elevated levels of stress hormones do real physical damage.

Can you imagine if the survivors of Nagasaki were to react to the nuclear destruction of their city with the same careless indifference that we have towards the continuous attempts to reprogram our very minds?


I never shared anything about our pregnancies on Facebook until the babies came.

Anyway, it's stories like this that make me want to share even less personal information. It also shows that we've gotten far too carried away with marketing.


This is why we need Ethics Boards in software engineering.


Interesting story, not sure if it's very actionable.


Another great reason to install uBlock Origin and Privacy Badger, and to put Facebook and the gang in a separate browser instance / container.


In its early days, Facebook kept trying to get me to add my ex as a friend; they didn't understand "block" very well.


She could install an ad blocker.


I'm genuinely curious what the reaction would be with non-programatic ads that presented similar content. Would it be just as painful if you were watching TV and got an ad for Huggies, or is it because we know the ads are personalized?


Let's just ban ad tech.


adblock, ublock, zap every fucking ad


Twitter confuses me.

There is a character limit on text, so its users post images of text?

I know this is off-topic; it just seems an absolutely byzantine method of communication.

Reminds me of sending a YouTube video with Word. https://xkcd.com/763/


This is the Black Mirror episode.


[flagged]


We asked you many times to comment civilly but it's not happening so we've banned the account.


[flagged]


> She's demonstrating a huge amount of entitlement and technically illiteracy.

No, she's not.

I do have a problem with all of the various people that we interface with, including insurance organizations and pharmacies, abusing data sharing agreements intended for subrogation and similar procedures to manage pharmaceutical sales quotas and conduct outbound marketing.

My wife was admitted to the hospital due to complications from what was an early miscarriage. The health insurer sells data that allows an advertiser to surmise that there was a hospital admission to the OB department. The PBM provides anyone paying with information regarding prescriptions before my insurer even gets the claim.

An advertiser (an infant formula company) determines that my wife is likely pregnant and likely to deliver on Month/Day/Year. Guess what arrives on that day? A FedEx care package of formula.

That was a very hurtful event for us, and similar violations happen thousands of times every day. Both behind the scenes data brokers and front of house organizations like Facebook and Twitter are villains in this story.

It isn't entitlement or illiteracy to expect that ethically vacant entities don't abuse their positions to push products on you in hurtful ways. It's naive to assume that people give a shit, but that's not a demerit on her. I look forward to the day in the distant future when the government forces them to care.


If we're talking about data privacy and control, I'm 100% with you on the situation.


[flagged]


We should be able to communicate and conduct our lives without service providers mining our data for profit without compensation or consideration.

My cellular company shouldn’t be selling my location data, my pharmacy shouldn’t be selling my prescriptions to some aggregator. The people running the companies that aggregated my wife’s medical history should be in prison as far as I am concerned.

This forum was vociferous against the NSA when Snowden came out. Why is NSA a bad guy when a formula company has access to similar telemetry to peddle product?


Who is the system for then?


Not everyone is as enlightened about data science as you seem to be. If you actually want GDPR-like reform in the U.S., then part of the discussion involves people pointing out the absurdities and injustice in how data science is currently implemented.

Imagine your line of condescending argument being applied to someone complaining about being wrongly convicted and incarcerated, and who is calling for criminal justice reform. "You're entitled because you believe that your unusual/rare situation should be accounted for and encoded in the overall justice system"

edit: and the author even points out how she has done the manual work to give feedback to the algorithm:

> And when we millions of brokenhearted people helpfully click “I don’t want to see this ad,” and even answer your “Why?” with the cruel-but-true “It’s not relevant to me,” do you know what your algorithm decides, Tech Companies? It decides you’ve given birth, assumes a happy result, and deluges you with ads for the best nursing bras...

You read this, and yet still saw fit to berate the author for being "entitled". And then, one sentence later, go on to proclaim about how the "crux of the issue is: should we have control over the data and interactions we perform"? How does that work exactly?


As technical people, we generally understand that data science works in the following manner:

  1. You collect and annotate data (This is probably about 90% of the work here)
  2. Someone has a goal of something to find or query
  3. Develop models that can solve that goal, or redefine
  4. Use the output 
For the results of data science, you are always going to get generalized approximations of segments (groups of people). That's just how statistics works. (It's also how medicine tends to work, but I digress.)
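A toy illustration of what "generalized approximations of segments" means in practice (the data and segment names are fabricated): the "model" is just a per-segment average, so every individual in the segment gets the same score regardless of their actual situation.

  from collections import defaultdict

  observations = [                       # (segment, bought_nappies), fabricated data
      ("searched_baby_terms", True), ("searched_baby_terms", True),
      ("searched_baby_terms", False), ("no_baby_signals", False),
  ]

  counts = defaultdict(lambda: [0, 0])   # segment -> [purchases, total]
  for segment, bought in observations:
      counts[segment][0] += bought
      counts[segment][1] += 1

  def predicted_purchase_rate(segment):
      bought, total = counts[segment]
      return bought / total if total else 0.0

  print(predicted_purchase_rate("searched_baby_terms"))  # ~0.67, applied to everyone in that segment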

> Criminal justice reform

The consequences there are a lot higher. We've seen this over 1,000+ years of concerning court cases. That's why we have ideas such as "spirit of the law," human judges, juries, and appeals. Additionally, this is why the law is so complicated and murky. Computers are very bad at this.

I'm especially hard on this individual because she's using her position with WaPo to try to publicly "humiliate" those tech companies.

> Giving feedback

This is where some of the technical literacy comes in as well: she doesn't seem to understand that feedback requiring human understanding is not something that is "intelligently" accounted for. Systems like that do not exist. The ad platform isn't something that is designed to accommodate her needs (which is why I used the word entitled). It's a bit like being upset over those same ads being shown on TV. (Media literacy covers this: those are ads, they may or may not be targeted towards you, and they do not have to reflect your reality.)

---

As far as the data privacy bit goes: We should have control over the data that is collected about us and have the ability to remove it later. At the moment the strongest thing we have is the GDPR, but that's for European Union citizens only.


1) Her situation isn't rare/unusual. Miscarriages happen all the time.

2) >She's making assumptions that all of these studies from data science and usage of recommendation/personalization systems are for her benefit. They're not, she's not paying the bills on that. That's for someone to sell her something.

Advertising (at its best) is for the consumer's benefit. A consumer needs to make informed decisions about their purchases, and they need information disseminated to them to make those decisions. That information being released is advertising.

As a human being, she is entitled to make the case that those who create and control the information dissemination systems don't cause literal terror.


Even if you take a very crass profit-extraction point of view, advertising your baby stuff to a mourning not-parent is a waste of money and an actual impediment to future sales to that not-parent-now-but-could-be-later.

Miscarriages do happen all the time, as kerbalspacepro says. It could be a win-win for companies to realize this and put a bit of time into rebalancing their models. It just takes a little thoughtfulness, and thinking about this sort of thing would be an innovation that could be useful in many contexts. Many people "change state" and advertisers recognizing that change of state rapidly would help optimize their dollars too. See above discussions of people transitioning or deaths.


> Miscarriages

They do, but they're not the event that the advertisers are looking for.

> Advertising

It's to get the word out about a business's products/services to entice a financial reward for the business.

Unless she's paying the platform that is showing the ads, she doesn't get the right to dictate the platform's output. She is the product. E.g. YouTube Red (without ads) vs. YouTube.


Where is the "literal terror"?


Protip: No, Facebook, Twitter and Instagram didn't see you Googling anything.


Riiiiight. Because none of these companies sell/buy data from each other.


Google certainly does not sell your search queries to twifacegram, that'd just be silly.

If you have any evidence to the contrary, I would absolutely love to see it.



