Hacker News
I was let go for refusing to deploy a dark pattern (peachesnstink.com)
615 points by codingclaws on June 25, 2021 | 517 comments



Many years ago I added the feature to xbox.com to allow you to easily cancel your Xbox Live Gold subscription. We built a UX that was ideal for consumers - a couple of clicks, no "Here's what you'll be missing..." screens. It was awesome.

Before this feature was released you had to call Xbox support to cancel.

Once word spread that you could do it on the web, huge numbers of customers who had been stuck paying for an Xbox Live Gold subscription they weren't using began cancelling.

So our PM got a call from a VP. We were instructed to remove it from the site immediately. We fantasized about telling the VP to stick it and quitting en masse, but we knew it wouldn't change anything. We'd just be replaced by someone that would.

So we complied, but we all lost a little bit of our faith in Xbox that day.


It's so short-sighted too.

I signed up for America's Test Kitchen one time, because they had a nice program for learning the basics. Probably used it for a couple months, and then I was done with that content and wanted to cancel. Of course, even though you can sign up online, you have to cancel on the phone. On hold for 20-30 minutes during work hours, then talk to the rep, then listen to their retention offer, then it's successfully cancelled.

I actually loved the content, and would probably have resubscribed for a month here and there. (Cook's Illustrated is part of the same group and their content is also great.) But I will never do it again because of this experience.

How many people decided to get the new Playstation next time because of a frustrating experience cancelling their xbox subscription? You won't see those numbers in a spreadsheet.


I have a similar story for internet. I tried to cancel my Xfinity (Comcast) internet service through the online chat. After being transferred several times, the person said they will do it but then disconnected right after they said that. I had to restart the process and in the end they told me I had to call. So I called and finally was able to cancel after they wasted hours of my time.

I will never use Xfinity/Comcast again in my life if I have a choice and will try to make sure everyone knows how shitty they are. Unfortunately they have monopolies in many areas and can be as shitty as they want, but if you have a choice I recommend never using them.

I can't believe that in 2021 these tactics are still legal. It's also stupidly shortsighted because in the long term I'm pretty sure they lose money by making everyone hate them. If it was easy to cancel, I would happily sign up again in the future without giving it much thought and would think positively of the company.

Edit: I also want to add that I was paying extra to not have a contract so I could easily cancel.


I've heard the Comcast "retention agents" have a quota of retentions vs. cancellations they must maintain, hence all the disconnected cancellation calls.

For a time, I resorted to having an attorney cancel my Comcast service to ensure it actually happened.

But once, months after the attorney forwarded my Comcast cancellation confirmation, I received a notice from a collections agency for the exact Comcast account I had cancelled. The attorney took care of that too.

My new (and best) method for dealing with Comcast is to use a fake name and social. I've been using the cats' names for the past few years, and it works great!

I recently wanted to quit Comcast service at one of my properties, so I went online to chat. No agents available, so I just removed my credit card from the account and stopped paying. They'll figure it out eventually. And good luck if they try to collect from Westley the Cat. He's unemployed.


If you want it done quickly tell them you're going to prison.


You say that, but my wife had to pay a cancellation fee to them despite the fact she was moving in with another Comcast customer.

I will never willingly use Comcast (and they don't care, because monopoly).


"I'm going to prison":

Like shibboleet, but for cancellations.


This had me cracking up; it's genius though. When I had to cancel I told them I was leaving the country, but I'm gonna use the prison line next time.


Or just say you're moving.


>use a fake name and social

They let you use a fake social security #?


I recall when I signed up for 24 Hour Fitness many many years ago. They wanted my social security number, which I refused to give, so the employee just put some random number down. Turns out they needed it because my membership was a "loan" for the whole cost over X number of years, which I pay back every month with the membership fees. I have no idea how that fake number managed to get through their system.


When I canceled my Comcast internet service, I got much less resistance from them, but this was in the days when they still had physical offices you could go to. I took my modem and dropped it off, telling them to cancel my subscription. When they tried to give me the retention speech, I told them I had already switched providers. End of conversation.


The best way to do this in my experience is to tell them you're moving in with a partner who is already subscribed. Works equally well whether true or not.


My foolproof flowchart:

"Why are you cancelling?"

"I'm moving overseas."

"Where to?"

"I'd prefer not to say."


"Where to?"

"North Korea"


Seriously. The reason we're annoyed at them asking us is because it's none of their business.

Ask me a stupid question, get a stupid answer.


Didn't get my wife out of paying an early termination fee with Comcast.


They're still pulling this kind of stuff? Back in 2014 a journalist recorded an agonizing conversation with a Comcast agent who refused, over and over again, to disconnect his service without arguing. One of the most infuriating things I've ever heard: https://www.cnet.com/news/could-this-comcast-rep-be-the-wors.... Listening to it on a bad day is not advisable.


What I fear is that hostile behavior like this is actually not short-sighted. I suspect in many cases these companies are pretty savvy about how their business models work, and I suspect that they are correctly estimating that they will end up ahead both in the short term and the long term by making recurring payments easy to forget and difficult to cancel. I really don't see this sort of customer ill will being that effective at scale. I will probably never pay America's Test Kitchen one penny after hearing your review, but my impression is that they still have an excellent reputation. And do you really have great alternatives? Does Playstation not use the exact sort of hostile patterns as Xbox?


I suspect it is short-sighted in one sense, and not in another. The choice to prevent cancellation likely hurts the long-term revenue of Microsoft while maximizing the short-term revenue of that executive. He will be working somewhere else by the time that chicken comes home to roost, so it maximizes his value long term.


What I’m saying is that it will potentially also help revenue in the long term, and that “customer ill will” won’t actually spread and turn away customers at any noticeable scale.


how do you measure that?


There’s long vs short term, but I’d also add ease of calculating a decision’s value. Ironically I think a lot of the business and tech world is hamstrung by “data-driven” decision making which assumes

1. You have all the necessary data and

2. You are interpreting it correctly and completely

This is almost never true, so instead “data driven” is mostly “data covering-your-ass.” Maybe the future will yield leaders more capable of wielding data less like a cudgel, but I’m not optimistic.


It is the best decision for the executives that are in the position that they are in right now and for their foreseeable future. Now for the decades to come? Probably not.

I was able to switch to a new company with transparent simple billing and I will never go back to Comcast even if they offer me a better deal on a better product.

Their customer service policies were horrible and anti-consumer. They also routinely throttle certain types of traffic; BitTorrent and Xbox were both verifiably throttled for me during different periods of being a customer of theirs. They increase billing every few months until you complain.

I had to constantly fight to prove that my modem was owned by me. And they just kept adding it back as their equipment and charging me a monthly fee for it. They wanted me to prove I bought it. Wanted to know where I bought it. Etc. I had to call and prove that I indeed bought and paid for it myself numerous times.

Point being. I will never go back. I suspect many others will not go back either once given another option. And options are coming through community fiber projects, 5G and Starlink to some of the areas where Comcast has been the only option.


I'm also afraid of that, good point.

I'm a fan of Serious Eats for cooking content. Again, ATK was great except for the subscription thing. Looking at their Support page, it looks like nothing has changed with their cancellation policy[1] (the fact that you can cancel your physical magazine subscription online, but not the website subscription is hilarious). I would LOVE to know if they have support for online cancellation for California customers, that they just disable for non-California customers. I've heard of companies doing things like that.

Not sure if Playstation did the same, but Xbox doesn't make this difficult anymore anyway. In fact, recently I started a Game Pass subscription on my Xbox account and used it for a couple of days. Then I realized I should do it on my main Microsoft account instead (so I don't have multiple accounts anymore), so I cancelled. They gave me a full refund automatically without me needing to ask or do anything. So companies do change, although I imagine it's just easier to implement it this way anyway; phone-based customer service is really expensive.

[1] https://www.americastestkitchen.com/support#change-membershi...


I was actually going to buy a subscription but luckily I learned about that policy and never did.

Now I use a privacy card for all subscriptions to avoid the hassle


This is my perspective exactly. It's hard to be mad at someone for optimizing for their own desired outcome.

I'm mad that it works.


I once got a marketing email from British Airways with no "unsubscribe" link, I emailed back telling them to take me off their list as I never asked for marketing emails when I booked my ticket.

The crazy thing is they replied, but refused to take me off the list unless I sent them an actual physical letter in the post. A few emails in they claimed it was due to a "technical" issue. That BS annoyed me so much, I am now into the 3rd decade of my own personal British Airways boycott

Edit: I realise that's insane, but it makes me giggle every time I deliberately don't book BA, and I wonder to myself how much money I would be willing to lose by going for the next most expensive ticket, just to keep my boycott going.


I engage in petty boycotts like this.

Even if it’s PEANUTS to them. I feel better, because at least I’m not participating in perpetuating a shitty system.

I feel especially good about it when those companies are ubiquitous and hard to avoid, because I feel rather righteous against an all encompassing behemoth that likely would have got my money otherwise.


>I once got a marketing email from British Airways with no "unsubscribe" link

I just use Gmail's "Report as Spam" feature in these cases. If enough people do it when they can't unsubscribe easily, it's gonna start eating into their deliverability.


I just mark as spam, every time. Hopefully it tanks their deliverability a bit.


I'm glad I'm not crazy, or at least not alone in my craziness !

I end up engaging in the same type of behaviour with companies that also do not make unsubscribing obvious, or, worst of all, have slightly annoying GDPR-mandated tracking-denying UIs. Trying to get me to go through two sub-menus and not having a "Deny all" toggle ? Good, good, good. See how petty I can get, $COMPANY !


> The crazy thing is they replied, but refused to take me off the list unless I sent them an actual physical letter in the post. A few emails in they claimed it was due to a "technical" issue. That BS annoyed me so much, I am now into the 3rd decade of my own personal British Airways boycott

I'm surprised BA even had email in 1991 or earlier. Since britishairways.com doesn't even make an appearance in the Wayback Machine until late 1998, perhaps we can forgive them their anachronistic practices of the day.


Sorry if this makes everyone feel older... but the 2000s, 2010s, and now 2020s... makes three distinct decades.


Thanks for the perspective.

If someone says they are "into the 3rd decade of" something (for example, programming experience), I would generally assume that means 30 years or more.

But what you're saying is that it could mean as little as 11 years, e.g., from 2010 to 2021 (yes, 2010 is still part of the '00 decade¹ and 2021 marks the beginning of the '20s, strange as that was to me).

¹ https://www.farmersalmanac.com/new-decade-2020-or-2021-10090...


To me, the natural interpretation (at least in contexts where calendar-decades don't have special importance) would be 20+ years. The first decade is the first ten, the second decade is the second ten, and the third decade begins after that.


"Into the n-th decade of X" sounds to me (a non-native speaker) like the n-th decade of X has started but is not fully completed yet, meaning n-1 full decades plus something.


Yes, that's a great point. I was trying to remember when it happened, and I think I was about 23, which would make it 21 years ago.

It's much more fun for me to think of that as "into the 3rd decade", rather than just over 20 years ago.


I know the majority opinion in the US is "less rules is good", but as a counterpoint: European consumer protection laws prohibit this; if you can sign up online, it's mandatory to offer online cancellation without extra hurdles. Sometimes having some rules is a good thing...


Europe also goes through the pain of developing rules and laws that are incredibly detailed and specific. The US just doesn't have that culture. The law will be vague, and it'll require at least a few lawsuits before it gets settled as to what it actually means. This, and the general misuse of lawsuits here (I mean "Do not iron while wearing the shirt"), is why people are against laws.


That's the difference between Civil Law and Common Law. Common Law also has its good points: it's more flexible and grows organically, whereas Civil Law is more rigorous and programmatic.


That's interesting if true, because my gym in France (Neoness) just made me turn up in person to cancel my rolling monthly subscription, despite the fact that I signed up online. (Obviously I will never, ever be giving them my money again regardless.)


As far as I'm aware the letter of the law is you can cancel with a formal letter (lettre recommandée avec accusé de réception), and that's the most bothersome it should ever get. They shouldn't have the right to ask that of you, and it's likely they're just taking advantage of people.


Do you have a reference for that? Struggling to find one, and very interested.


Here is an English version of something the Dutch regulator wrote: https://www.acm.nl/en/publications/acm-consumers-should-be-a...

And the English version of the government website about it: https://business.gov.nl/regulation/automatic-renewal-subscri...

Which says: "Consumers must be able to cancel their agreement in exactly the same way as they signed up for them."

It also disallows automatic fixed-term renewal: after the initial contract, the customer must be allowed to cancel at any point, not just yearly. This one had a big impact on the telecom industry a couple of years ago.


I want to hit that sweet spot where my bad behavior is unconstrained by the rules but everyone else's bad behavior is.


This is one reason I won't purchase a subscription any longer unless they accept Paypal, which makes it ridiculously easy to terminate recurring payments to them (easier than calling my credit card company to dispute a charge). Having such a kill switch was how I was able to cancel my NYTimes subscription (another org notorious for making it impossible to cancel) without going through the same kind of hassle of dealing with a sales rep trying to keep my account.


Not just short-sighted; this sort of thing is why I don't try anything that involves a subscription. It's also why I make them pay the Apple Tax and subscribe through Apple instead.


>It's so short-sighted too.

No it's not. The VP is maximizing their bonus and career growth within the company. That is likely tied to relatively short term metrics and especially to not having drops in metrics.


Just to be clear, you confirmed that it was a short-sighted decision.


No, there is no negative long term consequence for the VP so it is likely the optimal decision from both a short and long term perspective.


I guess you're right ....or anyway, it's only short-sighted if you're not a complete piece of shit


Engineers often practice resume driven development. This is basically the VP version of that. Maximizing personal goals over the goals of a bunch of stock owners that would fire you in a heart beat if it increased their share price.


Maximizing personal goals over the goals of a bunch of selfish jerks who just want to use you is one thing. Maximizing personal goals at the actual certain major direct expense of your users is being a piece of shit


Similar story. I contributed to This American Life monthly for a while. I had set it up online. I wanted to switch credit cards and could find no way to do it (they had redesigned their website). I couldn't even find a number to call. Sent some random emails with no response. I think I ended up changing my old credit card number, and I'm now very, very skeptical about recurring payments.

On the other hand, it's quite well-known how easy it is to stop/start a Netflix subscription.


> Of course, even though you can sign up online, you have to cancel on the phone. On hold for 20-30 minutes during work hours, then talk to the rep, then listen to their retention offer, then it's successfully cancelled.

I haven't been in this situation, but I always imagined that there is a simple way out: send them a certified letter instructing them to cancel your subscription. If they continue charging after that, it's chargeback time.

Any opinions on whether or not this would work?


Chargebacks are inadvisable in case there is digital content associated with the account. In every terms of service I've ever read there is a clause saying the consumer cannot perform chargebacks. They're likely to nuke your account for "fraud" if the bank reverses the transaction, potentially causing thousands of dollars in losses.


So DRM is once again shown to be evil.

And digital "purchases" aren't really purchases.


>there is a clause saying the consumer cannot perform chargebacks.

I'm pretty sure a business that accepts credit cards cannot decide to 'opt-out' of chargebacks!


https://store.steampowered.com/subscriber_agreement/

Says all purchases are final. You may only request a refund and they reserve the right to deny it. Forcing it via chargebacks basically locks your account.

https://www.playstation.com/en-us/legal/psn-terms-of-service...

Reserves the right to suspend and terminate accounts associated with chargebacks.


That has worked for me before, with a company that explicitly said they do not process cancellations via support form or email. I sent an email saying I wished to cancel, that I considered the email reasonable notification, and that I would charge back any further charges. They cancelled the account.


You can also just chargeback the charge on your credit card. You’ll probably win if they make it hard to cancel and they’ll get the message fast


The answer would be in the user agreement. If it does not enumerate permissible ways of cancelling, then yes. Otherwise idk but probably not. Contract language which limits methods of cancelling is likely enforceable unless there’s a statute prohibiting it.


> It's so short-sighted too.

> How many people decided to get the new Playstation next time because of a frustrating experience cancelling their xbox subscription

If you make it hard to unsubscribe when people are short on time/money/interest, they will probably be less likely to resubscribe when they have the time/money/interest.

Long-term customers on something like Xbox Live have a larger incentive to resubscribe to recover access to their game library.

On the other hand, random web site X is probably just looking to churn through subscribers.

It's something of a tragedy of the commons; the incredible difficulty of unsubscribing from (everything that's a monthly bill) makes people wary of subscribing to anything.


It is not short-sighted and I think it's a disservice to spread that theory because then people can just shrug and say: "a more profitable & moral company will outcompete them". If you instead believe that locking people in is profitable, the next step would be to make laws against it (if you signed up one way you need to be able to cancel via the same method; phone, internet, fax).


While I suspect that in most cases (at least for smaller brands) making it hard to cancel pays off, I 100% agree there is a measurement problem.

If you are optimizing for money made in the next month - or even next two years - then by definition making it hard to cancel will bring in more money. But it does hurt the brand long term (which is harder to measure)


In 2008 or so I had a bad experience canceling Sirius Radio, before they were Sirius/XM and before it was offered by car manufacturers. Now I refuse to subscribe in any car I own, simply because of that experience.


It is only true if you measure it. ;)

Companies rely too much on analytics.


There needs to be a law that any subscription that can be signed up for online should be cancelable online. I'm not in favor of just implementing new laws willy nilly. But it's clear that companies are never going to do this on their own.


Europe has that law. Not all countries have fully implemented it yet but there is a deadline for all to have it.


So the rest of the story is that Ohio was the first state to pass a law requiring that users be able to cancel subscriptions online. So we had to add it back in, but only for users in Ohio, and it was buried on the site behind like 7 pages of "account saving" stuff.

Eventually most states passed similar laws and so we opened it up to all US accounts. I'm not sure what the experience is like today.


California has that law


But in California companies can (apparently) still require you to chat with a real person online and thus still need to do everything synchronously, wait in a queue, and argue with them for several minutes before they will cancel the subscription.

Personally I think that any time you grant someone permission to bill you automatically on a recurring basis, you should be able to revoke that permission. This ought to be a fundamental mechanism of personal banking, one you can manage on your bank account online. It's astonishing to me that your bank can't even tell you all the ongoing recurring payments that are permitted on your account (or if it can, it's an ad hoc implementation that tries to detect recurring payment amounts, vendor names, etc.).


Which is great for California, but what about everyone else?

I think that was the point of the comment.


Often times if you change your address to be a CA address, websites will suddenly reveal an online cancel option. I swapped my parents' NYT account to be my CA address and suddenly "click here to cancel" was displayed on the website.

While I agree it should be a default setting, it's a useful little trick.


Just be careful if the CA Franchise Tax Board ever gets wind of it, they'll come after you for 10% income tax for the rest of your life.


> Which is great for California, but what about everyone else?

Vote in the next election for people/party that are more consumer friendly than industry friendly.

For all its flaws, CA has some of the most consumer- and employee-friendly laws (such as no non-competes).


I feel the lack of non-competes is one of the main reasons tech took off in CA. That and the weather. People say starting a business in CA is hard, and it can be, but for tech it is so easy: I can just quit my job and start my company.


> but what about everyone else?

Vote.


I think Germany recently voted for this as well, so we will have it soonish :-) If you can sign up online, there needs to be a similar way to cancel.


This has been law in Germany since October 2016. See § 309 Nr. 13 BGB.


But what would actually happen in Germany is that you would be stuck with that service due to a 3-year contract, which you have to cancel some three months in advance or it renews for the next three years.

But hey, we get startups whose USP is cancelling services.

It seems pretty normal to everyone I complain to that I still pay my O2 bill after sending the cancellation letter (a physical piece of paper) for the umpteenth time.


Never heard of three year contracts. It's been a long time since I signed any two year contract. Most contracts I have (Internet, Gas, Energy, ..) are annual contracts. My mobile contract is cancelable monthly.


I think California has something like that.


We do indeed! Doesn't stop many people from making it a pain in the ass but stops some.


I was once a subscriber to the NYT, and will never be again because of this. I wonder if they are actually making money with this tactic or losing.


They are the most subscribed-to print/digital media source in the United States. Their market share is considerable and they have the reputation to bring in new subs. They offer college students free/heavily-reduced subs to 'get 'em in', etc. This tactic sucks for the consumer who wants out, but the frustration keeps them from picking up the phone and committing to cancelling, because 'hey, it's only x dollars/month'.


We should be able to just tell the credit card company to cancel recurring payments. They'll try to charge the card and get denied.


As far as I know, you're still on the hook for contractual payments, like most gym memberships and cell plans.


Credit cards should be able to send a "Cancel" signal that companies should be obligated to respond to. Problem solved, no need to trust third party websites (although I'm fine with them also being obligated to have an easy way to cancel on their site).


Of course you would be, just like if you maxed out your credit card or closed your credit card account. This doesn't pose a new or unique problem.


Capital One privacy cards are amazing for that. They paid me $250 to open one, with 2% cash back and unlimited privacy cards.


You can.


> You can.

My understanding is that this will be reflected in credit reporting as delinquency, which seems like a lose-lose in that sense. Can someone who has done this weigh in?


I requested that from one card provider and they told me that they couldn’t and I’d have to get a new card number to stop the charges.

When you say “you can”, are you just saying “you can ask”, or that you’ve had success doing this?

If the latter, mind sharing which card provider that was with?


This is how all of my cards are. Can't cancel recurring payments unless it's fraud. They must issue a new card.

If there's hope for beating this dark pattern, it lies with banks/CC companies


If I was a lawyer, I would start a service for cancelling subscriptions.

I.e., user can cancel subscription on website, system automatically writes cancellation letter, letter is sent to corresponding company address. If cancellation fails, then an official complaint is served to the company, and from there it follows the usual court process.


"We fantasized about telling the VP to stick it and quitting en masse, but we knew it wouldn't change anything. We'd just be replaced by someone that would."

That's a very bad argument. If you guide yourself by that logic then you can't really blame anyone for doing anything, because everything is justified.


He isn’t applying the logic to everything, he is applying it to one thing.

It’s possible for the argument to be true in this context and false in another context.


so I take it you don't have a mortgage or a family to support?


But that is a different argument with a different justification: "we knew it was unethical, but we needed the money (and couldn't really personally afford taking this stance today as we didn't have alternative ethical jobs lined up that could pay as much; <- implied) and so, on the balance, we are OK with our decision" is simply not the same as "someone else would have done it anyway, so we can feel OK that we did it ourselves".


It is actually possible to be an ethical person and pay the bills, it just takes a bit of effort.


>we knew it wouldn't change anything. We'd just be replaced by someone that would.

Ye goode olde self-fulfilling prophecy.


If I need a job, if the money is right my ethics are negotiable.

This is the norm.


Sure; but then don't also try to justify the action by saying someone else would have done it in your place? "I need a job, even an unethical one, and refuse to feel bad about that" is a completely different argument than "I would have been replaced by someone who would have done the same, and so don't need to feel bad about it".


I think history has proven that there will always be someone willing to do what hundreds of millions of people won't.


Certain bad things that people used to do don't happen anymore. This must mean that, in general, actions really do matter -- even if it is very hard to know ahead of time which ones will have a significant impact.


Thank you for sharing this Microsoft story. One way to interpret it is that the only people who would hire you and your colleagues and compensate you the most were [insert unpleasant description]. I say that sincerely as someone who many years ago worked in the software industry and then left to work in other industries. I was not a developer, but I worked with management on a daily basis, and I saw a marked difference in character between people in management in the software industry versus the others.

It is worth considering why you could not all quit en masse. Who else would hire you? What would they pay you? To be fair, as far as "dark patterns" go, specifically making cancelling subscriptions difficult, I think that existed long before software companies. However, it is food for thought to consider what kind of industry is employing and compensating you the most for your "work", and what other industries are not.

None of this implies any "blame" targeted at anyone; it just highlights what monetary value society places on programming and the cast of characters who set that value. I worked alongside programmers in other industries besides software, and we never asked them to do anything unethical. Some of them arguably could have tried to work at Microsoft. They had freedom of choice.


This really has nothing to do with software as a product sector. XBox subscriptions aren't even subscriptions to software, right? It's subscriptions to game content, which is no different from subscribing to any other form of media.

The one I'm still hooked on, which this reminds me I've now given something like $3,000 to over 3 years because I've been too lazy to make a phone call and wait on hold, is the YMCA.

Even religious nonprofits are engaging in dark patterns.


> but we knew it wouldn't change anything

Maybe the immediate threat of developers quitting wouldn't have, but I think something might have eventually. Probably change from the top. My recent experience with the Xbox Game Pass app on Windows let me cancel my subscription fairly easily on both the console and the PC (two different accounts) without ever having to call someone. I like to think the culture at Xbox has changed, maybe not Microsoft, but maybe it will trickle up eventually.


> So we complied, but we all lost a little bit of our faith in Xbox that day.

You mean shady subscription business models right? Because Microsoft is certainly not alone.

Just the other day I had to CALL (as the only option) Network Solutions to cancel a security product.


>You mean shady subscription business models right? Because Microsoft is certainly not alone.

So what if Microsoft isn't alone? Their (the poster's) experience allows them to know unequivocally that Microsoft and the employees that work there make decisions that don't align with their personal ethics -- so they take out their anger on Microsoft.

the statement '...but we all lost a little bit of faith in shady subscription business models' makes no sense -- the shady ones don't self-identify, and it's not the methodology that is generally despised, it's the entities that employ such methodologies against the public.


Over a decade ago, I worked at a well-known web site, in one of the smaller properties. The site looked like it was from the mid-90s. A year-long effort was made to upgrade the site to something more modern, and it was beautiful.

It went live, and almost immediately the number of ad clicks dropped significantly. It turned out that the older site was so hard to use that customers would inadvertently click on ads, bringing up the revenue. The new site was so much easier to use that customers clicked on fewer ads.

The changes were almost immediately rolled back.

I think eventually the new site was put back in, but only after they ensured that ad clicks wouldn't go down precipitously, and I had left by then.


Dark patterns that require you to call the company should be illegal... like criminally illegal. Edit: how is this a controversial take?


Someone should make a website that rates services by ease of unsubscribing and include a link to the unsubscribe link if one exists.

A few publications that I enjoy are on my "Never subscribe" list because of the difficulty of unsubscribing. On the other hand, unsubscribing from HBO Max (at least via Apple TV) was so easy that I don't hesitate to unsubscribe and resubscribe, and have done so a couple of times.


> Once word spread that you could do it on the web, huge numbers of customers, that had been stuck paying for an Xbox Live Gold subscription they weren't using, began cancelling.

Unfortunately that's the allure of dark and customer-hostile patterns - they extract a lot of money from people, at least in the short term.

If you're Microsoft/Xbox, though, you might want to think a bit about long term.


> We'd just be replaced by someone that would.

IE, you're part of the problem. You're just rationalizing to make yourself feel better.


> IE, you're part of the problem. You're just rationalizing to make yourself feel better.

That's not true. Governments exist to deal with this kind of situation. If companies make it difficult for citizens to cancel subscriptions, it is time to regulate subscriptions.

Why is it the responsibility of some developers to lose their jobs to stop a company from doing something that is completely legal? If they had done that, they would have my gratitude and admiration. But the developers are not "part of the problem". Microsoft is the problem, and the lack of regulation is the other part.


"Microsoft is the problem" <- but "Microsoft" is nothing more than a bunch of people who apparently make problematic decisions in a large group because they think they can pass the ethical ramifications off to someone else. You can fix specific cases with regulation, but we shouldn't have to have laws on the books for every single scenario to make sure people do good things, particularly if they know the difference and just don't consider it worth their trouble that day to act reasonably. If you are told to go do something bad, and you go and do it despite thinking it is bad, you damned well better have a great sob story to tell me about why you felt you couldn't get an ethical job or why you absolutely needed the money you did, or you absolutely don't get some pass out of the ethical decisions involved: "someone else would have done it" isn't sufficient.


Only in the same sense that if you dodge a bullet, you're responsible for whatever person got killed instead of you.


Not that I think your analogy is a reasonable one anyway--as, if nothing else, you took the action here, picking up the gun and being told to fire it--but it doesn't work even if you pretend it makes sense: if no one ever tried to dodge bullets then someone always gets shot... but if everyone tries (as they should) then maybe no one gets shot; it is simply incorrect to assume that someone else is going to get shot and so it may as well be you (god this analogy is weird ;P).


If you bypass the whole "attack only the points of the analogy which do not relate to the argument" and focus on the points of the "analogy" that make it a suitably demonstrative specialization of the more general argument (and thus a formally valid argument, which in general an analogy is not, and thus neither is a strawman counterargument), then:

General Premises:
1. Agent A enforces Action X onto Agent B.
2. Action X hurts C with some probability. Not Action X hurts B with high probability.
3. For all persons D under A, the probability of Not Action X is vanishingly small.

Reasonable Argument:
1. If B performs Action X, bad things happen to C. This makes them partly responsible. However, under premise 3, B's counterfactual contribution is vanishingly small, whereas A's contribution is close to 100%.

Unreasonable argument: 1. As the final link in this particular chain of events, B is wholly responsible for the harm done to C, because had they suffered harm with high probability, they would have ensured the reduction of the probability of non-harm to C by a vanishingly small amount.


A few years ago, I had an NYT subscription. I had to cancel it for some reason, and the cancellation flow was some of the worst UX I've ever experienced. You had to actually live DM with some poor sod who was being paid to try to get you to change your mind like half a dozen times. It was worse than old GoDaddy upsells... worse than cancelling Photoshop (which, by the way, Adobe will drop the price of at least twice if you keep trying to cancel, cable-company-style).

I would have resubscribed shortly thereafter if the cancellation flow had been good, or even just average. But because of how bad it was, I have never resubscribed, and most likely never will. Their cancellation UX cost them 5+ years of subscription revenue from me. Not only that, but I go around telling people about how bad it was.


Why not fix it for everyone with an IP address that's not in your boss's area?


You think the boss even knew how to use that feature, or hell, even had an Xbox? He just saw an Excel sheet with cancellation numbers and asked someone what was happening.


Because subscriptions would keep dropping, the reasons would be investigated, and there's a high risk that the people responsible for the fraud would do time?


Jail time for refusing to implement a dark pattern?


No, for defrauding your employer. Having your heart in the right place is not a reliable legal defense.


> We'd just be replaced by someone that would.

Unionize.


That's treating humans like thermodynamic particles: if one way is easier than the other, things will end up on the easy side eventually.

That is human dignity from the sales point of view: that of a particle.


This is one of those things that needs to be straightforwardly and strictly regulated. Mandatory one-click unsubscribe.


> We fantasized about telling the VP to stick it and quitting en masse, but we knew it wouldn't change anything. We'd just be replaced by someone that would.

I sympathize with you. If you had quit, you would have my admiration and gratitude. But what you did is reasonable. Microsoft was not doing anything illegal.

This is the reason why regulations exist. If companies are abusing consumers by making it difficult to cancel a service, developers cannot be the ones responsible for bringing justice. Governments have that responsibility.

> I ran the diff. Each file had the same change - they had added code that makes an ajax saveEmail() call onBlur. In other words, email addresses were being saved to the database when a user inputs an email and the input loses focus.

E.g., the situation in the article is addressed by the GDPR. That code would completely violate the GDPR, which requires informed consent before saving personal information like an e-mail address.
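For anyone who hasn't seen this pattern in the wild, the quoted onBlur behavior can be sketched roughly as follows. All names here (makeBlurHandler, the /api/saveEmail endpoint) are illustrative, not from the actual codebase in the article:

```javascript
// Hypothetical sketch of the onBlur exfiltration described in the
// article: the handler fires the moment the email input loses focus,
// before the user has submitted anything or consented to storage.
function makeBlurHandler(send) {
  return function onEmailBlur(event) {
    const email = event.target.value.trim();
    // Only ship values that look like an email address.
    if (email.includes('@')) {
      send('/api/saveEmail', { email });
    }
  };
}

// In the page it would be wired up roughly like:
//   document.querySelector('#email')
//     .addEventListener('blur', makeBlurHandler(postJson));
```

The point being: the network request happens on focus loss, so abandoning the form does nothing to stop it.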

If your government is not solving these issues, it is time to get a better one. I wish people, including in this thread, would stop blaming employees for the faults of badly acting corporations. Corporate morality is a good example of the total being less than the sum of its parts.


Also, imagine being that VP's bf/gf and trying to break up with them


Anything that makes me call to cancel never ever gets my business again.


If you can sign up without a call, you should be able to cancel without a call. It's the #1 reason I'm pro-Apple enforcing its IAP stuff.


And this is why I find the apple walled garden to be pro-consumer. I’m happy to pay the Apple tax in exchange for being able to cancel subscriptions easily.


I was doing my taxes on taxact a year or two ago, when I noticed a small lag every time I switched fields. Every time I went to the next field, a network request was being made to some third party analytics thing with the data I entered.

Somewhere in the back of my mind I know that all data I enter online is inevitably being hoovered up and used for god knows what, but when you're suddenly made aware of it, it's really unnerving.

Something about this makes me want to have a 'falling down' moment. Let me get this straight: not only is our tax system so complex and error-prone that I have to pay money to a third party to figure out how much I should be paying our government, but the software company I pay then turns around and sells my data? The government does nothing to remedy this? It really goes to show who our government serves, and it sure as hell isn't 'the people'.


A brief reminder, especially when talking about dark patterns, that Intuit and its peers have spent 20 years making this a reality and cloaking their legal obligations to provide free options, as detailed in https://www.propublica.org/article/inside-turbotax-20-year-f... :

> Internal presentations lay out company tactics for fighting “encroachment,” Intuit’s catchall term for any government initiative to make filing taxes easier — such as creating a free government filing system or pre-filling people’s returns with payroll or other data the IRS already has. “For a decade proposals have sought to create IRS tax software or a ReturnFree Tax System; All were stopped,” reads a confidential 2007 PowerPoint presentation from an Intuit board of directors meeting. The company’s 2014-15 plan included manufacturing “3rd-party grass roots” support. “Buy ads for op-eds/editorials/stories in African American and Latino media,” one internal PowerPoint slide states.

> The centerpiece of Intuit’s anti-encroachment strategy has been the Free File program, hatched 17 years ago in a moment of crisis for the company. Under the terms of an agreement with the federal government, Intuit and other commercial tax prep companies promised to provide free online filing to tens of millions of lower-income taxpayers. In exchange, the IRS pledged not to create a government-run system.

> Since Free File’s launch, Intuit has done everything it could to limit the program’s reach while making sure the government stuck to its end of the deal. As ProPublica has reported, Intuit added code to the Free File landing page of TurboTax that hid it from search engines like Google, making it harder for would-be users to find.

> What is clear is that Intuit’s business relies on keeping the use of Free File low. The company has repeatedly declined to say how many of its paying customers are eligible for the program, which is currently open to anyone who makes under $66,000. But based on publicly available data and statements by Intuit executives, ProPublica estimates that roughly 15 million paying TurboTax customers could have filed for free if they found Free File. That represents more than $1.5 billion in estimated revenue, or more than half the total that TurboTax generates. Those affected include retirees, students, people on disability and minimum-wage workers.


The audacity they have to run ads promising to simplify taxes when they're responsible for that complexity is amazing.


Saying "it wasn't me, it was my boss who created the situation, I'm just trying to make people's lives better within these constraints!" is the best way to calcify and perpetuate those constraints.


Oof this hits close to home these days. I just want to scream "we're doing this to ourselves!!!"


We must keep filing taxes complicated, for the sake of Capitalism! This reads like Huxley.


Consider installing Local Sheriff: https://addons.mozilla.org/en-GB/firefox/addon/local-sheriff... It detects these abuses and lets you know.


Letting JavaScript initiate requests or mutate data associated with forms was a huge mistake.


I mean... in a way... it is "serving its people"...

in a "To Serve Man" sort of way ;) https://en.wikipedia.org/wiki/To_Serve_Man_(The_Twilight_Zon...

Installing uBlock Origin should help block most of those nasty analytics hoovers.


This happened to me with GMail recently. I fat fingered and accidentally opened dev tools and saw every single keystroke triggered a request back to home. I obviously knew this was happening, but seeing it happen in real time really, really unnerved me.


Gmail autosaves emails as drafts while you’re typing, so at least in that case there’s an excuse.


They also brought the dark pattern of automatically uploading and saving any file you select in the file upload control, even when you mistakenly selected the wrong file.

For more than twenty years, we had to click "upload" or the like to confirm the upload of a file to a server. I wonder how many millions or billions of files Google steals (yes, that's theft) from gullible users who use their web interface.


How is it a "dark pattern" to upload a file that you selected to upload? And how is it "theft" to accept a file that was selected for upload? What benefit does Google get from this "theft"?


Well, that is my opinion.

If I take an analogy: think of it as if you were typing a message in an IM chat window. Anything you'd type would be sent immediately to the service owner, whether or not you click "send" at the end. I understand this feels normal to you, but I personally find this abusive and total treachery from Google.

To support my point: I used to have access to a corporate proxy that performs TLS inspection for a little less than 50k employees. I looked into this specific scenario to get an idea of how many employees were being abused by this. It is quite easy to find: just filter for the queries sent to GMail's API when you cancel a file upload or remove an attachment from a draft. We have several hundred of those events each year in our logs.

These people quite candidly think they "cancelled" the upload, but it went through anyway. Google got the file, analysed it, extracted knowledge out of it and potentially adjusted your profile and your employer's. This falls under the definition of a dark pattern.

There is something worse than using GMail for your emails: it's using GMail's web interface.


> Google got the file, analysed it, extracted knowledge out of it and potentially adjusted your profile and your employer's

How do you know this? It's against their stated privacy policy.

This is also a kind of strange position to be arguing when you just described snooping on people's secure communications. Why is that OK and what Gmail does isn't?


I don't think it is a strange position; what you seem to describe is actually completely illegal... You may have jumped to conclusions too rapidly. Let me try changing your mind :)

We do not prevent our users from using GMail for personal use, but we analyze the traffic for malicious or deviant patterns to detect potential data leaks. Analyzing the number of users who inadvertently send a file in GMail then rapidly "change their mind" is still, in our shared opinion, a relevant analysis to perform. We report the number of quarterly cases in an awareness newsletter and remind users to be attentive when selecting a file to be uploaded in webmails.

Good thing: the occurrences have decreased since we raised this concern. We cannot prove it is linked to our newsletter, but our "ego" enjoys believing so.

I would have agreed with you if I actually had both a username and a filename shown on my screen, but this does not happen. Our report shows the total count of cancelled uploads per country.

We have someone at HR who can actually see the detailed events (including file names) for a given user. It seems to me that this would match what you see as "snooping". But this is an investigation led by Human Resources upon a suspicion of employee misbehavior. It has nothing to do with our infosec activities.

I hope I changed your mind, at least a little ;)


My point was that in both cases the potential for abuse exists but that users have to trust that existing policies will prevent that abuse. Why should your users trust you and not Google to follow your stated policies?


In my opinion, we offer a more trustworthy relationship in three ways:
1. We don't use ambiguous wording in our policies.
2. Any employee interested in knowing how we process these requests can actually come physically and see in detail how they are processed.
3. Our business does not depend upon our ability to harvest our employees' data and monetize it.


I’ve been using taxact for 4 years, and have found it better than TurboTax. This year something seemed off — quality issues and I think they were pushing for more upgrades. More dark patterns maybe, or at least grey.

Did something happen? Did they change ownership? Any good alternatives to taxact?


Tax preparation is one of those places I've found that going to a person rather than a website actually really pays off. TurboTax et al handle simple cases fine but if you have a complex situation (I own a home, owned a business, etc etc) having a person who's in the room with you and can directly ask you questions and interpret data that you've got on hand is invaluable.

My tax preparer is something like $250 a year and reduced my liability in a very tumultuous year from about $20K to about $9K by understanding what I actually had going on and helping to work through it.


And an actual tax preparer, not a script reader like H&R Block.

The first year I moved to the US, I had a whole bunch of things going on. Buying a house, working from home, buying a hybrid car etc., getting married. I had no real idea about the tax system, so saved all our receipts etc., and went to H&R Block.

"So what do you want to claim?"

"What can I claim?"

"What do you mean?"

"Here's a bunch of receipts and I can tell you all the details."

[vaguely confused look] "Let's go through the app."

And I watched as basically she transcribed our most basic information into their in house version of something akin to TurboTax.

I could have done that myself.

I complained. And did, eventually, get someone there who knew how to not be a glorified transcriptionist.

And then the next year found someone who had knowledge of their own.


Yup, that happened to me. I paid something like $250 for a tax preparer who just entered the info into TurboTax.

It hit a weird case and she didn't know what to do. Eventually I looked up the actual tax code and told her what to do. As far as I remember it was about splitting the cost basis over multiple years. I owned some weird stock where that came up.


AFAIK, TurboTax does sell access to a version of their software for the script reader, too. ;)


I recommend CreditKarma Tax. Intuit purchased them to destroy them but was forced to sell that division to Square, if that's any indicator of how much of a threat they are to Intuit's revenue stream.


Me too. I have the exact same tax situation as last year, but now it pushed me really hard to upgrade, and at the very last step, it told me that something I entered on page 1 meant that they required me to go up by 1 pricing tier. They VERY CLEARLY made this product so that it wouldn't tell me that I needed to upgrade until I got to the very last step, so that I would feel intimidated at the idea of starting over with someone cheaper. So I switched to TurboTax, went through the same thing, and wound up paying $10 more as an F-You to TaxAct. Although turbotax did the same thing to me, so my moral high ground is pretty imaginary.


Pro tip: TurboTax Deluxe offline version (not the web based one) has all of the forms and schedules, even complex stuff like schedule C businesses, rental properties, and partnerships. They’ll try to upsell you to Premier or Business for “extra guidance,” but you can just say “no keep going in Deluxe.” Deluxe with State can usually be had for $40 or less, but it’s $60 even at full retail.


Sounds like it was not only shady, but incompetent in making the calls synchronous.


Related PSA: many customer support chat integrations on websites today give the rep a real-time view of all input changes before you actually send anything.

Think twice before pasting unknown clipboard contents, typing while angry, etc. Best: explicitly copy your finalized reply from another app and paste it into the chat.

(I confirmed that first hand when a support person replied to specifics of my message while I was editing the phrasing.)
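For the curious, this kind of live-draft capture is typically done by listening to input events on the chat box rather than waiting for "send". A minimal sketch, with all names (makeDraftSpy, the message shape) being illustrative assumptions rather than any specific vendor's API:

```javascript
// Hypothetical sketch: stream every draft revision to the agent as
// the visitor types, instead of waiting for them to hit "send".
function makeDraftSpy(send) {
  let last = '';
  return function onInput(event) {
    const draft = event.target.value;
    // Skip no-op events so we only ship actual revisions.
    if (draft !== last) {
      last = draft;
      send({ type: 'draft-update', draft });
    }
  };
}

// Wired up roughly as:
//   chatInput.addEventListener('input', makeDraftSpy(socketSend));
// The agent's console replays these events, so deleted insults and
// pasted clipboard contents are visible before anything is "sent".
```

Since the 'input' event fires on every keystroke, paste, and deletion, the only defense as a visitor is to compose elsewhere and paste the final text.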


Keep posting about this stuff. I can tell you this is a feature our customers often ask for, but we (https://Olark.com) refuse to implement; we do lose business over this.

I'd like to believe the karma for not letting any of our customers be creepy (and/or violate their customers' privacy), pays for the lost business many times over.


You guys really ought to advertise those kinds of things more. The only reason you're losing business over it is that your customers' customers aren't aware.

The least you could do is post a blog post about the dark patterns you refuse to unleash on your customers' customers and why.


Seconded, I’d like to read a blog post about that.


My company recommends Olark to our customers, and I wasn’t even aware of this, but knowing it makes me more comfortable. Thank you for not implementing this creepy feature.


There are companies out there whose marketing automation platforms do this by default when you use their embedded widgets.


This one, IMHO, isn't that creepy. Or at least there is a legitimate purpose. It gives the agent information a few seconds quicker and those few seconds add up to real money over time.


> Related PSA: many customer support chat integrations on websites today give the rep a real-time view of all input changes before you actually send anything.

Related Related PSA: Tools like Logrocket record full-fidelity videos of your entire session on a website. They're great for debugging. It also means you should assume that every website you use has a video of everything you did that people can scrub and search through (it's more than a video actually, it's capturing and re-rendering the DOM, network traffic etc... like I said, amazing for debugging)


https://www.rrweb.io/ is a cool open-source alternative. We use it with Sentry to see what causes our exceptions.

Also with a lot of these tools you can configure them to redact user input so you don’t capture too much.


We use FullStory, which does the same. It's great not only for debugging; our company is fintech, and the fraud and compliance departments can't live without it.


How do I block this?


Block javascript using something like NoScript (which will still let you allow javascript on specific sites you explicitly approve, but will block JS from running by default).


Something like

  ||*.logrocket.network^
  ||*.logrocket.io^
  ||*.lr-ingest.io^
  ||*.logicanalytics.io^
in uBlock/ABP filtering rules would be a good start. Not sure what logicanalytics is exactly, but it's used on logrocket's main page. Using NoScript like the other commenter said is safer though (this won't catch on-premises deployments of logrocket and it will stop blocking it when they change the domain).


You'll want to add hotjar.com to that list. They do the same thing.


Best to remove `*.`, otherwise your filters will match only subdomains of those domains -- unless that is really what you want to accomplish.


I really dislike this because it violates the user's assumption that the action they are in control of, "send message," represents a hard barrier between committing and not committing to their wording.

Unsavory.


We had a sales guy pitch exactly this when I worked at a big media corp. The big boss on the marketing side just happened to be in the room at the time, and told everyone that no one was ever to work with this company. I was impressed.


I know this is probably just a me problem, but man, that makes me feel kinda bad for all the times I've angrily typed out an insult-laden chat message into customer support chats for cathartic reasons before deleting it and sending something nicer.

Not entirely bad, because recording what I type without my hitting send is shady asf on the company's part, but it makes me feel bad for the human on the other end of the chat.

Anyways, thanks for sharing, I'll have to keep this in mind in the future and maybe do less angry-typing-and-then-backspacing.


Can confirm that Salesforce integrated chats do this


I wonder whether enterprise slack does.


really? how do you know? and which "chats" ?


Something similar applies when you're on the phone. When you're on hold, the agent can hear everything you say most of the time


100% emergency services can hear (and record) every word you say while you're talking to 911.

Am aware of several folks going to prison from confessions when they thought they weren’t being heard.


But what is the point? Why do you want to know, e.g., if the customer is so angry that they want to haunt you in your dreams like Freddy Krueger?


This is how chat apps should work or have the option of working. Why make you look at an animated “…” until I’m satisfied with my message when you could be watching me write it instead? It’s closer to how conversation works and we have the bandwidth and the latency.

It’s immoral because you’re lying by hiding that this is happening and not making it go both ways, so why not show users and just make it part of the UI?


You're reading other peoples unfinished thoughts before they've said them?!


Wouldn't that be closer to a real spoken conversation?

As long as you're told what the behaviour is, I don't see a problem. I think for short lived chats such as those support chats it would speed up things.


Yeah. The problem here is that they do it without letting you know.


Google Wave did this (and with nested threads)! I never tried it for serious conversations, but it was quite fun to write at the same time as 1-3 other people and see them amend their messages in real time.


I work outside of tech, and so many of my inter-work communications would be so much better on Google Wave. Of course, certain types of staff-wide communications would be solved better by anything other than email, whether that be Slack or a different platform, but I will always resent the death of Wave for what we missed out on as a society.


Agreed. I like the behavior of traditional unix talk, which worked like that.


That's extremely obnoxious. I wonder if I can make uBlock Origin block this.


Years ago, I had a boss who really wanted me to do popup ads and spam. I was the only programmer, so the buck stopped with me. Every time, I told him to shove it.

Eventually, he threatened to hire outside developers to do the work. Previous to my employment, he'd used foreign contractors who were quick to cash checks and slow to do the work, and rarely satisfied the order. I called him on it. "Go ahead, go back to your past contractors... but by the way, wasn't your dissatisfaction with them the whole reason you hired me?"

I think he managed to find an external marketing company to spam for him, but he never got popup ads under my watch. The funniest was the time he discovered popunder ads -- he thought I'd be cool with those, for some reason. Sorry, guy.


Semi-related, not exactly a dark pattern, but a job I bailed on early in my career for ethical reasons.

I was hired to build a rudimentary tool for detecting nudity in images, over large datasets, as fast as possible/reasonable, with a pretty generous margin of error. The agreed pay was extremely good for the performance the client wanted.

Not long after I started, after getting an advance payment, one of the clients called me and very tactfully broke the news to me that what they actually wanted was a tool that would detect women in bikinis, or showing lots of skin, and that they would be crawling social media and photo sharing sites. Their 'service' was a private premium forum that included a section where members could trade pictures of girls they knew, and they wanted to add a gallery of girls who post bikini pictures, with their real names and locations and links to their socials.

I repaid the advance that day, and it very much shaped my approach to consulting.


That sounds about right. I was interviewed for a fairly well known company on contract a number of years ago. I was told the product was top secret and it was going to be amazing etc etc.

So I rolled in on the first day after signing up to fuck knows what, and it was a ticket touting company. I listened to their pitch, which lasted until lunch, went and got myself a sandwich, sat on a bench and thought "fuck it, this is wrong", and just went home.

When I told the agent he went crazy at me because I’d burned his commission. Gave them the finger too. In some places it’s bastards all the way down.

Next time someone pulled that on me, I cut the interview off, and for a number of years I only took predefined work. If something is off-grid in your contract, no is the answer.


For anyone wondering (I had to look it up):

ticket touting = ticket scalping.


What's unethical about ticket scalping?


>> What's unethical about ticket scalping?

It artificially inflates ticket prices by inserting a completely unnecessary middle-man in the purchase process. This is done by people with no intent to actually use the tickets they bought, so it's very much not the same as "oh I can't go would you like to buy my ticket?"

This is similar to a dark pattern I've seen at shopping malls where they offer a valet service, but also rope off all the close parking spaces for valet. This creates an artificial scarcity of close parking spaces which helps to drive the valet business. If they never did this there would be little desire for the valet service.


> It artificially inflates ticket prices

On the contrary, it naturally inflates the price.

Tickets that can be scalped were priced at below what you might naively consider "market" prices. The purchasers gain some value from this, and usually the sellers do too — often in the form of hype, perennially useful for promotional purposes. Someone who scored a hard-to-get ticket for a good price is likely quite excited about it.

But the difference means there is a strong incentive to turn the difference into cash, and even with inefficient processes in the middle, that incentive is substantial.

You can of course spend all day saying it's "wrong", and that's a position you are welcome to take. There are interesting questions we could ask about who should rightly "own" abstractions like the hype, and why, but hype is not protected by normal property law; and if property rights don't exist or aren't enforced, then the necessary conditions for free-market efficiency do not exist.

But again, it's as natural as any other economic effect.


How is the price inflation natural? The supply-demand relationship is unnaturally meddled with when someone restricts supply by buying up all the tickets at once, especially with automated tools average consumers don't have or use. They then just resell the tickets - not at some supply-demand balanced value, but at a price of their choosing, often 200%+ of the original.

Supply is already fixed on things like tickets anyway, due to venue sizes, so this is just further restriction. It's not natural.


I'm not a particularly big fan of Kid Rock, for a number of reasons. But he and a few artists had started doing things to thwart scalpers.

Namely, he'd announce tours slowly. And basically the strategy was that as scalpers bought out shows, he'd add another show in the same city. And keep doing that until there was no demand, no resale market, so scalpers were forced to sell at face or near face value. "I can keep throwing dates at you, and you're paying for the seats, so it doesn't hurt me, but no-one will buy them from you".

Eventually the scalpers learned not to resell his tickets, or to do so only minimally.
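The strategy described above can be sketched as a toy model (all the numbers below are invented for illustration, not taken from the comment): keep announcing shows until the resale market's clearing price falls to or below face value, at which point scalpers have nothing to mark up.

```python
FACE = 50
# Deterministic toy demand curve: 5000 fans, willingness to pay
# spread evenly from $200 down to about $20, sorted high -> low.
wtp = [200 - 180 * k / 5000 for k in range(5000)]
per_show = 1200  # hypothetical venue capacity

def clearing_price(supply):
    # Price the marginal (supply-th) buyer would pay; 0 if supply
    # exceeds total demand.
    return wtp[supply - 1] if supply <= len(wtp) else 0.0

shows = 1
while clearing_price(shows * per_show) > FACE:
    shows += 1  # scalpers keep buying? announce another date

print(f"{shows} shows; resale market clears at "
      f"${clearing_price(shows * per_show):.2f}")
# -> 4 shows; resale market clears at $27.24
```

With these made-up numbers, one show resells for well over face value, but by the fourth show supply outruns the fans willing to pay even $50, so scalped tickets can only move at a loss.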


I mean, the enabling principle behind scalping is that demand exceeds supply and supply does not increase to match. More people want the tickets than there are tickets; scalping just means allocation is based on price instead of who you know, how much time you're willing to spend, or how lucky you get.

This doesn't just thwart scalping; it fixes the lack of supply that allows scalping to exist in the first place.

edit: I'm not sure I completely stand by this. There are various good reasons to sell tickets for cheaper than the maximum price you could and still sell out. Still, it's a harder problem to solve than it looks


That’s pretty funny actually.


> They then just resell the tickets - not at some supply-demand balanced value

It is the supply-demand balanced value. If it wasn't, they wouldn't be able to sell them.


Playing devil's advocate: In a market where value is signalled by scarcity, creating artificial scarcity creates artificial value.


>> Tickets that can be scalped were priced at below what you might naively consider "market" prices.

Maybe. The scalpers are taking a risk buying tickets they may not sell, so it might serve as a mechanism to find the market price for the tickets. OTOH it also creates artificial scarcity which artificially raises the price.

In the end, the scalper is inserting themselves into a transaction between two parties that didn't ask for them to do so and were mutually satisfied with the situation beforehand (nothing changed for the seller, and I think most buyers would appreciate the lower price).


> But again, it's as natural as any other economic effect.

Often I hear people supporting scalping as an example of a free market working, and I get that argument.

The problem is that the market that actually exists is anything but free, and is largely based on deceptive practices and even outright collusion, which I think is what a lot of people really object to.


Yeah, that makes no sense. Valet should use the farthest spaces possible.


> >> What's unethical about ticket scalping?

> It artificially inflates ticket prices by inserting a completely unnecessary middle-man in the purchase process. This is done by people with no intent to actually use the tickets they bought, so it's very much not the same as "oh I can't go would you like to buy my ticket?"

I think this is called "retail."


Retailers are generally providing a service: somewhere to go and look at and try out a product, with many products conveniently under one roof. Scalpers/touts are not; they are parasites.


Retailers also have strict liability for the products they sell. When something goes wrong, it is up to them to make customers whole and go after the manufacturer to recover those damages.


> Retailers also have strict liability for the products they sell. When something goes wrong, it is up to them to make customers whole and go after the manufacturer to recover those damages.

That's somewhat misleading about how chain-of-commerce strict liability works. The injured party can sue any/all parties in the chain of commerce directly; it's not that the retailer is exclusively responsible and then has to work up the chain.


I agree. I just find it amusing that similar behavior can be viewed so differently depending on context.

There is nothing inherently wrong with middlemen inserting themselves into a transaction. Our whole economy depends on it.


>> There is nothing inherently wrong with middlemen inserting themselves into a transaction.

Some middlemen serve a function for buyer and seller: specifically, they are motivated to put the two in touch with each other, and in some cases act as a sort of mediator for the transaction. The ones who insert themselves between parties that are already in touch and ready to deal offer no value.

Example: when the Wright brothers wanted to sell an airplane to the US Army, after much effort they got the Army to solicit bids on a flying machine that basically matched the capabilities they had (even that was a big deal, because nobody thought it could be done and the Army didn't want to fund "research"). Strangely, they got 3 bids. The Wrights' bid, IIRC, was $25,000. One of the other two dropped out because the reality was they had no product. The third intended to bid lower than the Wrights - enough that he could buy a plane from them and sell it at a markup to the Army. The Wrights told him they would not sell an airplane to him (fuck off, dude) and he dropped out. That other guy was what I'd call "trying to insert himself as a middleman" where one was not needed in any way, strictly to enrich himself while adding no value.

Back to scalpers - they do offer one particular thing that might be of value. Some can get their hands on premium seats, and by selling those at very high prices they allow rich people to pay extra for special privilege. What you think about that varies from person to person ;-) It's still something the venue could have done a better job of.


Not really, as scalpers insert themselves between a retail ticket vendor, and the consumer.


Fans who really want to see their favorite artist/sports teams get extorted, and the money goes to scalpers instead of the artists and venues which are actually providing value. Scalpers provide no value to society.


Planet Money has done an excellent episode on this - https://www.npr.org/sections/money/2013/06/25/195641030/epis.... While I broadly agree with this, I would say that scalpers can actually provide a service of exchanging money for time and convenience. For example, you may have a free or cheap concert that sells out extremely quickly, but with scalpers, those with lots of money can always get a ticket. Much of the economy is built on the similar concept of arbitrage, where someone buys something cheaply and sells it for a markup to those who lack the ability or knowledge to get it from the seller's source.

That said, scalpers in particular seem to cause a whole lot more harm than good in general. As the above podcast addresses, it's a very difficult problem to solve systemically if you are intentionally undervaluing your goods.


Sounds like an oxymoron. Like hoarding toilet paper, gasoline, water during times of emergency, it limits the true supply.

You could argue the “true price” is what the scalpers charge (who will stop drinking water when the prices skyrocket?).

But in reality they squeeze the supply to create artificial scarcity. Any economist knows this is market manipulation.


> For example, you may have a free or cheap concert that sells out extremely quickly, but with scalpers, those with lots of money can always get a ticket.

But how do you square that with scalpers causing the tickets to sell out so quickly? I mean, they're the ones creating their own market. They're not really providing a service if they're the ones creating the annoying need for the service in the first place.


Then just don't pay the inflated price and don't go to the event?


I'm sort of trying to propose a situation where 1) the price doesn't get inflated and 2) I get to go to the event.

Obviously, yes, if I am mad about scalpers' prices, I have the option to not pay them. I would like to go the event, though.


Come on, man.

Remember those guys last year who would drive around buying up all the masks, selling them online for 10x? You don't know why everybody hated them? This is why:

The original seller has a reputation to protect and doesn't want to be seen as taking advantage. Maybe it's a musician who would rather sell to kids who are willing to wait in line than to whoever has the most money. Maybe a pharmacy selling masks in 2020. The arbitrage opportunity is for somebody with no reputation or scruples, who chooses to see themselves as just an Angel of the Free Market. To everyone else, he's a jerk.


I like to call this line of thinking "capitalist determinism". Basically, if there is money to be made, it must be made. Letting any other concern interfere with this extracting of value is silly, or unnatural - even immoral.

People talk about it in these detached terms, call it the laws of economics, and say that it is something that happens naturally. But of course these so-called laws of economics describe interactions between people, and you can always choose what kind of influence you want to have on the world.


It's a weird one.

Events should auction off a percentage of tickets, and reserve a percentage "for fans" -- and all fixed to a name & photo id.

One has to suspect that many events are in cahoots with scalpers, and are just pricing their tickets below the market for PR reasons.


> all fixed to a name & photo id

You're not the first to think of this I assure you.

Most modern scalper platforms collect your credit card and personal info in advance and use it to buy your ticket with their bot. So even if the venue is matching purchase info to your ID and credit card, it all lines up.

Scalpers aren't hawking tickets on show night 200 feet from the venue on the sidewalk anymore.

You pre-buy through them to guarantee that you'll get the ticket you want since everything sells out super fast (because of scalpers!) and you pay the markup for that service.


People tend not to like scalping of any product; it raises the price while adding little to no value. The hate towards scalping is quite visible in the GPU market due to its limited supply.


Is there anything not unethical about ticket scalping? Tickets being non-essential entertainment is the only reason it's legal.


You know what? I know that scraping is currently legal, but then we end up with things like that company, or Clearview AI. We need some laws, ASAP, prohibiting data from being scraped for certain purposes.

Also, I have some friends at insurance companies, and they say that insurers right now are actively trying to learn how to scrape people's social media - secretly - so they can catch "dangerous" behavior or violations of their rules. My dad's client told him about a guy who was secretly running a happy hour in his insured bar. A data broker that scraped Facebook found posts from other people saying "great happy hour at this bar", reported it to the insurance company, and they sent the bar the bill. That's freaky and should be illegal as a violation of privacy.


> You know what? I know that scraping is currently legal, but then we end up with things like that company, or Clearview AI. We need some laws, ASAP, prohibiting data from being scraped for certain purposes.

I don't think it's the scraping specifically that makes this disgusting. It would be equally bad if individual people uploaded compromising photos of their exes.

The issue here is that people need control — not ownership — over their image and personal data/information. (The difference I intend to draw between control and ownership is that the legal notion of control would be written in such a way that the fine print is irrelevant. Most online systems have some fine print somewhere giving the site owner certain rights over your content. Such fine print about a person's image needs to carry so much risk that if a business owner suggests including something like that to a lawyer, the lawyer starts quivering in their boots: "If I include such a clause, I will never get paid, because within half a nanosecond of it being visible you will be sued into kingdom come and your great-grandchildren will still be paying off your debts.")


I think I get the tech side of this and how creepy it is to be caught that way. But what is nefarious, or needs to be secret, about a "happy hour"? Isn't that just a promotional event for bars/restaurants where food/drinks are discounted for some period?

Edit: Maybe I get it. It seems certain states have made it illegal to run "happy hours", presumably because people drink too much and behave badly. https://spoonuniversity.com/place/why-did-these-8-states-mak...


Even if it’s not illegal in that State, it’s also possible that the insurer deemed it to be a higher risk and would have just charged a higher premium.

If the bar owner didn’t want to pay that higher premium but did want to run a happy hour at his bar, and told the insurance company that he didn’t have happy hours at his bar, then well, he lied to the insurance company. They could have found out another way, by sending a mook down the way, but this saved labor and expense claims, and maybe even on their own insurance bills if something happened to the mook in the bar during the happy hour.

By the way, just pointing out another hypothetical here; we don’t have sufficient information to be making judgement calls on that specific situation.


In some places, like Massachusetts, happy hours (happys hour?) are illegal, I think on the theory that they promote binge drinking.


God forbid people are happy for an hour, we've got to raise their insurance premiums. You know in Indiana, it's actually not even legal to have any happy hour promotions wherein the drinks are cheaper but only during a certain window. If you want to make people happy with drink specials, you have to offer them the same drink special all day long. (And so that's how we do...)

Happy hour here is just when you can get a $2 cheeseburger, or $0.50 wings on special. I had never even considered something like a happy hour being reflected on your insurance premium.


I don't think it's freaky at all. I do find it freaky that there are people who will side with the fraud over the code as soon as code was anywhere in the loop that caught the fraud.


Yes - because where does it end? China, which has surveillance on anything and everything? I don't want to live in a society where everything is monitored by people I don't know and haven't heard of secretly, and neither I think do you.

Remember, this wasn't the insurance company that was spying. This was a data broker whom you've never heard of, who scrapes social media pages, and gets paid by insurance companies for reports. A bounty hunter using computers and scraping. That's dystopian.


Customers using their speech to praise your business, in the process revealing you committed fraud, and the insurance company hearing the customers, is not "China".

If all this speech was in a central repository by government mandate, I agree it's China


It's more that I don't think it's the damn business of any private, nameless entity to scrape my social media posts and collect them in a secret database. I signed up for _Facebook_ and their privacy policy, not for anonymous people who can affect me in the real world without due process to save all of my posts and harvest information about me from them. And then link it to all of my profiles from LinkedIn and elsewhere across the internet into one central database they can sell people and my insurance company, because make no mistake that's what they do.


> Yes - because where does it end? China, which has surveillance on anything and everything?

The anti-Chinese rhetoric on HN is starting to grate.

You don't live in the PRC. How do you know what China does with surveillance, if anything?


> You don't live in the PRC. How do you know what China does with surveillance, if anything?

The same way we know what happened with the SS or the Stasi - lots of detailed evidence, first-hand accounts, reports from other intelligence agencies, etc.


The thing is, the people you mention have a bone to pick. It's like taking Edward Snowden's view of America over, say, Cardi B's.


I am concerned that the type of defense of privacy exemplified by your argument (that reporting the happy hour to the insurance company was a privacy violation and should be illegal) will backfire sooner or later.

Assuming that the guy was running a happy hour in a place where they are banned, your argument reinforces the view that privacy is only needed for those who break the law.

Explaining why privacy is important is hard enough as it is. Please don't make it harder.


> running a happy hour secretly in his insured bar

Can you explain what this means and why it is a problem? --Confused


Not OP, but a happy hour is usually a period of time when bars serve discounted drinks and such.

https://en.m.wikipedia.org/wiki/Happy_hour

It's illegal in several States. It was only made legal here a couple of years ago.

>The reason for each ban varies, but include: to prevent drunk driving, avoid the nuisance to neighbors from loud crowds and public drunkenness, and to discourage unhealthy consumption of a large amount of alcohol in a short time.


I did not realize happy hours were so contentious. Call it the California bubble.


Yeah, it was a big deal when they made it legal here. Bars, restaurants everywhere started doing them like immediately. Now it's rare to find a place that doesn't offer some kind of happy hour special.

Gotta say, I haven't really noticed any of the problems occurring that the wiki article mentions as the reasoning behind most bans. Haven't seen any news reports about those things since the laws changed, either.


That's a little bit symptomatic of your biases. Laws against happy hours are far more a bible belt thing in reality than a nanny state thing (even if some of the reasons touted publicly are nanny state-ish).


Happy hour is when drinks go on a huge discount for a set time period. You get a lot more seriously drunken people since it encourages binge drinking. I would assume that just carries a higher insurance premium to have one.


[deleted, wrong post]



Thank you for having the fortitude to put your money where your morals are. It's one thing to turn something down, it's another entirely to give up money already earned and the prospect of more to come.

So, thank you for truly leading by example.


Scraping for bikini pics?

Reminds me of this lawsuit against Facebook:

https://www.wired.com/story/facebook-six4three-bikini-app-la...


Props to you.


I am trying to find the unethical part of this, care to enlighten us?


Grabbing images of women from social media, and aggregating them without their consent, is not necessarily illegal, but it's certainly exploitative and creepy.


This is just restating the question. "Exploitative and creepy" doesn't have a whole lot of semantic content beyond "wrong".

I took a whack at articulating _why_ it's wrong while not quite fitting a clear definition of malfeasance here: https://news.ycombinator.com/item?id=27632365


The pictures are one thing and bad enough. Including location and links to profiles could make you a party to stalking or worse, and that is quite unethical.


I disagree with this comment, but it's a shame (albeit an unsurprising one) that it's flagged into oblivion instead of responded to. I think the question illuminates something interesting about the incompatibilities between our pre-social-media notions of privacy and our current public posting behavior.

I don't have my beliefs on the topic clearly articulated, but I'll give it a crack. There's the argument that public information is public, and that there's no issue in aggregating it or otherwise making it more accessible, as long as the access is through legitimate means. I'm sympathetic to this and understand why people believe it, but I think it contradicts other consensus moral intuitions about privacy rights. A salient example that other HNers may be familiar with is the doxxing of Scott Alexander; any intellectually honest person familiar with the internet can tell the difference between "you can find out who he is if you do some digging" and "real name published by the NYT", despite the pathetic attempts at dismissing the possibility that doxxing him was bad (amusingly, including by people who I am 100% sure would find the bikini example to be a horrible violation). Hell, I was a reader of Scott's for years before I first came across his real name. The entire social Internet is built on security through obscurity, because opsec is hard and many people aren't constantly vigilant.

There's even precedent for these intuitions outside of the social media context. It's uncontroversially okay for someone's face to show up in your photo taken in public; once you've taken it, nobody cares if you study the guy in the background. However, aggregate and operationalize this, and it changes not just in degree, but in character: It's practically a trope in thrillers for universal CC cameras + alphabet-agency elbow grease to stitch together comprehensive tracking of an individual, and the public is rightfully a little creeped out by the thought.

The main difference here is that technology, as always, is democratizing the ability to do this, pushing the threat model from the unrealistic "NSA spends huge resources to track you" to the prosaic "facial recognition can just track and store everyone's movements at low cost" (or "some under-the-radar shop is aggregating your bikini shots") and a million other mundane violations of our moral intuitions. To my mind, we're in the uncomfortable period before a new norm equilibrium is reached that matches the technological context. This has already happened locally: I'm sure this group knows people who have good opsec since the early 2010s, and "treat everything you post as if it's public" is at this point an age-worn piece of wisdom.


> I'm sympathetic to this and understand why people believe it, but I think it contradicts other consensus moral intuitions about privacy rights.

I mean, I guess that's the reason why it was downvoted, don't you think?


"This is reasonable and illuminates something interesting, but I don't agree with it" is pretty widely considered a poor excuse for downvotes without comment, let alone flagging. You're obviously free to downvote whatever you want, but behaving like that is explicitly making this a meaner, dumber, less interesting place.


I should have quoted only the latter part of your sentence. I disagree that it's an illuminating comment or that the reasoning behind it is strong. And in particular, "I disagree with it" is not in the same category as "most reasonable people consider this to be immoral" (you can question whether we have the same definition for "reasonable", but at some point you'll have to recognise that we all make some decision at some point as to what we consider reasonable or not).

That said, I didn't actually downvote it, I just gave some argument for why I think the downvote was justified.


> in particular, "I disagree with it" is not in the same category as "most reasonable people consider this to be immoral"

IMO, this distinction is usually illusory, and only taken seriously in the kinds of conversational spaces that aren't worth being part of. "Most reasonable people think it's immoral" can be applied to any number of horrific things over the course of human history. If you want to hide[1] a potentially sincere question simply because it doesn't comport with groupthink, there are a million and one fora full of dumb, narrow-minded people you can do that on. HN isn't all the way there yet, and I think it's worth pushing back against the tide.

This doesn't suggest that it's impossible to post something so alien that there's likely little of value to discuss, but this is demonstrably untrue of the parent comment, as evidenced by my response to it and the half dozen people who found it interesting enough to upvote it.

[1] Again, we're talking about flagging, not just downvoting, though it applies weakly to the latter too.


I agree that the flagging goes a bit too far. The downvote is entirely justified though. Downvoting for disagreement is explicitly allowed as per the site guidelines. Now myself I try not to overuse that, but if I really believe that I can't follow or agree with the reasoning behind something at all and I find it somehow disturbing, yeah, then there's nothing wrong with downvoting.

> IMO, this distinction is usually illusory, and only taken seriously in the kinds of conversational spaces that aren't worth being part of. "Most reasonable people think it's immoral" can be applied to any number of horrific things over the course of human history.

Your solution to the fallibility of human judgment, especially when it comes to ethics, is to assume that there can be no moral judgement anymore, because one might be wrong. I don't think this is productive. I'm quite sure there are a number of things you would consider deeply immoral that you would be shocked to read here. People are allowed to have a sense of ethics and to use that to guide downvotes. If you disagree, just upvote instead, or discuss why you disagree. But you yourself admitted that most people would find the behaviour in question immoral.

> HN isn't all the way there yet, and I think it's worth pushing back against the tide.

Your mistake is to assume that HN is somehow above basic human nature. But HN is also full of explicit and implicit biases and those can often hide behind a veneer of supposed rationality.

> This doesn't suggest that it's impossible to post something so alien that there's likely little of value to discuss

I found your contribution to the debate to be actually sort of interesting, but more as an answer to a question such as "how can we explain why we find that sort of behaviour to be immoral" and not to the OP's implicit "I fail to see what's immoral here".

Also, it was just in a sense a low-effort comment. I'm sure that poster can perfectly well understand why someone would find the behaviour in question immoral given that that person presumably has spent time around other people, including women who might object to this kind of objectification. So if they still disagree that it is immoral, they could at least try to argue why.

(Also, the sole reason why I'm engaging you, as opposed to OP, here is because I find these sorts of meta-ethics / meta-rationality discussions to be quite interesting and important in a world where "reasonable people" seem to be less and less able to agree on how to ascertain both what is true and what is moral. This is, I think, a discussion worth having, I just happen to disagree with your conclusions.)


> Downvoting for disagreement is explicitly allowed as per the site guidelines.

Right, I mentioned that the case against downvoting is weaker, since at this point it's basically describing my opinion about what makes a forum a worse place to hang out. No real disagreement here.

> Your solution to the fallibility of human judgment, especially when it comes to ethics, is to assume that there can be no moral judgement anymore, because one might be wrong.

I don't think this is what I expressed; the intent of my second paragraph is to explicitly clarify that I'm not defining away the ability to signal (via downvote/flag) that certain content isn't welcome in a forum. My point was that moral judgment without care and thoughtfulness is extremely unproductive for a forum of this sort. (I actually hold the stronger opinion that it's evil, but this is so much stronger a claim that it would derail this conversation significantly to go into it).

> Your mistake is to assume that HN is somehow above basic human nature.

I don't follow how this applies to what I've said. Different communities are suited to different types of discussion, and HN is better-suited to thoughtful consideration of non-consensus views than others. It's not perfect at this goal; I'm certainly pretty hard on HN in my meta-comments, but that's largely because I was lucky enough to have found a couple other fora that are even more highly-selected for intelligence, intellectual honesty, compassion, and open-mindedness. The default state of an Internet (or non-Internet) forum is to allow people to perform "thinking" while basking in the warm fuzzies of guaranteed social approval and never having to challenge their beliefs. If the behavior you're defending isn't pushed back against where possible, every forum in the world will become the same formless sludge (and inability to empathize with those outside of your bubble). As I mentioned, "suppress this without discussion because I think it's immoral" has a horrific track record; were this 50 years ago, I'd be saying "seriously, why _is_ being gay so worthy of persecution" and you'd be saying "it's immoral to even think that, no need to engage, just downvote".

I know I was pretty hard on the concept of mutual-approval societies, but I actually think there is value to this approach. The term "safe space" is often used derogatorily, but it has significant value. If there's a forum dedicated to discussing the minutiae of Christian theology, I think it's completely reasonable to keep it a "safe space" from those who want to argue the basics of, say, God's existence. There are many types of productive discussion that require holding constant certain assumptions (rendering questioning of those assumptions unproductive).

HN is fairly high-percentile when it comes to acceptance of non-consensus ideas, expressed in good faith. That is (historically) the culture of this forum, and its value. As I mention in my previous comment, there are non-consensus ideas which one can judge do not come anywhere near interesting topics, but as evidenced by my response to him/her, this comment was not one of them. Note also that this doesn't even preclude downvoting, though it's not ideal; my comment specifically mentions "downvoting without replying".

Particularly without a reply, it seems much more likely that the downvotes come from the knee-jerk reflex to pattern-match that afflicts the especially-stupid ("this guy must be a misogynist! I must come to the rescue! I'm such a good person"). People like this are _everywhere_, and by definition are extremely unlikely to learn or be learned from. I get that there are hordes of these people even on HN, but their comments and their anti-thought impulses are precisely what I would like to push back against to preserve the distinct value this place still retains.

> more as an answer to a question such as "how can we explain why we find that sort of behaviour to be immoral" and not to the OP's implicit "I fail to see what's immoral here".

What's the difference? If there is a difference, why does the comment fall in the latter bucket instead of the former, given that his comment is _literally_ a question?

> Also, it was just in a sense a low-effort comment

If there is a difference in intent, why should that matter? I'd happily lose a million reflexive downvoters from HN to retain a single poster of "low-effort" questions that probe a Sacred Tenet of Groupthink, even if I disagree with the assumed implicit conclusion of the prober. Downvoting-without-response is both lower-effort _and_ more harmful than asking these questions: If the comment is so obviously wrong, surely a low-effort response should suffice, right? If you've spent much time on HN, you'd know that easily-rebutted comments are rebutted thoroughly and repeatedly.

I think perhaps where our views here diverge is that I couldn't care less about "punishing" the commenter, and am certainly not willing to damage the quality of discussion here to do so. It's not even a good idea from the pragmatic perspective of stigmatizing these views: when I see a downvoted and unanswered question, I don't think "he's definitely wrong", I think "1) I can't think of a rebuttal and nobody else seems to have either, so they just suppress it and 2) boy, HNers have gotten even fucking stupider".

Sorry for the length, and I likewise appreciate the conversation!


You might actually want to get a psych eval if you can't figure out how that's unethical.

This is not sarcasm, or me trying to be mean.


> or me trying to be mean

If you have to say "I'm not trying to be mean" before anyone accuses you of it, then you're clearly aware that what you're saying sounds mean, but you don't care to try and avoid it, but you want to pretend you are still nice. Which, ironically, is something that would cause some intensive pencil scritching during a psych eval.


I'm heartened to come here and see people refusing to do things for ethical reasons. You need better protections in your work though. I work in the medical field, and refusing to do something is a common occurrence and certainly not a sackable offence - but I guess the impact of what we do is immediately visible on someone present. A lot of decisions people make in other lines of work will impact a nameless person somewhere out of sight, so I guess it's easier to be unethical.


I have been waiting for doctors and other medical staff, especially in ERs, to stand up about the insane over-charging that takes place. Everyone seems to have a story; in mine, before I agreed to anything I asked how much it would be. I was given an estimate of around $1,200 for a scan and decided to proceed because I knew I had $1,800 in my HSA. It turned out they charged me almost $4,000 and I ended up in collections. I would never, ever have agreed to the scan if they had been honest.


Medical billing is simply insane.

A month or so after an emergency visit to the hospital, I started getting random bills in the mail ranging in $50-$100 amounts. I went back to the hospital and asked them what the total was so I could just pay them the full amount right there, which was only about $1000. They simply wouldn't tell me. They told me to wait for all the bills. As I paid them off, there was no indication of whether I had completed all the payments.

Guess I missed one, as I also ended up in collections a year later for $50. My credit score was harmed instead of theirs.


Last time I got a mortgage it turned out I had a couple of dings on my credit record because of unpaid medical bills (totaling a few hundred dollars) that I had missed and that had gone to collections. The banker processing my mortgage application said "It's not a problem. Everybody in America has unpaid medical bills. We ignore those."

I got the mortgage.


Get them to put it in writing.

Also, you can usually mail a check with a short letter explaining that you are paying the estimated cost and that cashing the check constitutes honoring that estimate.


The situation in the US is insane.


Purchasing member data from data brokers. I can’t stand it. There’s almost never a legitimate business reason to do it. It’s invasive, aggressive, useless, expensive…anyway, I make new enemies every week when I push back (less and less respectfully every time), but it’s something I feel strongly about, and if my bosses want blind loyalty they can fire me and get a dog.


But what about making it easier to auto-fill member's information into a contract? (sarcasm)


This is why laws that regulate data collection are so necessary. The ethical considerations that apply to software are so nuanced that they are very easy to get lost or ignored in a typical SDLC.

It will be much easier for a developer (or an outsider) to throw up a red flag that is taken seriously if it’s a legal concern.


Totally agree. I think a lot of the "evil" that happens in business is done by well-intended people making incremental decisions while trying to compete. Some of them probably make decisions they don't like, but they feel they have to, because if they don't, their competitors will eat their lunch.

In my experience, honest business folks don't mind honest regulations. It keeps the playing field level.


*well-intended people who make incremental decisions while trying to compete

May I take issue with this statement?

Why do you say they are well-intended when you subsequently say that the intention is to make money? (compete to make money)


I think you can view money not as the end goal, but as the means to accomplish your goal. If I have a goal to help people become educated, I use business as a way to organize resources (including people) to accomplish the goal. Money is necessary to make that happen. Even for open-source projects, money is necessary.

I respectfully disagree with Milton Friedman's oft-quoted line, that the purpose of a business is to maximize value for its shareholders, at least if we set value = money.


Because those concepts are not antithetical? We all feed our families somehow.


There are laws that regulate data collection, it's just that companies who break them have every expectation that they'll never get caught at it.

For example, here's an HHS page that exists to brag about the effectiveness of HIPAA enforcement[1], and even by their statistics, about 0.3% of complaints result in reviews. Now imagine what percentage of violations never result in complaints.

GDPR is arguably more successful, insofar as they levied around $150M in fines last year, but most of that was from a couple big cases (Google chief among them) involving companies who are so big that even getting a $50M fine isn't going to change their underlying practices. It's the cost of doing business if hoarding private data is your business model.

Pretty much every website I've ever looked at that had a GDPR compliance notice was implementing it in violation of the actual regulation: they set cookies first and then notify you about them. That's not how it works, dude. If you consider that GDPR applies to any business that transacts with EU citizens, the number of non-compliant websites is so huge that $150M is nothing. The only effect of GDPR in practice has been to fill the world with meaningless banners, not to protect anybody's data.

And in a nutshell, that's how data collection and privacy regulation has worked out in the real world: a lot of meaningless compliance theater, while business goes on as usual.

[1]https://www.hhs.gov/hipaa/for-professionals/compliance-enfor...


GDPR is successful because they hand out fines but every website is in violation of it? I would think that a successful law results in compliance.

As someone who has worked with HIPAA data, I have personally seen the data treated with a great deal of thought around compliance. I think the numbers you cite are a result of that, not in spite of it. As your source points out, many of those complaints aren't eligible for enforcement. This isn't because HIPAA isn't enforced, it's because many of the people who are complaining don't know what HIPAA does.

But the US needs their own general privacy law too, GDPR is not enough, as it isn't applicable to a lot of software built in the US.


I was fired once for consistently resisting sleazy practices as well as defending my team from abuse from leadership.

The big realization I had, which I'm surprised I didn't have earlier: if you don't want to do gross things, don't work for B2C startups.

The only way to get an edge when trying to grow a B2C startup is to continually squeeze whatever revenue you can from your users. This is inevitable even if you start with a solid product as your base.

We would track metrics on how many users were upset with how royally we screwed them with fine print, and as long as that number was small enough that it didn't impact monthly revenue, "leadership" didn't care. We had no product vision at all; it was just iterating random ideas to get more money, and most of the ideas that stuck involved dark patterns that meant the user didn't realize they were taking a bad deal.

I was honestly smiling when I got fired, it was as though I had passed some secret test that a company like that would not want me.

My experience with B2B has been much better, especially if you have enterprise customers. In the B2B space customers can easily be paying 100k-1M a year, and, if you treat them right, will be your customer for many, many years. It typically makes economic sense to treat them with respect, and to make products that are beneficial to them. Dark patterns and sleazy tricks carry a serious penalty because losing a contract can be very, very painful.


As long as this is done in a professional way, with a sincere effort from engineers to understand the business requirements, I think refusing to do things that compromise user privacy or safety is a good and right thing to do.

Dark patterns are more borderline in many cases, and I think the best approach is building a culture of respect for users that doesn't result in dark patterns. In this case, though, it looks to me like this was more than a dark pattern: it was an actual privacy violation.


I did have a similar example to this a year ago. We were partnering with an affiliate marketing agency, and they asked us to include their tracker on our website. When probed about how the tracker worked, they revealed that it worked by scraping everything that looked like an email address and sending it to their servers in order to attribute user signups, and they required it on every page.

I said that was a gross violation of our users' privacy and that we would only implement our own significantly restricted server-side tracking that didn't reveal any user info. Their response implied they really weren't challenged on their practices often at all.


Was this in US or EU? I wouldn't be surprised if it is EU but theoretically this should be so GDPR non-compliant that Merkel personally throws you out.


EU. Absolutely non-compliant and I don't think they could have cared less.

We had to get special sign-off from their senior leadership to implement the server-side tracking, because it meant they could lose out on revenue if we didn't get it right.

Ironically, the server-side version bypassed adblockers/tracking protections (all we did was ping back after checkout with the total order value, no user data), so it was likely that they would make more revenue, given that ~50% of users have some sort of blocking in their browsers.


CCPA probably makes that illegal, which gives devs with a conscience a nice out: “that is wildly illegal if we do it to anyone in CA, we will have to develop it my way for them anyways, and it’s cheaper to support only one system”


The GDPR isn't enforced at all for anyone but the biggest players.


Dark patterns aren't a line in the sand, that's for sure. At what point does a newsletter pop-up become a dark pattern? There are some concrete examples, but it can also be pretty nuanced. I think newsletter pop-ups are always egregious, but a business owner can have a very hard time grasping why. They can think they're building stuff people like, and they can often act like they're simply herding sheep. Users are ephemeral zombies to some, and from that lens, it's totally fine to capture them with sneaky tactics.


I wouldn't call over aggressive marketing a dark pattern. As with the newsletter pop-ups, they aren't shady or underhanded or misleading or coercive, they're just in your face. I often find them obnoxious and repulsive and they drive me away from a site, but they're in the light, out in the open, not pretending to be something they aren't.

I think there is a fairly bright line in the sand for defining a dark pattern, and the line is between hiding and being open, between being misleading and straightforward, between trying to force someone to do something and seducing them with plain intentions. The latter behavior can still really suck, without being dark.


I disagree, simply. There's so many novel UX patterns in websites and apps, it's impossible that you've got a clear idea of what is or isn't a dark pattern in every circumstance.

Uber Eats used to have a "$0" option for tipping, but removed it, it's no longer clear if you're going to be giving a tip or not, but if you don't click anything you don't actually give a tip. Is that a dark pattern? I think so, because the UX makes it seem like you have to select something, and that no isn't an option. But it's not hard to argue in the negative, since doing nothing will not take money from you.


You seem to give a pretty good reason why it's a dark pattern, and a pretty flimsy one for why it's not. I think it's not as ambiguous as you say.


I didn't realize this was going to be such a controversial take. It's possible you think there's a clear definition because you haven't seen examples of what some industries think is ethical. I'm trying to pick relatable examples but clearly not doing a great job. Again, I tend not to think these things are ethical, but I'm not the ethics police, so the fact that one person thinks it's ethical and one person does not means it must be blurry.

I feel pretty strongly that tracking cookies are evil, but the entire ad-tech industry did not think that was a dark-pattern.

I feel pretty strongly that GDPR "accept all or leave site" dialogs are a dark pattern, but the EU is happy with that implementation.

I would also love to point out that "dark-pattern" is still a blossoming terminology, there's no definitive body of law or ethics that describes what is and isn't a dark pattern.


The "accept all or leave" has recently been ruled illegal because the law requires "free and informed consent". Refusing service without consent isn't a free choice to the user.

Similar for Google that went the "you can disable cookies in your browser" way, I doubt that will survive a lawsuit in the coming years.


Newsletter popup that asks for email address and if you click Cancel or X doesn't save it: annoying but not a dark pattern

Popup that asks for email address that doesn't have an 'x' (or the 'x' doesn't appear for a while) and the email address field secretly records what the user types and even though they clicked cancel their information was secretly shared: dark pattern


There are some clear cut examples no doubt, but you've hardly covered the thousands of possible UX interactions. It's not always so well defined, partly because obnoxious marketing tactics can already toe the line of ethics.


> At what point does a newsletter pop-up become a dark pattern?

At what point does it not? Pop-ups exist to steal attention. It's abuse. Why do these marketers believe they are entitled to anyone's attention? They are not.

> a business owner can have a very hard time grasping why

Of course. Businesses generally couldn't care less about how much they're abusing their workers or their consumers. The only thing they care about is their profit. Dark patterns are just value extraction mechanisms, it's their abusiveness that makes them dark.

> they can often act like they're simply herding sheep. Users are ephemeral zombies to some, and from that lens, it's totally fine to capture them with sneaky tactics

Yes. It's an inherently abusive and sociopathic view. They don't consider us human beings.


I am in agreement, by the way. I think perhaps my comment came across as trying to soften the seriousness of dark patterns, when really I was just trying to say that there are lots of different perspectives on different types of coercion. It's not that it's unclear when something is coercive; it's that it's unclear how much coercion and trickery is unethical, as it depends on who you ask.


> it's that it's unclear how much coercion and trickery is unethical

Any amount of coercion and trickery is unethical. This is absolutely clear to me.


I assume everything I enter into a form is recorded, even if I don't hit submit. I'm surprised they were listening to onBlur instead of onKeyPress. Partial information could be exploitable too.
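For what it's worth, the keystroke-level variant mentioned here would capture every partial value as well. A hypothetical sketch (all names are invented; no site's actual code is shown here), using the `input` event as the per-keystroke hook:

```javascript
// Hypothetical sketch of per-keystroke capture via the 'input' event,
// as opposed to the on-blur variant described in the article.
// 'beacons' and 'recordPartial' are invented names.

const beacons = []; // stands in for one tracking request per event

function recordPartial(value) {
  beacons.push(value);
}

// Browser wiring would look roughly like:
//   input.addEventListener('input', e => recordPartial(e.target.value));

// Simulate typing "bob@" one character at a time; every intermediate
// state reaches the server, so even abandoned input leaks.
for (const partial of ['b', 'bo', 'bob', 'bob@']) {
  recordPartial(partial);
}
console.log(beacons); // → [ 'b', 'bo', 'bob', 'bob@' ]
```

Note that the server sees four partial values here even though nothing resembling a complete address was ever entered, which is why partial capture is strictly more invasive than the on-blur version.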


This hasn't been implemented by any teams I've been on. If companies were doing this, I believe (or at least hope) we would hear about it from implementors.


> I believe (or at least hope) we would hear about it from implementors.

I was acquainted with some black hats in junior high and high school. Guessing how they are as adults, I'm pretty sure you can always find someone who will do it for the lulz, if not for the money.


There are form builders out there where you have the option to record data as it is entered. It is simply a checkbox in the form builder.


Good on you! At one point I thought I might want to be a product person at Google and interviewed for it. Their example product was a smartwatch for kids. I told them I would object to that product on the basis that kids need fewer, not more, screens. I was naive enough to think that this was a trick question but, needless to say, they passed.

I'm still proud of my answer.


You could've said that it doesn't have a screen. You could've said that it respects children's attention by not offering distractions on demand. It seems to me you can design a smartwatch for children with these kinds of characteristics rather than opt out of the exercise altogether.


Those discussions did happen, and I agreed that there was a niche possibility for a product targeted at parents of young children. Maybe. But they didn't want to talk about a niche product (it's Google, after all) and after thinking about it I didn't feel comfortable with the idea at any level - it trains kids and parents to be dependent on tech, trains kids it's okay to be constantly tracked, and while there are undeniable benefits, there are undeniable costs, too. The older the kid gets the more problematic it all becomes.


As a parent I've actually been thinking about this for a while. These watches are now very common btw, roughly half of the 7-9yo in the neighborhood have em. IMO ~9 should be the minimum age and the tracking should only be activated on press of an emergency button. I haven't yet found a product like that, they tend to give zero control to the kids. Aren't people concerned with their kids learning responsibility?


I’d love to be able to give my kid something that’s primarily an analog watch but with an “I’m lost/scared” button.


I feel lost/scared just thinking about this world we have built, with all its perverse incentives.


A traditional 'dumb' phone seems to loosely fit this definition.


Just get a flip phone with a prepaid call/text plan.


Can I ask where you're located? I'm in the heart of Silicon Valley and haven't seen this at all.


Switzerland. It’s an expat heavy neighborhood, not sure whether that has an influence.


I'm confused by "the older the kid the more problematic it all becomes"... Older kids eventually turn into adults. Do you mean that tech in general is bad for people?


Yet older kids are not adults; they are teens with lots of free time yet responsibilities that affect their future. If they’ve been groomed by screens that play endless video and ads to have short attention spans and little curiosity, it seems like a problem in the making.


I don't disagree with that. I am however wondering what is meant by "The older the kid gets the more problematic it all becomes". The grooming of short attention spans seems more of a problem for younger, more malleable brains than it is for kids who are probably a bit more reasonable as they get older.


But how will you show the kids ads then?


Haptic feedback when you get close to the right Cheerios box or Lego set. Some Pavlovian type stuff


I quite literally shivered in abject horror at the thought that some advertising schmuck will read that and jump on it. We need to ban advertising.


You'd love The Santaroga Barrier by Herbert.

I typed up this excerpt and have used it countless times:

>To those men in their oddly similar dark suits, their cold eyes weighing and dismissing everything, the people of this valley were a foe to be defeated. As he thought of it, Dasein realized all customers were "The Enemy" to these men. Davidson and his kind were pitted against each other, yes, competitive, but among themselves they betrayed that they were pitted more against the masses who existed beyond that inner ring of knowledgeable financial operation.

>The alignment was apparent in everything they did, in their words as well as their actions. They spoke of "package grab level" and "container flash time" -- of "puff limit" and "acceptance threshold." It was an "in" language of militarylike maneuvering and combat. They knew which height on a shelf was most apt to make a customer grab an item. They knew the "flash time" -- the shelf width needed for certain containers. They knew how much empty air could be "puffed" into a package to make it appear a greater bargain. They knew how much price and package manipulation the customer would accept without jarring him into a "rejection pattern."

>And we're their spies, Dasein thought. The psychiatrists and psychologists -- all the "social scientists" -- we're the espionage arm.

The Santaroga Barrier, Frank Herbert, 1968


ok, this is the guy we want to hire.


And they would probably have passed regardless. No distractions and screens probably means no ads, and that's most likely a no-no for the former "don't be evil" company.


I suspect that, with their bet on cloud in particular, they're hiring in many places where ads don't matter. Even in the mobile OS world, where ads are eventually how they make some of their money, they have a clear adoption challenge first...


You forgot the other thing they love: humanity-scale dragnet surveillance (to feed "the algorithms"). If it can spy on lots and lots and lots of people, they might still be interested.


I was once contacted by a recruiter from the company owning most of the big newspapers in my country. They were putting together a team to be the first in the region to make news tailored to the reader. I turned them down, and actually told them it was on ethical grounds. With how bubbly and divided the news landscape has become in recent years, I feel that was a good call.


I don't know, I'm considering one for my kids. The oldest is seven, and it would be great if they could go to the nearby park by themselves. Kids' watches that function as cell phones would work very well for this.


There are smart watches for kids, and as a parent I thought they were great (my kid is old enough for a phone now). They had a very limited set of features but my kid could call me or text me silly pictures anytime, and I could call him or text him back. And I could see his current location. They had terrible battery life though.

But I'm not one of those parents who think screens are inherently bad anyway. I think you would be hard pressed to find any credible evidence that they are. I grew up on screens and now I have a good job as a programmer where I can provide a good life for my kids.


> I would object to that product on the basis that kids need fewer, not more screens.

The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't. Not without your help. But you're not helping.

Why javajosh? Why aren't you helping?

Would it be too on-the-nose to point out that this hypothetical smartwatch would probably be running Android?


> They had no version control.

Had to read this two times and check the year of the article to make sure this wasn't from 2001


Hacker News ghost story

I work with clients every day that don't have version control. Some of them are even using PHP!


Do they at least work on files locally? Or do they ssh in to modify the PHP file?


Also the use of "remote" to mean contractor legitimately confused me for a second, as if it never crossed the author's mind that a full-time employee could be remote.


I mean, even without the dark patterns, that's enough to justify quitting.


What are we, as an industry, going to do about our complete abdication of responsibility for taking care of people’s personal data? It can’t just be up to individuals installing protective browser plugins. What would a future where people have real privacy on their computers look like? And is there a way to get there from where we are now?


It can never happen because there is a never ending supply of people who will set aside ethics for personal gain. I have worked for over a decade in the software industry in over 5 different companies large and small and I have never once seen an example where a leader put ethics over profits. Most small businesses live or die based on their next sale and most large businesses need growth at any cost to legally satisfy their fiduciary obligations to their shareholders.


Lobby for something like GDPR. Half the stuff in this thread is illegal in the EU.


> Each file had the same change - they had added code that makes an ajax saveEmail() call onBlur. In other words, email addresses were being saved to the database when a user inputs an email and the input loses focus.

> I told my client - apologies, but I don't want to work on this task because it's a dark pattern. And they reply - no, no, we are just sending people 3 email reminders. And then I try to explain that it's basically saving email addresses secretly.

Perhaps it's because this is a short post, but it seems to be missing context. The email addresses are saved, which may or may not be questionable. It's unclear whether there are any mechanisms to auto-remove the email addresses once three reminders have been sent.

It doesn't seem immediately obvious what the dark pattern is here. What am I missing?


Users expect forms to submit when they click the button. If they change their mind about signing up or purchasing something, they don't expect to get emailed if they didn't click the button. That's the dark pattern. Not the worst I've seen, but not great either.
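Concretely, the mechanism the post describes amounts to something like this. This is a speculative reconstruction, not the site's actual code: `saveEmail`, the selector, and the regex are all assumed names/details, with only the on-blur trigger taken from the post.

```javascript
// Speculative reconstruction of the dark pattern described in the post.
// saveEmail() and '#email' are assumed names; only the on-blur trigger
// comes from the original description.

const captured = []; // stands in for the server-side database

// In the real pattern this would be an ajax POST to the backend.
function saveEmail(address) {
  captured.push(address);
}

// Fires when the email input loses focus -- i.e. as soon as the user
// tabs away, long before any submit button is clicked.
function onEmailBlur(value) {
  // Only exfiltrate values that look like a complete email address.
  if (/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
    saveEmail(value);
  }
}

// Browser wiring would look roughly like:
//   document.querySelector('#email')
//     .addEventListener('blur', e => onEmailBlur(e.target.value));

// Simulated: the user types an address, tabs to the next field,
// then abandons the form. The address has already been captured.
onEmailBlur('user@example.com');
console.log(captured); // → [ 'user@example.com' ]
```

Nothing here requires the form to be submitted; the blur event alone triggers the save, which is exactly why users never expect it.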


Exactly. I've been a victim of a dark pattern like this years ago when I was trying to find mortgage rates.

Many of the online mortgage rate tools are really just lead generation sites, including some of the big name brand ones.

One wanted me to create an account, and I got about halfway through (I stupidly gave my phone number) and then exited without creating an account. I believe they had a sort of "enter email, next; enter phone, next" kind of page-by-page pattern.

Anyway, within an hour of exiting I was getting 3 phone calls per hour from mortgage companies soliciting me for business.


"abandoned cart" technology is why many ecommerce websites ask for your email first when checking out. My kid recently added some items to an online shopping cart to determine shipping rates, which required their email, and now they get spammed incessantly.


Switching to FastMail and the easy availability of arbitrary-business-name@mydomain.com was worth it just for that.


But how many people enter their full email and then back out of the form? This seems like a small thing at the end of the day, especially for 3 emails that doubtless contain an unsubscribe link.

Edit: That got a lot of push back. I'm just saying, this is not the hill I'd die on. At best it would spare some small single digit percent of visitors 3 unwanted emails.


I’m sure you meant your question sincerely. FWIW, I have backed out of many signup forms, and often wondered/worried whether they were doing exactly what the OP talked about. The reason for back-outs is because there is another very common dark pattern this gets combined with: a multi-page signup where email is asked for early, and then something more onerous is requested on a later page, which is where the cancellation is much more likely to occur. The single biggest reason in my experience for late cancellation is when the signup asked for a credit card, even though the signup was advertised as free. It’s a bait and switch, and the deal ends up being a subscription that you can cancel after your first free day/week/month. I’ve had other things besides credit cards, though, like required personal information.


Yes, I can see your point. If you click through to the next step, though, you should assume that doing so submitted the form. Regardless of whether it was done through an old-school POST or Ajax in a single-page app.

I think it's still a dark pattern to email someone who hasn't expressed interest in receiving those emails. There should at least be text informing you of that and giving a way to opt out up front.


Yeah, exactly, it’s about expressing interest, communicating intent, and offering a way to opt out. The broader point is that form submission is not the criteria for saving an email address at all, explicit consent from the user is. When the site itself provides an expectation that a signup process can be “cancelled” they are signaling an expectation that the information will be discarded and not used to market to you or sell your email address to other companies. When they do that despite having communicated to you that the process was cancelled, they’re playing dirty. And I do think selling the email address to third parties is the bigger danger here, not getting a couple of direct marketing emails from site you visited.


And if I fill out most of the form, but some custom BS UI chunk can't play nice and borks the page... I'm done with "company", leaving the page since it never finished loading, but my email address is already saved?

Absolutely f*ing not. Not a small thing in any way. This is unethical and if I got an email from that company you can bet I'll push back.


By that logic let's monitor their clipboard for email and phone numbers. Let's tell them that they should enable notifications so we verify they're not a robot. Let's ask them to download a little executable to see if we can find helpful ways to reach them.

We'll provide an unsubscribe link, I promise.

... But let's just record they clicked unsubscribe because that means they're engaged. I mean maybe they clicked in error? Let's ask them to confirm.

... But maybe they confirmed because they were confused and didn't understand the great value we provide.

We can beat around the bush our way to full-blown scammers all day.

Saving emails from a form NOT submitted is bullshit, and you know it. The user never intended to submit.


It's not on exit from the form. It's on exit from the email field. So if you press tab to go to the next field, the email is sent somewhere without notice.


Probably an awful lot for them to want to pay a developer to do this.


Small things can still be unethical.


This is about control: does the user understand when the data leaves their hands and is given to another? If the user doesn't understand this, then they find themselves unwittingly giving out their personal information, not just to legitimate actors, but also malicious ones.

I think this is a pretty important thing to try and get right.


> But how many people enter their full email and then back out of the form?

Most people. Because we have form autofill.


In that case, why implement the feature at all?? Why die on the hill to capture email addresses from the “small single digit percent of visitors” who never consented to receive your spam in the first place?


We found a new kind of psychopathy test here.


Really?


The dark pattern is capturing an email address even if someone doesn’t submit the form. Imagine you start a guest checkout, type your email address and nothing more, then close the window. That email would still get captured for marketing purposes. Because this isn’t what a user would expect, it’s considered a dark pattern.


And the browser’s autofill may fill in the email address without the user intending to, conceivably.


Or, conceivably, the wrong email address. I'm sure I'm not the only person who has multiple, context-dependent email addresses that (in some cases) are email aliases I share with others, all saved with appropriate supplemental information in my browser.


Well, if the email was saved secretly, as the author wrote, it means users were not aware that their personal info was being saved. Also, since it's third-party JavaScript, the party saving it wasn't the first party, who had the email address anyway.

If they added text for the user like "Your email address is saved by XXX Inc, we will just send you 3 reminders", then it would be ok.

As an example of the same dark pattern: if you look at any hotel booking page (not an aggregator like booking.com, but hotel-owned), I bet you will see at least 5 third-party tracking scripts, and they all store every action you make on the page without the user's explicit knowledge.


5 is pretty low. There are many popular websites that share data with >100 third-parties.


> If they added a text [...] then it would be ok.

I’d agree with that as long as the text is visible before entering the email address, and the text also mentions that email will be saved before completing the form.

> I bet you will see at least 5 third-party tracking scripts

Analytics and tracking scripts are a good point. Sometimes they're implemented in a way where the tracking scripts don't have access to keystrokes, for example by iframing them, and you'd hope that the website owner would care enough about their own security to do that. But you're right that unfortunately it's common. In this case I think we need legal protection in the US and elsewhere, similar to the GDPR, clarifying that collecting such information can only be done with explicit consent.
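
A minimal sketch of the iframing approach mentioned above, with a placeholder URL: a cross-origin iframe runs the third-party script in its own browsing context, so it cannot read keystrokes typed into the parent page's form fields.

```html
<!-- Hypothetical embed: the third-party widget runs inside its own
     cross-origin iframe and cannot access the parent page's DOM.
     The sandbox attribute further limits what the framed document may do. -->
<iframe src="https://widget.thirdparty.example/embed"
        sandbox="allow-scripts"
        title="Third-party widget"></iframe>
```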


would it be surprising to a user if they received an email from a company they hadn’t given their email to?

if yes, it’s a dark pattern.

if no, it’s not.

i side with, yes.


It's not surprising because we're used to spam. /s


For me, this is only ok if at the top of the form there is big, easy-to-read text saying that any data typed in the form is saved immediately, prior to the user hitting submit. Anything else is a dark pattern.

Let's not kid ourselves. We know most users naively think the data is just on their screen and nowhere else until they hit submit. If we write code to circumvent that expectation we know it is a deception. After all, the default behavior is that the form data is not available early. The programmer has to explicitly do something to counter the default.


> It's hard for people to understand or care about ethics in programming. One possible reason for this is that the issues are too nuanced

Oh. "too nuanced". The author is too soft. Most probable reason is that business owners are assholes.

EDIT: oh, and because most developers are spineless creatures not being able to stand up for their principles, unlike the author. If everybody really cared about privacy as they say they do we wouldn't have so many issues. I mean, somebody is implementing all those features.

sorry for the harsh tone but I don't know how to say it any other way. When developers - professionals who enjoy royal treatment on the job market at the moment - are too afraid to upset their employers over an ethical question, it makes me mad.


Our best bet for dark patterns to disappear is via legislation, not putting the onus on developers. The initial premise that everyone cares about privacy simply isn't true, and with the influx of developers in the job market, it's just a matter of finding someone with similarly lax ethics.


The saying “a sucker is born every minute” kind of applies here. Just because you and a majority of people care about ethics doesn’t mean every programmer does. And all it takes is one who doesn’t for the company to be able to implement it.

The free market only works to an extent, and harm to consumers is a valid reason for the government to step in.


Long ago in my career, I was asked to essentially cheat benchmarks. Our software was often benchmarked by third parties to compare the product's performance against competitors'. Our product was not as good, so after squeezing in every legit performance improvement we could possibly find, I was eventually asked to write code that would check which application was running, and if it was a known benchmark program, drop expensive API calls in a way that wasn't very noticeable. This would produce incorrect, but fast, output, but only for benchmarks. I told my manager respectfully that that's where I draw the line. He said that's totally OK; I kept my job and he just assigned someone else who didn't care about the ethics. I quit a few months later for this and other reasons.

I'm a big believer in being part of the solution, not the problem, but there are always plenty of developers who don't care at all and are fine with "sure, boss, whatever you say, boss".


You reminded me of a saying I heard a while ago which may be roughly translated as "the mother of suckers is always pregnant"


I love these non-sequitur literally-mostly-correct-but-conceptually-totally-wrong translations of aphorisms and titles.

One of my faves: in a Spanish-speaking country, an Ernest Hemingway book published (legal status unknown) in English, titled "Goodbye to the Weapons".

Another: from a customer service rep in Southeast Asia: "If I am in your shoe..."


If OP's client is in the EU, then what was asked is already illegal.


The fact that legislation exists has done me wonders fending against dark patterns.

Real example:

Client> We want the "I agree" button checked by default

Me> I'm not a lawyer, but I think this goes against the GDPR in that users must give their consent by action, not by inaction. I would advise to at least check with your lawyers first.

Client> Oh! I'm so tired of those laws, we'll get back to you after checking.

(a couple days later) Client> Yeah you were right, lawyers say we'd better leave it unchecked.

I know that client very well. I would have _never_ been able to convince them on a morality argument.

What's more, after a few interactions like the above now they don't bother going to the lawyers anymore. I just say "I'm not sure this is acceptable" and they immediately agree to whatever non-shady alternative I propose.
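
For reference, the non-shady version of that consent control is just a checkbox with no `checked` attribute, so consent requires an affirmative action (illustrative markup, not the client's actual form):

```html
<!-- Unchecked by default: consent must come from the user's action,
     not inaction (cf. GDPR Recital 32 on pre-ticked boxes). -->
<label>
  <input type="checkbox" name="marketing-consent">
  I agree to receive marketing emails
</label>
```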


>Our best bet for dark patterns to disappear is via legislation

And what would the language you propose look like for this legislation that doesn't have loopholes, caveats, etc for people to get around once the lobbyists are done with it?


The knee-jerk reaction that writing good legislation is just too hard seems to always come up. But it's harmful. We don't need to have model language ready to ship just to be allowed to discuss whether an individual or collective solution is appropriate for the problem, and "you haven't perfected the details" is an unhelpful conversation stopper. It represents the worst kind of cynical resignation. In the long run, that's just as fatal to democracy as the lobbyists you're worried about, because it shows you've already given up.


The knee-jerk reaction is a reaction to the thought that legislation is the be-all and end-all of a problem. Marijuana legalization in the early states made compliance so difficult that it was better to stay underground. I remember in California the pro-marijuana people were actually opposed to the rules as written and voted on.

Talking is cheap. We've been talking. Most of the congress critters in office now do not understand most of what's being legislated (if they've even read the full bill past the talking points). That's why most bills are actually written by lobbyist groups, and the congress critter just puts their name on it. Surely, there's nothing that could go wrong with that now is there?


It's not the be-all-end-all but it's also not a knee-jerk thought. Legislation (and government) is how we solve the collective action problem. That's what it's there for.

Sure, individual developers should stop deploying dark patterns. Plenty of developers deploy them anyway, though, and saying they shouldn't isn't going to change things. Change requires collective action, rather than expecting a huge number of developers to magically reorder their priorities. And unless you think a nationwide ad campaign is more feasible, that means legislation.


>And unless you think a nationwide ad campaign is more feasible

On a tangent, I wonder what the response rate of Facebook's nationwide ad campaign looks like. Not really sure what the point of Facebook's campaign is, but it's a really lame-ass commercial. I hope they're paying above market rate for it too. Any law passed to regulate Facebook would immediately be suspect to me; I'd wonder how much FB spent to write said legislation. Just like when the rules limiting TV owners to a certain percentage of stations didn't make anyone actually sell anything. So many pieces of legislation are like this. I don't see "don't be evil on the interwebs" regulations being any different. Maybe I'm cynical, maybe I'm too pessimistic, but it's not like I'm grabbing my hesitancy out of thin air.


So the law is likely to get watered down. But what's a good goal? What would be an effective limitation on online advertising companies?


Wasn't trying to derail conversation into ad companies. Just used some examples on how laws aren't always about public interest, but more the interest of those being affected by the law so the effect is the least disruptive as possible.


Personally, I just think specificity makes for more interesting discussion. It's easy to say "there oughta be a law" but it's kind of empty calories, intellectually. Responding with aggressive negativity to people asking you to be more concrete is a real conversation stopper.


I don't mind a specific discussion. I react with negativity to two things: the expectation that the next step in the conversation can't happen without some fleshed-out perfected proposal, and the demand for someone else to come up with that solution, rather than making a positive contribution oneself. The problem isn't "asking you to be more concrete" because you can do that while avoiding those two negative contributions.

For example, a response could have been:

> How would you tailor that law? I have a hard time describing exactly what's wrong with dark patterns in a way I think would be enforceable, even though I know it when I see it.

or

> I think a law would have to be carefully written. One starting point might be requiring cancellation to be available using the same process as subscription - for example, if there's a signup form online, there must be an equivalent cancellation form.

which are at least contributions rather than just an insistence that there's a problem that has to be addressed before these ideas can even be considered.

Even just "what language would you propose?" without all the baggage of the original comment at least moves the discussion forward.


Conversation stopping seems to be the agenda of the modern day. Nobody actually wants to solve issues, because then they couldn't argue about the issue any longer. So when the questions get tough: pivot, gaslight, whatabout, whatever it takes to avoid getting pinned down.

To actually further the discussion on wording: something vague like 'any dialog presented to the user must be done in the most consumer-friendly manner' isn't very good because it's too vague. Something specific like 'automatically checking/highlighting/enabling the options the vendor prefers' is too specific, because then the vendor would just use something that isn't a check/highlight/etc. These kinds of things are ripe for "spirit of the law" interpretations, but we've long since given up on that kind of good intention.


Writing good legislation is really easy. The issue is that the more power you concentrate in one decision-making body (in this case, legislators), the more important the issues they deal with become.

That consumes bandwidth, and means that these sorts of 'they should step in and decide how people write software!' ideas are going to get corrupted by pressure groups who want to use the power to achieve their own ends.

If I have to vote for someone who is going to enact policy I actually want, and the trade-off is they are shady on web implementation policy, then I don't care about web policy that much. No one does. The government can't possibly put out reliably good regulation across the range of topics people want well regulated while that dynamic is in play.


The government can't possibly put out perfect legislation in every area, no. But it's ridiculous to assume that everything is inherently corrupted and that there's never any hope. I mean, sure, if every member of Congress is shady on web policy but shares your views on more important issues, then web policy falls behind. But that's just your cynicism speaking - you've got no reason to think that web policy is automatically lacking support.

The other fault in your model is that Congress actually delegates the vast amount of lawmaking to administrative agencies, who are equipped to deal with the less important issues. The FTC, the CFPB, or the FCC could all credibly claim some authority over dark pattern regulation, though I don't know enough admin law to know who would be most appropriate. Agency rulemaking isn't a perfect process either, but it doesn't have the concentrated power problem you're describing and it deals with tons of lower-priority topics. Go read the Federal Register: https://www.federalregister.gov/


There are tens of voters, maybe low hundreds, to whom web policy is important enough to change their votes. There is never going to be any accountability for policy on web dark patterns.

> The other fault in your model is that Congress actually delegates the vast amount of lawmaking to administrative agencies, who are equipped to deal with the less important issues.

Those bodies typically end up full of industry insiders who lock out new entrants and promote the status quo. They aren't going to stop dark patterns, it is more likely they'll want everyone to have a license before starting a website (which is the obvious enforcement method).


If your cynicism were correct SOPA and PIPA would be law. They aren't.


I don't think it is reasonable to cite 2 successful attempts at fighting off legislation as evidence in favour of passing more legislation, or of the integrity of the legislative system. That is an argument in favour of less regulation: so far, 2 attempts have been so bad as to produce mass outcry.

That is exactly the sort of corrupting force that will gather around any power handed to the regulators in the name of pushing back on 'dark patterns'. It isn't a case of 'this is too hard'; it is pointing out that you're implicitly advocating the creation of a point of failure for these people to attack, in an arena where they have an advantage.


The knee-jerk reaction is running to the government to try to make whatever you don't like illegal. Which is getting more and more common. On college campuses it's getting more common for kids to call the police when they see something that offends them, including speech that offends them. They just assume that they have a right to a higher government power to make illegal whatever is bad in the world. That is the knee jerk reaction here.

The idea that software patterns can be made illegal is just absurd, and goes against the early vision of the free software movement which recognized the most important thing was the freedom to write software, rather than the freedom to get a job writing exactly the software you think you should be writing, and the right to throw employers in jail if they fire you for not doing what you're told on the job.


> The knee-jerk reaction is running to the government to try to make whatever you don't like illegal.

The things I advocate the government to ban are things I believe are bad for society.

I don't like ice-cream, but I don't advocate for an ice-cream ban because I don't believe ice-cream is tearing apart our society at the seams.

Dark patterns are bad for society, and prey on people's miscalculations and the inefficiencies of our chemical brains. In the same way, child labour arose because it was profitable in the short term, but we decided against it because it was still a bad thing for society.

The government's job is to steer society to a better place, god knows we'll take it down a bad path left unsupervised.


> The idea that software patterns can be made illegal is just absurd, and goes against the early vision of the free software movement

The free software movement has never been about a libertarian, free-for-all, wild-west. In particular, it has always been about taking power away from developers, and giving it to users (e.g. see gnu.org/philosophy ).

Whilst FSF, GNU, etc. don't advocate banning certain software (AFAIK), they're certainly not averse to government intervention for social good. GNU Taler is a good example: a cryptocurrency which offers anonymity to buyers, but sellers remain identifiable for tax collection.

Also note that 'freedom 0' (freedom to run the software for any purpose) only requires that a license itself doesn't impose restrictions on users. It's taken for granted that users are already restricted by other mechanisms. For example, a license with a clause like "the software must not be used to commit bank fraud" would be non-free, but that doesn't imply that the free software movement approves of bank fraud; simply that (a) in places where bank fraud is illegal, such a clause is redundant, and (b) in places where bank fraud is legal, such a clause would constitute a developer exercising unjust power over users.


Sometimes it comes down to a judge simply deciding "I'll know it when I see it."[1] I think the bigger problem is that our judicial and legislative systems don't have the perspective to fully grasp how vast and damaging a problem dark patterns present.

[1] https://en.wikipedia.org/wiki/I_know_it_when_I_see_it


Do judges have this kind of power any more? Thinking in terms of all of the mandatory minimums and what not that tie judge's hands in deciding cases.

Judges being able to "know it when they see it" would be the only way "spirit of the law" type of legislation would work. Then you have the problem of Judge A in District 1 being much more lenient than Judge B in District 2. Not sure if that's a bad thing or not, just mentioning things.


I would guess that EFF would do the hard work of combing through the text, and sound the alarm if it has such glaring issues.


> And what would the language you propose look like for this legislation that doesn't have loopholes, caveats, etc for people to get around once the lobbyists are done with it?

It already exists - it's called GDPR. It deals with the root cause of this design pattern which is collecting data about clients secretly. It's enacted in the EU and it's excellent. It's not perfect - there's still some oddities and extra work for implementers (and the 'accept cookies' on every page thing is a faff for users) but the overall effect is extremely positive.


The GDPR is absolutely not enforced. I've reported this exact dark pattern to the ICO over a year ago and nothing happened.


Lax enforcement is a separate issue from saying it's not possible to even write a useful law in the first place. Both are needed but the rules need to exist before we can even consider enforcement mechanisms.


> EDIT: oh, and because most developers are spineless creatures not being able to stand up for their principles, unlike the author. If everybody really cared about privacy as they say they do we wouldn't have so many issues. I mean, somebody is implementing all those features.

Why hold only developers responsible for unethical features? What about the actual decision makers and legislation?

By the same logic, if only factory workers decided it's immoral to build weapons, there would be no wars. But those spineless creatures just want bread on the table and don't care about world peace.

The only way to avoid these dark patterns is a set of laws that punish them. The reason why we don't have such laws is that we as a society don't know and don't care. It's we all, not only "spineless developers".


> By the same logic, if only factory workers decided it's immoral to build weapons, there would be no wars. But those spineless creatures just want bread on the table and don't care about world peace.

I think the question is, are developers engineers or just programmers? The responsibility of an engineer is to give pushback when it is deserved.

As you say, the responsibility is on decision makers. But they care about the politics. They will try to make it look like you're on your own. But if you clearly demonstrate the mood of the company's employees, they will reconsider their position. The result will be compromise, not capitulation, but it's better for something to improve even if it isn't great yet.


I don't agree. Responsibility lies with the doer in this case -- the person who types the dark pattern into the code. If my manager tells me to slap my coworker, she's only barely to blame for demanding something dangerous and unethical. I'm the asshole if I do it.

There's plenty of blame to go around. But at current, we really do have to do better as a community and refuse these requests far more often than we currently do.


>When developers - professionals who enjoy royal treatment on the job market at the moment - are too afraid to upset their employers over an ethical question, it makes me mad.

I feel like a lot of people saying this lately haven't tried to get a job since the pandemic.

Invitations to apply from recruiters whose job it is to get people in the door are not job offers, nor royal treatment.


When your industry isn't on this list, https://www.aarp.org/work/job-search/info-2020/coronavirus-o...

it's understandable why people feel that way.


There are many people who have made quiet choices and can't talk about them for numerous reasons. NDAs, for example. Or, my favourite: sometimes there is no defence against a certain type of technique. It's just something nobody thought to do, and any company could do it, yet nobody seems to realize it. There is a huge knowledge gap between software developers and almost all business owners.


Most probable reason is that business owners are assholes.

Some business owners are. Some of us do try to run our businesses in ethical ways, despite knowing very well that we could probably make a lot more money if we included a dark pattern or two. It would be nice if people could at least not insult us while we're doing it by lumping us in with the $$$ crowd and normalising the bad behaviours we want to resist.


> sorry for the harsh tone but I don't know how to say it any other way

You shouldn't have to apologize. Nothing wrong with being harsh. We should be harsh, especially with people who perpetuate these unacceptable practices. Indignation is a perfectly valid response when faced with this.


> When developers - professionals who enjoy royal treatment on the job market at the moment - are too afraid to upset their employers over an ethical question, it makes me mad.

Sure, that’s understandable.

But to understand it, also remember what happens when highly paid experts who get royal treatment, so long as they conform, confront employers over ethical issues... you get the Google Ethical AI massacre.

“Royal treatment” isn’t unconditional, and means you have a lot to lose.


>spineless creatures not being able to stand up for their principles

Why assume they have principles?


My wife hit this the other day. She entered her info on a website, gave up, and then got a polite email from them, something like: "Hey, we noticed you started filling in our form but didn't complete it, are you sure you don't want to sign up with us". She was furious, and we wrote a reply saying how unethical we thought this was. It was a small business (swim school I think).


Good on you for standing up to what you saw as a violation of trust and ethics! You're light on the details of the departure though. Were they explicit that the relationship "fizzled" because of your refusal to deploy the code?

At a meta level, this post seems a little strange to me. Is the linked site your blog? Or a discussion forum you're trying to bootstrap and drive traffic to? If telling the story of this ethical dilemma and the consequences were the driving motivation for this submission, it seems a little low effort and light on details of the consequences. But if it was to drive traffic to your forum, then I guess it did a good job.


Thanks. Yes, it is my forum.


So isn't this illegal? In the same way that a website can't send you their newsletter if you don't check the box at the bottom of the form when you register for something? Is that just a Canadian thing?


Well, who says they're going to send you a newsletter? It could be used by sales to contact you directly to ask why you didn't proceed. That's instantly what I assumed the email was for.


The impressive thing here isn't that some random company is doing it, but how many people don't recognize it as a dark pattern even when told it is one on HN.


Some larger companies have trained psychologists to nudge people to rationalize their deeds.

You aren't just working for an ad company. You are working for the benefit of mankind. You bring joy to everyone and restore justice in this world... with ads.


It's THE thing that I don't like about modern corporate culture: most of us (like 95%+) aren't building a new tomorrow or addressing rising inequalities. We build stuff to make money. That's it. That's the goal. It makes me angry when internal corporate propaganda tries to spin this as a greater mission.

Then you have the coworkers who drink the koolaid and talk as if you were in a cult.


Thanks. Exactly that. I was once lucky enough (OK, expected) to attend an all-hands where the world lead of our business unit talked about how we would make the world a better place by making hospital emergency rooms more efficient, by applying an Uber model to them. How Uber was the epitome of making a bad experience better for the end user, and how this company mindset should be applied to the world. The ER experience was just one example that was ripe for disruption in his view, as it had not changed in the last few decades from a "user perspective".

I filed his talk under "corporate BS for 500", but sadly our local leadership drank the kool-aid.

I do not have a problem with making money. Or helping companies do so. I know that this is the center piece of capitalism. Great if you provide added value with your offering but making money is paying our mortgage in the end.

But for the sake of it. Don't try to brainwash me into believing that what I do makes the world a better place in any significant way.


The centerpiece (or perhaps incentive) of capitalism is the cooperation between people that money facilitates more efficiently. Not to ignore the challenges of liberalized economies or the way this can be twisted...

Not sure about your scenario but writing code does at least sometimes improve the world. I helped create a product that was later used to restore the rule of law in a country trying to recover from genocide. The homicide and assault rates for the entire country dropped by double digits following use. That seemed significant.

Most of my efforts since have had far less dramatic outcomes, but the general circumstance is that even mundane things like keeping the lights on are instrumental for peace in our societies. Such things are far from the dark-pattern side of the industry, but there's something about babies and bathwater: we can contribute by supporting the more constructive and healthy efforts.


> I do not have a problem with making money. Or helping companies do so. I know that this is the center piece of capitalism. Great if you provide added value with your offering but making money is paying our mortgage in the end.

Exactly! Why do we have to pretend we have a goal other than making money without compromising our principles? The best way to make money and keep on doing so is to give people something that is worth spending their money on. A good product at a fair price isn't healing the world, but it's not breaking anyone's leg either.

I wonder if a lot of this goes back to early 20th-century businessmen who tried to apply Transcendentalist thought to business, promoting unity and loyalty by creating a feeling of a noble cause in their employees (Charles Ives, an insurance executive and composer, is the example I'm most familiar with).


The restorative power of meeting people's needs can be underestimated.


And if you call them out on this obvious bullshit you get fired for creating a hostile work environment or some equally bullshit reason.


"even when told it is one on HN" I think this is an appeal to authority, a logical fallacy. https://www.thoughtco.com/logical-fallacies-appeal-to-author...

Is there a reasonable list of "dark patterns" that this tactic in the OP falls into? I don't doubt it is a dark pattern. I'd just like to know which one it is, and have it explained to me in a little bit more detail. Thanks!


I think the point of that part was, "Even the dark-pattern-addicted HN crowd recognizes that this is a dark pattern, so why doesn't everyone?"


I’m seeing a lot of people who didn’t understand what is being described. I’m a front-end dev (kind of) and didn’t understand the description in the blog post. Once it’s been explained on HN, most people here seem to agree.


> It's hard for people to understand or care about ethics in programming. One possible reason for this is that the issues are too nuanced.

In my experience it's because the marketing team has already sold the dark patterns.


Experian's FreeCreditReport.com (in)famously, within the company, sells off your email many times IF THAT IS ALL YOU FILL OUT on the site.


Reading the comments here, I am surprised how many people are outraged over the collection of an email address.

You're going to flip when you find out what they know about your location from your mobile phone.


I have one comment about this: if a bridge or a building collapses, they go after the engineer who signed off on the project. Same with accountants and other professions. We should start holding tech companies and software engineers accountable as well. Is your software spying on people? Using dark patterns? Well, I think you should face the consequences of your actions. Software is as real as buildings or bridges nowadays. Storing users' passwords in clear text? Using admin:admin for credentials? Come on...


Is it really a dark pattern to secretly store the email address? Aren't dark patterns about tricking users into doing something they don't want to do?

This is just illegal data storage.


I mean, technically, you could put something in the web site terms and conditions to the effect of, "all user input is recorded for quality assurance purposes", and then it wouldn't be illegal anymore.

How is this any different than when telephone conversations are recorded on phone support lines, or when a video game records all user input and streams it to the remote server?

It's all about disclosure up front. As long as the web site has done that properly, it's legal, right?


What would be the problem with naming the company here if everything said is the truth and there is no IP being disclosed or other issues like that?

This is similar to when I see people posting in local contexts (like on FB groups) that so and so local company provided terrible service but the poster doesn't want to name names.

What good is done if names aren't named? If we have a name, then things get fixed. Otherwise there is no feedback loop so there's really no point.


People don't want to risk getting sued for nothing.


Similar but not exactly the same...

Zoominfo sells a tool that will "auto fill" visible or hidden form fields based on the user's initial input.

https://www.zoominfo.com/solutions/formcomplete

The tool is marketed specifically as a way to reduce the number of "visible" form fields.

At least in this case, the user has to actually submit the form.


Crap… I just hit the “Accept all cookies” button on this site on Chrome iOS. Did my browser just autofill a bunch of fields and submit my info to them?


Does Zoominfo get the browser to autofill a bunch of fields that are obscured from the user? Interesting...


Yes. You can and are encouraged to make forms with just a few fields, and let Zoominfo auto-fill the rest.

When my client made the initial request I responded with "I don't know if that's possible OR legal" and then a couple weeks later they sent me Zoominfo welcome emails and asked to integrate into their web forms.

FWIW, my clients' forms actually display the fields (none are hidden) so I at least can sleep a little better at night.


There are no ethics in business anymore, in the tech industry or any other.

Being able to buy food and pay your bills is what your paycheck gets you if, and only if, you are willing to compromise your own ethics.

If you start making your work decisions based on ethics, it will be really hard for you to stay at your current job or get a new one.

Unfortunately, this is the world we live in now.


Do you think there are some exceptions to this rule?

I have the impression that grocery retailer Costco is relatively good in terms of respecting their workers and customers, and I think that is one factor in their success. I'd be interested in learning about other companies that have priorities beyond maximizing this quarter's numbers, or learning why my impression of Costco is wrong.


I think some exceptions are small and medium businesses, especially small ones. If you look at the majority of small businesses, not only in America but all over the world, they don't even make a profit; they just pay the bills. They don't have the grow-at-all-costs mindset.

About Costco, I really don't know, but I know some big companies can have ethics because I've worked for one. It was the best place I ever worked.

Unfortunately, it is rare these days.


Anymore? It used to be worse.


Do you think it was worse? Why?


What sort of ethics do you have that need to be compromised to work in tech?


Name one tech company that does not use dark patterns. And I think the problem is not only tech; it's everywhere. People have turned on the "money at all costs" mindset. They know their actions will harm someone, but they simply don't care.


It all depends on whether you mean actual dark patterns or just something you find distasteful. Either way, my current company doesn't; we work in the electric vehicle industry. Not a single distasteful thing, except some of the meetings with clients and management plans. But tech-wise, none that I know of.


To be honest, that's pretty tame as far as dark patterns go.

If a user has already typed their email into the field, then the user had at least some intent to sign up for whatever was being offered.

If you call this a 'dark pattern', it makes what ad trackers and ISPs do with how we browse look like war crimes.


A dark pattern is tricking the user into doing something they didn't mean to do. I don't expect or intend my email address to be uploaded before I click submit. I expect it to happen after.


Where is it written that it's a dark pattern to collect form information before the user presses submit? Maybe the company wants that information because sometimes the form breaks, or the user can't figure it out, and they want to be able to call that person on the phone or write them an email and say, hey, can we help you fill out the form? That's perfectly reasonable in my opinion. But if the intent was to sell that email address to marketers who would put it on as many nonconsensual spam lists as possible, well, that was going to happen regardless of whether the user pushed the button, because the whole website is a dark pattern. That means leaving was the ethical thing to do, so let's not blame technology when the problem was people.


The dark pattern is saving the email without informing the user or asking for permission to do so. Normally, permission is given when the user clicks "submit," but if you save the email before they click submit, it's a dark pattern.


> they want to be able call that person on the phone or write them an email and be like, hey, can we help you fill out the form? That's perfectly reasonable in my opinion.

How is that reasonable? The real-world equivalent is going to a shop, looking around, and then leaving without buying anything and going home. If a store representative then knocked on my door and offered to help me find whatever I was looking for, that would be extremely creepy.

Sure, it might work out if someone did get stuck with a form and actually wants help with it, but that's a tiny percentage of people. The real reason is not altruism; it's to make money.


At Seamless circa 2010, we were asked to make the default sort order for restaurants depend on how close each restaurant was to the next tier of its contract. More orders = more margin.

The tech team was entirely located in Utah at the time, and all of us were repulsed by the idea. We countered with suggestions for improvement and made it clear the project would not be worked on:

- You can make a "promoted" restaurant section above the fold.

- Silently reordering raises UX concerns, since users expect some reasonable ordering, like distance from their location or alphabetical.

We didn't get fired, but it was really scary. I appreciate standing up for ethics and principles. Bravo.


Respect.


thanks


Not enough information here to know if they made the right call.

I can imagine situations where this is to the user's benefit and the company could handle the data ethically.

Example: Situations where the user would be likely to have very unreliable connection and be on the page with the email field for some time with other data entered that they wouldn't want to lose.

It might seem a bit contrived but I've had similar scenarios in the past.

EDIT - as others mentioned localStorage is a better solution for this problem.
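For what it's worth, the client-side version is only a few lines. A rough sketch, where the `signup-draft` key and function names are made up, and the storage object is injected so it can also be exercised outside a browser:

```javascript
// Sketch: keep an unsent form draft in localStorage so a flaky
// connection doesn't lose it. Key and function names are illustrative.
function saveDraft(fields, storage) {
  storage.setItem('signup-draft', JSON.stringify(fields));
}

function restoreDraft(storage) {
  const raw = storage.getItem('signup-draft');
  return raw ? JSON.parse(raw) : {};
}

function clearDraft(storage) {
  storage.removeItem('signup-draft');
}

// In a browser you would pass window.localStorage and call saveDraft
// on input events, restoreDraft on page load, and clearDraft after a
// successful submit. Nothing ever leaves the user's machine.
```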


I think there's a very reasonable general expectation on the web that your data isn't used until you've clicked submit/save/etc. I don't think that this practice could be claimed to have "informed consent", even if the privacy policy mentions it explicitly.

You're right there are places where this sort of thing can be useful, but they're rare, and could even be handled client-side in many cases.


> I think there's a very reasonable general expectation on the web that your data isn't used until you've clicked submit/save/etc

I definitely have this expectation and would be angry if it was violated.

I wonder: do non-engineers have this expectation? In 2021?

I've been involved with web development since more or less the earliest days, back when websites were mostly static pages that eventually sprouted some forms here and there.

But, would a non-greybeard engineer have this expectation today now that UX is generally so different and "app-like" and anything you see in your web browser can basically be assumed to be in constant communication with the server(s)?

Please note that I don't think the current state of affairs is really an improvement. In fact, I more or less hate the "modern" web. I'm just curious about how user expectations have changed.


> Example: Situations where the user would be likely to have very unreliable connection and be on the page with the email field for some time with other data entered that they wouldn't want to lose.

Data entered in a form doesn't require an active connection. If submitting the form fails, you can resubmit it.

Such issues usually start to appear when companies decide to implement their own custom UI from the ground up, in a way where the result can only be described as undefined behaviour and input fields reset their state if you look at them the wrong way.


> Data entered in a form doesn't require an active connection.

Such an important property of the web. Server sends a description of what it wants, client renders and handles interactions. Things like server side rendering are anti-web.

Server-side rendering, streaming video, and such aren’t incompatible with internet philosophies in the slightest, just not aligned with the intent of the www. Apps, even the ones that aren’t merely web views, are more www-like in this regard.


That's why LocalStorage exists.


Possibly beneficial and useful features can still be dark patterns. Whether or not there is any potential user-value is irrelevant to the ethics of design decisions like these.

If you're dealing with user data you should never compromise a basic respect for that user's intent. Designing for implied consent where consent should be explicit (e.g. sharing personal data) is unambiguously a dark pattern.


I know of a situation where someone refused to do something maybe 1/4th as shady as this. It sounded semi-reasonable to me.

In this case, the boss simply assigned the task to someone else. Employee stayed employed.


Then save it to local storage


Then you can't send them reminder emails, can you? No, this was pretty sleazy, and quite probably in violation of the GDPR.


In 2007, I had a script on my website that recorded every click and every mouse move. While my classmates were reading my webpages, I was watching a video simulation of their mouse movements.

My website didn't contain any textbox, so there was no personal data except the IP address. Nevertheless, I more or less knew who was reading my page, because every student had a static IP.

Nowadays, I don't collect mouse moves anymore, but I use navigator.sendBeacon() to report video playback behavior.
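For the curious, the sendBeacon part is tiny. A rough sketch of what such playback reporting might look like; the event shape and the `/playback` endpoint are invented for illustration, not taken from the parent's actual script:

```javascript
// Build a small playback-event payload. In a browser, sendBeacon queues
// the POST for delivery even if the page is being unloaded.
function playbackEvent(videoId, action, positionSeconds) {
  return JSON.stringify({ videoId, action, t: positionSeconds });
}

// Browser wiring (illustrative, not runnable outside a browser):
// video.addEventListener('pause', () => {
//   navigator.sendBeacon('/playback',
//     playbackEvent('lecture-01', 'pause', video.currentTime));
// });
```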


You just described tools like Hotjar.


I used to work for a company that did this, but it was supposedly covered by their client sites' TOS. The client site would install our tag, and our JS code would listen for emails being entered, send them to our backend, and cross-reference them with visitors to our other clients' sites. This allowed us to send emails to visitors who left sites without buying anything, even if they never entered their email on that site.
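The mechanics of that kind of tag are straightforward, which is part of what makes it unsettling. A hedged sketch of the general technique; the regex, the `/collect` endpoint, and the function names are all invented for illustration, not the actual vendor code:

```javascript
// Decide whether a field value looks like an email address.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function looksLikeEmail(value) {
  return EMAIL_RE.test(value.trim());
}

// If it does, hand it to a transport function before any submit happens.
// In a browser, `send` would wrap navigator.sendBeacon.
function captureIfEmail(value, send) {
  if (!looksLikeEmail(value)) return false;
  send('/collect', JSON.stringify({ email: value.trim() }));
  return true;
}

// Browser wiring (illustrative):
// document.addEventListener('blur', (e) => {
//   if (e.target instanceof HTMLInputElement) {
//     captureIfEmail(e.target.value,
//       (url, body) => navigator.sendBeacon(url, body));
//   }
// }, true);
```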


Sadly, this is standard for a certain set of services and websites.

I think some of these widgets, similar to Intercom, do this.

In short, I assume the sales team is just following industry practices for that vertical / type of site. If not, then it is your job to explain the best practices for your industry (i.e., how these kinds of things will generate a lot of spam, make potential customers wary of the service, etc.).


If the company in question is EU-based, or also offers its services in the EU, you are obliged to report them to the data protection authorities.


I don't know why this hasn't come up in any of the threads: under the GDPR this is illegal. Your data protection officer would be furious, and your VP would probably be out of a job for getting the company a €100,000 fine. There is no reason you need that data when the user did not submit it, so you have no explicit consent. As a developer in Europe, I love the GDPR. I love my privacy and the privacy of the users. All this marketing and sales stuff has been proven not to work very well anyway, so what the heck?


"Employee surprised when they wouldn't perform their work duties and were let go because of it."

We get it, you personally think dark patterns are here to destroy humanity. Lesson learned: don't shit where you eat. There's a lot I don't like about my work too, but I'm not about to risk my stability just to make a point to some people in a blog post.


A reminder that WE are the product. Our emails, our phone numbers, what we're shopping for, what we drive, where we live, who we voted for....all of this is in a ginormous Oracle marketing database that they rent to other companies. Google, Facebook, Microsoft, Amazon...they all do the same thing. They track us and every digital footprint we make.


> The first thought that came to my mind was - I'm not going to be here much longer. And it's not because I quit (I probably should have).

I would say, no! If you voluntarily quit instead of being let go, there can be certain disadvantages, depending on the employment laws and such in your jurisdiction.

Unless you have something lined up, of course; then it is largely moot.


> They had no version control.

In my mind this gives me some relief that the company was a small outlier. That's not to say there isn't a problem here, which is that morally objecting to work assignments doesn't seem to be a thing in the corporate world.


I previously worked at a couple of companies that flirted with the line on some of these dark patterns. As far as they go, this one is pretty benign. If I recall correctly, I was in favor of doing this one. There is enough intent there (typing your whole email into the form) that it doesn't really feel like abusing the user. That being said, my coworker and I had to fight back and refuse to build a few other dark patterns.

The real issue is the legitimate companies that take it way past dark patterns. Credit rating bureaus that scrape 100% of your data and the data of anyone you've ever been near, Facebook stalking you across the internet, etc. As far as I'm concerned, if it's okay for Facebook to do this exact dark pattern on everyone in the world, I can make a stupid directory site have a little more juice.


I don't see how this is benign in any sense, but I may be missing something. If I type my email into a box on a website and decide a moment later NOT to press "Submit", my expectation is that my email was NOT saved to the server. I didn't consent.

How is this pretty benign?


It sounds like they mean relative to the stuff they were working on. Very colloquial.


If it's "pretty benign" to save a user's email before they click submit, then it wouldn't be a big deal to inform the user that you just saved their email instead of trying to silently hide the fact.


Many forms autosave, I don't see this as a clear example of a dark pattern.


A bit off-topic, but does anybody have any examples of websites which use this dark pattern? I am interested in trying to disable this "feature" client-side for the sake of privacy.


Once you have entered your email address on a page with JS and the page has sent any additional network traffic, you have to assume that the server has your email address if they want it. JS is just too complex to offer any other guarantees.


You could disable JavaScript, otherwise there's no general way of preventing this type of behavior -- as far as I can imagine there are too many different ways of scooping up user input data to generally block via a browser add-on.


If it's been implemented, show the URL and where the code is.


Hire me. I’ll implement it.


Many years ago I was asked about adding a feature to our website, a shopping portal for eyeglasses, that would bring up a popup when the user tried to leave the site. I immediately said I would not do it. I didn't get let go, but the look the manager gave me was one of shocked disbelief; I imagine it was strike one. I quit not long after. The manager was from a big corporation (Luxottica) that had recently bought us. It seems relatively normal that the kind of people who reach positions of power at large companies just have no respect for other humans. It makes me sad sometimes.


I work in e-commerce and this kind of thing is totally standard. You get salespeople trying to sell you this and much worse regularly.

I was once pitched the exact dark pattern as in the OP - covert email collection. I was gobsmacked. I wrote about it at the time thus:

> Towards the end of the slide deck, Dom excitedly explained how if a user enters an email address in any form field anywhere on the website, then regardless of whether the form is submitted, that email address will be captured by IntegriMart and paired up with a browser fingerprint for that user. This, presumably, allows us to “continue to build a dialogue” with that user.

Full story for those interested: https://www.michaelbromley.co.uk/blog/the-covert-opt-in/


there's probably a lot more context, but this is a pretty common marketing feature

some examples:

https://github.com/carlsednaoui/ouibounce

https://www.hubspot.com/products/marketing/exit-intent

i like how this page describes them more as 'needy' than 'dark', since dark seems to imply immoral

https://www.nngroup.com/articles/needy-design-patterns/


I was curious how that works, so I clicked into the ouibounce repo and read the README. I still don't get how it works, but I enjoyed how they flirt with self awareness:

> it's very easy to create something spammy-looking.

...and then they list "good" uses that are... all spammy.


> I still don't get how it works

These actions can be used to infer possible intent to leave: scroll up (to reveal address bar on mobile), move mouse toward top of viewport (to move mouse toward address bar), swipe down (to reveal address bar on mobile), loss of UI focus, probably others.

I hate these patterns, and they are 100% appropriately described as dark. The other devs and I spent months at one place arguing with a PM about how janky and broken our app felt because of the third-party intent-to-leave detector they had injected via Google Tag Manager, without dev involvement. (GTM was the product that convinced me Google gave up on not being evil.)


Okay, so it's all indirect? I usually just hit Ctrl-W, and beyond the native browser dialog that sites can use to ask whether you really want to leave without saving your work, I wasn't aware of any way to intercept that or do anything about it.


The most common way those exit-intent popups work is to check for when the users' mouse leaves the viewport e.g. https://stackoverflow.com/questions/923299/how-can-i-detect-...
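The core of that Stack Overflow approach is a one-line heuristic: the pointer leaving the document through the top edge is treated as heading for the address bar or close button. Roughly (the function names are mine, not from any particular library):

```javascript
// Heuristic: a mouseout event whose relatedTarget is null means the
// pointer left the document entirely; clientY <= 0 means it left
// through the top edge, i.e. toward the browser chrome.
function isExitIntent(evt) {
  return evt.relatedTarget == null && evt.clientY <= 0;
}

// Browser wiring (illustrative):
// document.addEventListener('mouseout', (e) => {
//   if (isExitIntent(e)) showExitPopup();
// });
```

Note it is only a guess at intent, which is why these popups so often misfire when you merely reach for another tab.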


Left out of this post is the form or CTA that follows me as I scroll. It doesn't make me feel like the form is easier to find; it makes me feel like I'm being hounded to give up my information.


Not to play devil's advocate here, but as an employee if you refuse to do something your manager tells you to do that is both within your job description and not illegal then you are in the wrong in terms of your duties as an employee.

The feature you describe is annoying (I think it's fair to say that everyone hates these pop-ups) but rather innocuous and certainly not unlawful so there isn't any reasonable ground to refuse to do it if you're employed as a web dev.

That's the nature of employed work.


The concept of professional ethics exists and is well established in other fields such as medicine, engineering, law, and journalism. Why should software development be regarded as taking place in an ethical void?

Granted, if you refuse to do something your boss tells you to do because it violates your concept of professional and personal ethics, they may decline to continue employing you. And you might not have a legal basis to challenge your termination if what they asked you to do is legal. But walking away from a job may be the best option in some situations.


Conversely, if you work for a company that lists its "values" as something like "being customer-focused", it's easy to find grounds on which to push back on things that go against the company's supposed values.

Software engineering is largely think-work. Some of it is creatively coming up with ways to solve novel technical problems, but an often understated part is thinking about how your implementations will affect your users, and optimizing for solving your users problems.


That's all nice but quite irrelevant to my previous comment's point.


Quite the opposite. If a middle manager tells you to do something you think harms the company or its customers and you don't push back, then as a knowledge worker you're not doing your job.

Ultimately you may need to comply once the decision is made, but they literally hire us for our judgement and our ability to work independently toward the company's objectives.


> Ultimately you may need to comply if the decision is made

Yes, you 'need' to comply because you're an employee. That's all there is to it and I'm very surprised by the emotional reactions to my simple statement of fact. It's odd.

A software dev is not hired for their judgement on company strategy or marketing. It's never a good idea to tell others how to do their jobs.

I would also say that pushing back and refusing are two very different things. If you think something is not good for the company you may say it constructively (though be careful). In the end, "disagree and commit" or quit are the two professional options.


This comment only works with a very narrow definition of "duties of an employee".


> ...something your manager tells you to do...

There's a principal-agent problem here. The owners of the company don't care about a random manager's feelings. The manager certainly does, however, so much of what any particular manager will require is more about making the manager feel better than about making profits. Other employees are correct to disagree with such misuse of resources.


Something being legal has nothing to do with it being ethical. Many terrible things used to be entirely legal. Some still are. Having said that, this kind of thing puts your job on the line. Standing your ground on ethics will always come with a price.


I specifically wrote 'legal' because that is in principle a valid ground to refuse to obey your employer's instruction. 'Unethical' as in 'I don't agree with it' is not.

My reply was also specifically to a comment about a pop-up window shown when a visitor leaves a web page. It's over the top to bring ethics into this. I feel the term is being diluted into meaninglessness these days, and too often used as an excuse to do only whatever one agrees with, which is both unprofessional and, frankly, childish.

Again, I'm very surprised that my comments are being so badly received when they are simple statements of fact and reality of the employment relationship. Maybe many readers are still quite inexperienced...


Oh I was ready to accept the consequences.


Congratulations. Getting fired for doing the right thing might hurt in the short term, but it's the right thing to do and eventually (perhaps already) becomes a great story to tell.


+ karma


> They had no version control.

I wouldn’t have accepted the position in the first place.

If I had really needed the job, I would have left as soon as possible (six months max).

They did him a favor by letting him go.


Thank you for having strong ethics. But, what a bizarre dark pattern.

How many users:

1. type out a complete email address

2. and then decide at the last moment to not hit "submit?"

Who even does that? Like 0.00000001% of all users? I've been using the web since the mid 90s and I don't think I've ever done this. Ethics aside, it's hard for me to believe that this dark pattern even yielded a useful return.

To be perfectly clear, I'm not defending this dark pattern. It's bad. I'm just questioning its efficacy.


I’ve done this myself, and it’s typically because I will want to go through the checkout process to see the total cost of something (including shipping which wasn’t shared on the item page).

I’ll use auto fill to save time and quickly click through to the page that finally shows the full price, then I will back out.

More recently, instead of auto fill I take extra time to type a fake email address just to prevent the “lost cart” marketing emails that were mentioned in the article.


Me too, but the author describes something different.

You say that you "click through to the page that finally shows the full price." So, you are explicitly submitting multiple times. There's a reasonable user explanation that your information would be received by the server.

In the author's example, they're describing something more insidious -- saving the user's information before they submit the form, thereby subverting user expectation.


If I haven't consented to receive your marketing emails or have not entered into a business arrangement by purchasing something, I do not expect to receive emails even if I expect that my information has been received by the server.


Autocomplete would be my guess. Which makes it more obviously scummy.


> apologies, but I don't want to work on this task because it's a dark pattern.

While I understand the intent, words matter. If those were the actual words used, then yes, it will sour the relationship. Unless we notice a systemic pattern, accusations should wait, and that sounds like an accusation.

Perhaps the client was indeed trying to do something customer-friendly and simply didn't know better. It would have been better if the OP had offered a solution rather than accusing them of engaging in dark patterns and refusing to work on the task.

And if the client had not taken that solution, then yes, I will agree there is something nefarious. But until then, I wouldn't have jumped to that conclusion so quickly.


The bigger smell is no version control :D Saving emails `onBlur()` is probably not a dark pattern.


You're getting downvoted while being right. A dark pattern is tricking the user. Not saving the data they enter. But most people around here call anything that is distasteful a dark pattern.


> Dark Patterns are tricks used in websites and apps that make you do things that you didn't mean to, like buying or signing up for something.

https://www.darkpatterns.org/


Yea it isn’t tricking them into something. It’s capturing data they’ve input.


It's not tricking, but it is deceptive, since they haven't chosen to submit. The current mental model of forms is that data is saved when you actively submit.

Yes, there are some settings panels that automatically save changes, but I have never seen it for forms.

If there's a submit button on that form, that's an even stronger signal to the user that nothing will be sent to the servers until you click it.


It is a dark pattern - it tricks the user into thinking their data won't be submitted until they click "submit," but instead saves it right away. It would only not be a dark pattern if the form clearly said "your email address will be uploaded to our servers immediately after typing it."


Why were they tricked into thinking that? OP didn't mention any text stating it. Lots of sites send updates constantly; I've never been tricked into thinking they didn't, I just wasn't explicitly informed. That's a difference.

As I said, it is distasteful but not a trick.



