Here's my question, not to you specifically but to the royal "you" of any bigTech employees reading this forum or similar: why do the employees not stand up at all-hands meetings and raise this issue as a serious problem? Of course I know the answer: they just don't care, and their personal paychecks are too high for them to risk becoming a squeaky wheel.
However, it is obvious that management does not intend to fix this issue. They clearly do not feel that negative comments on some techy, nerd-centric boards or from Twitter followers amount to enough to cause a negative impact on the bottom line. So instead, people with respect lose respect for those "yous" that work at bigTech.
So, the people complaining at an all hands are likely to be completely ignorant about the facts on the ground. They're not working on these systems or processes. They're just gullible enough to believe what they read on the internet, and take action on it.
The team responsible for the account suspensions / content takedowns / etc on the other hand will have the numbers to show that they're doing a good job, and the expertise to predict the implications of making the policy changes the complainers ask for. Not just how much it'd cost directly, but the second-order effects on abuse from making different tradeoffs.
So let's say you want to give high-touch customer support to billions of free accounts. Not only is it going to be fabulously expensive, but it's going to be abusable as hell. The abusers will quickly learn just what kind of sob story has the best chance of fooling the humans, and get their accounts falsely unbanned or even use it to hijack the accounts of others. The only way to avoid this is to make sure the customer service reps have no agency. But then you're paying tens or hundreds of millions a year just to have humans execute a script.
And these predictions will be a lot more credible than those from random employees parroting social media posts and making vague claims about potential brand damage or loss of trust. Ignoring the complaints is going to be a pretty easy choice for an exec.
> The team responsible for the account suspensions / content takedowns / etc on the other hand will have the numbers to show that they're doing a good job
I think it should be evident by the number of cases we see posted about on HN alone that they are not doing a good job. Or, I guess maybe they're doing a good job based on the metrics they were given, but I wouldn't consider it a good job in the sense of living in a fair society where negative actions taken against people (by governments or by private entities) should have a reasonable appeal process involving real humans.
My view is that if even one person loses an account who shouldn't have, and there is either no process to appeal and fix it, or the process ends up not giving that person their account back, they're not doing a good job, full stop. I know that's an incredibly high bar, but I don't care. The loss of many of these types of accounts can cause real financial and emotional harm to people, and that's just not ok.
It should be even more evident that you can't judge whether they're doing a good job by looking at just the numerator. You need to know the denominator as well. Or rather, the denominators.
In a simplified model there are two groups of users {good, bad} and two outcomes {suspended, not suspended}. You're saying that success can be judged by whether there are any people claiming to be good (though you have no idea whether that's true; they're just claiming it on the internet) ending up in the {claims-to-be-good, suspended} bucket.
But actually to judge whether they're doing a good job, you'd need to look at the {good, not suspended}, {bad, suspended} and {bad, not suspended} buckets too.
The first one is a baseline. Obviously, if you've got a billion users, all your numbers are going to be 1000x higher than those of somebody with a million users, just due to the higher number of users. The number of internet complaints will be 1000x higher too. But the actual harm to the average user from the mistakes is the same.
The second bucket holds the successes, and they are going to be totally invisible to everyone not working on the problem. Not only to the random HN commenters, but to the random bigco employee too. They literally can't judge the success. The only visibility comes when that number is too low, since that obviously means the third bucket is too high. And that will be visible as the platform being overrun by spam and scams.
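In code, the bucket framing above is just a confusion matrix. A toy sketch, with all counts entirely made up for illustration:

```python
# Confusion-matrix view of {good, bad} x {suspended, not suspended}.
# All numbers are invented for illustration only.
good_suspended = 1_000               # false positives: the visible complaints
good_not_suspended = 999_000_000     # the invisible baseline
bad_suspended = 50_000_000           # the invisible successes
bad_not_suspended = 500_000          # abuse that slipped through

# Precision: of the suspended, how many deserved it?
precision = bad_suspended / (bad_suspended + good_suspended)
# Recall: of the bad actors, how many were caught?
recall = bad_suspended / (bad_suspended + bad_not_suspended)
# False-positive rate: the harm rate for good users.
fp_rate = good_suspended / (good_suspended + good_not_suspended)

print(f"precision={precision:.6f} recall={recall:.4f} fp_rate={fp_rate:.2e}")
```

The point of the sketch: judging only by the visible {good, suspended} complaints tells you nothing about precision, recall, or the false-positive rate, because all three depend on the buckets outsiders can't see.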
Now, I understand that your view is that it must be completely impossible for a possibly good user to lose an account. That's just the kind of thing that people who don't understand the problem space would say at an all-hands open mic, and then get ignored because their view is so detached from reality. It's not even a matter of resources; even if you threw infinite resources at the problem, what you'd end up with is a worse experience in the aggregate.
You'd have scammers reinstated, and continue scamming more people. You'd have accounts be hijacked because the scammers are going to be better at social engineering their way into accounts than the real users will be at social engineering their way to account recovery.
It's all tradeoffs, and absolutist statements about how it's unacceptable for even a single good user to suffer any harm are just as unrealistic as absolutist statements about how even a single piece of spam can't make it through. The best you can do is try to find the best place in the tradeoff space.
What is the bar for a good job? 99.99%? 99.999%? Something like Amazon or Google has billions of users. Getting it perfect is impossible. So what's the bar?
I too rage against the insane, bewildering, almost unimaginable scale of anti-customer-service behavior that we seemingly must succumb to as customers.
Raising complaints internally always has value in any organization. At a minimum, it highlights the fact that people outside the organization think your colleagues are doing a shit job.
The patronizing "Oh you silly plebs, we know what we're doing" line is unlikely to get any traction in a developer community where second-guessing and challenging established designs and decisions from SMEs with decades of experience is common.
>The patronizing "Oh you silly plebs, we know what we're doing" line is unlikely to get any traction in a developer community where second-guessing and challenging established designs and decisions from SMEs with decades of experience is common.
Sounds like you need to start working with smarter people. I used to think this way, until I started working with smart people.
Now I still question every decision, but I have experts that I believe will have answers worth listening to.
The answers these days are rarely "we know what we're doing," and much more commonly "we tried that, but it didn't work because [reason]." Complaints without suggestions or requests are normally ignored.
> If you're not producing results, your experience and expertise and knowledge are no longer relevant. Everything has a shelf-life.
ahh, line on graph must go up, huh?
This isn't a critique of the suggestion you didn't make but presumably meant to: everyone should always be seeking out new ideas to improve the status quo. But the meme that your work has no value if you're not improving it is a trash take. CPR hasn't changed dramatically in decades. Good chest compressions are still the single most important thing if you want to survive to hospital discharge. But sure, the paramedic who isn't inventing a new method has no value.
I'm not saying there's no value in raising complaints internally. I'm saying that the proposed method of having somebody uninformed do it in an all-hands meeting based on internet anecdotes has no value. It has no chance of effecting change.
> why do the employees not stand up at all-hands meetings and raise this issue as a serious problem?
This definitely happens for all kinds of problems at big tech companies. But you might be absolutely shocked to learn that many times, random engineers complaining about something doesn't result in management taking immediate action to fix the issue.
> Of course I know the answer: they just don't care, and their personal paychecks are too high for them to risk becoming a squeaky wheel.
Stuff like this does get brought up by employees; Google especially has a lot of internal memes criticizing various aspects of its culture or policies. But executives mostly just deflect or ignore it, probably because they don't see the money in fixing it.
> So instead, people with respect lose respect for those "yous" that work at bigTech.
I guess the more ignorant ones do? I figured it was common knowledge that when something is broken policy-wise at companies, and they're clearly avoiding fixing it, it's rarely the non-management employees that are the problem. Almost always, it's a strategic decision by management to not address the issue (or sometimes they do address it, but poorly).
> I guess the more ignorant ones do? I figured it was common knowledge that when something is broken policy-wise at companies, and they're clearly avoiding fixing it, it's rarely the non-management employees that are the problem. Almost always, it's a strategic decision by management to not address the issue (or sometimes they do address it, but poorly).
Yikes. I'd call that ignorant myself.
By supporting the "strategic decision by management" you implicitly approve of it. This is particularly true with well-paid FAANG employees who could absolutely take their expertise elsewhere.
If they were torturing puppies then sure, but the context of this discussion is bad customer service. Having subpar customer service seems to be typical for corporations (and governments) in general, so no, it doesn't trigger my instinct to leave. Especially when the issue is providing customer support at scale to millions, if not billions of users (many of whom don't actually directly pay anything).
I wouldn't leave a company just because execs there seem vaguely anti-union either, even though I think unions are good, because again, that's most companies.
> By supporting the "strategic decision by management" you implicitly approve of it.
You could say that about a lot of things. Your government does something bad and you don't immediately hightail it to the next city/state/country? I guess that means you implicitly approve.
> (many of whom don't actually directly pay anything)
They are paying, though, with their habits and user data. That's not direct payment, but I don't think the distinction matters. Someone with a Google or Facebook account does pay. Not in currency, certainly, but having those people on the platform is certainly valuable to Google and Facebook, because they monetize their presence in other ways.
Correct: they're still a source of revenue; they're a customer. But legitimately good customer service is expensive, and it may not be viable to provide it even for marginal customers.
> Especially when the issue is providing customer support at scale to millions, if not billions of users (many of whom don't actually directly pay anything).
What about those who do pay? Because I can promise you, you don't get any better support even if you're paying them tens or hundreds of thousands a year. Maybe if you're paying them millions.
And the context here is NOT customer support, the context is cutting people off from their friends and family because the AI was wrong.
> By supporting the "strategic decision by management" you implicitly approve of it. This is particularly true with well-paid FAANG employees who could absolutely take their expertise elsewhere.
I mean does somebody grinding down asphalt to repave a road implicitly approve of some random government policy?
Not necessarily, but the same principle applies. You can express discontent by voting with your feet and going somewhere else. And many millions, if not billions of people have done exactly this.
And yet, it's also extremely common to implicitly tolerate bad behavior by government, and part of that is that governments do a lot of things and probably all of them fuck up somewhere. If you tried to avoid local governments in the US with "NIMBY" tendencies, you'd rapidly go insane.
Essentially, kinda. They just have different titles for similar roles. If you compare the charter for a city to a company's incorporation papers, they are very similar. Both types of papers are filed with the state. Probably not the answer you were seeking, though.
The toxicity that this whole type of signaling represents from a company just means they give zero shits about users. Therefore, its employees are placated by paychecks, happy to absorb the negative aspects and laugh them off on their way to the bank to cash their large checks. This is the loss of respect others have toward the "yous".
If you 'lose respect' for individual employees because the megacorporation they work for has bad customer service or UX design or what have you, not sure what to say.
Most companies seem to suck in some way or another, reflecting that onto the individual workers just seems silly to me.
They're not "laughing it off" because they're paid well; if they were paid badly instead, what would change? Do people with low wages who work for corporations do something differently here?
For me, myself, and I: we have changed jobs when it became obvious that management wasn't going to change. I made my very public comments at all-hands meetings, as well as other attempts with coworkers at internal change. When it was obvious we were on the wrong side of the internal moats, it was time to leave. I've even taken pay cuts to stop being involved in the insanity. So yes, I've walked the walk after talking the talk. I did not want to be associated with that company.
I think tech companies could probably do better with customer support, but I also recognize that it's an extremely difficult problem to handle realistically at scale, especially when most individual users pay little to nothing directly for many services. A higher-touch CS model would do better, sure, but that's expensive. It's different imo when you're a store or similar business where your customers are constantly actually giving you money.
Because when you're dealing with a billion users, a one-in-a-million mistake still screws a thousand users, and everyone realizes that getting the rate of mistakes anywhere near that low is impossible.
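The arithmetic behind that claim is simple enough to write down:

```python
# Back-of-the-envelope: even a one-in-a-million mistake rate harms
# a lot of people at billion-user scale.
users = 1_000_000_000
error_rate = 1 / 1_000_000        # one-in-a-million false suspensions
wrongly_banned = int(users * error_rate)
print(wrongly_banned)             # a thousand users wrongly hit
```

So even an error rate far beyond what any review process realistically achieves still produces a steady stream of visible horror stories.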
At the same time, you're dealing with actual bad actors that genuinely need to be banned, and there is a lot of this abuse. Mistakes will happen. You can't stop banning users. The bad actors will file appeals (in some cases including public escalations; see the various cases where someone complained about being unfairly banned from a game for cheating, and after a bit of a shitstorm, the company posted the evidence of them cheating). The appeal processes often work, sometimes don't; the public shitstorms are often cases where multiple things went wrong.
There is no easy solution, and at the scale these companies operate it's obvious to everyone involved that "just having a highly skilled human review every case" is completely impractical (not just "too expensive for the company to want to pay for it"). Because for each genuine user affected you have many abusers.
The pay-for-support proposals have several issues (PR impact from "screwing customers then extorting them to pay", stolen credit cards, engineering required to make it happen).
Abuse teams are (understandably) rather tight-lipped, and also tend to insist that telling the user what they did wrong would enable abusers to dodge the protections - this is something I don't understand (the abuser presumably knows what they did, while a user wrongfully accused does not), but it seems to be consistently said by abuse teams from many different companies.
All this combined makes it very hard to push for improvement, because there is no clear path towards a solution. You can ask a question at an all-hands generally raising the problem, but you'll get the usual "our abuse teams are working very hard on this, it's a hard problem" non-answer.
At the same time, yes, genuine users are absolutely getting screwed, and for the individual affected user, the consequences can be pretty severe.
(This reply is probably too late, I missed it while scanning this thread in the morning.)
> Abuse teams are (understandably) rather tight-lipped, and also tend to insist that telling the user what they did wrong would enable abusers to dodge the protections - this is something I don't understand (the abuser presumably knows what they did, while a user wrongfully accused does not), but it seems to be consistently said by abuse teams from many different companies.
The abuser will have done a lot of things. They don't know which one(s) got them caught. The more information they're given, the faster they can iterate to avoid getting caught next time.
But also, the real user doesn't actually benefit that much from knowing what got them banned. What are they going to do? Go back in time and not do it?
Now, a thing the user will greatly benefit from is knowing exactly what (if anything) they need to do to get their account back. That's where the focus on informing users should be. But since the abusers will also be told the same information, this needs to be something that legitimate users will find easy, but abusers or account hijackers will find hard to do at scale. So it can't just be "write a sob story about how you need the account back because it has photos of your dead grandmother". The abusers will be more competent at that than real users.
Typically it has to be spending a limited resource, or at least proving you have access to some limited resource and rate-limiting recovery actions by that resource. Some examples: non-VOIP phone numbers, proof of real world identity, social vouching by accounts in good standing.
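As an illustration only (not any real company's mechanism, and all names here are hypothetical), rate-limiting recovery by a scarce resource might look like this minimal in-memory sketch:

```python
import time
from collections import defaultdict

# Hypothetical sketch: gate account recovery on a scarce resource
# (here, a phone number) and rate-limit by that resource rather than
# by account, so abusers can't retry at scale with throwaway accounts.
RECOVERIES_PER_WINDOW = 2          # allowed recoveries per phone number
WINDOW_SECONDS = 30 * 24 * 3600    # rolling 30-day window

_attempts = defaultdict(list)      # phone number -> recovery timestamps

def allow_recovery(phone_number, now=None):
    """Return True if this phone number may be used for recovery now."""
    now = time.time() if now is None else now
    # Keep only attempts inside the rolling window.
    recent = [t for t in _attempts[phone_number] if now - t < WINDOW_SECONDS]
    _attempts[phone_number] = recent
    if len(recent) >= RECOVERIES_PER_WINDOW:
        return False               # resource exhausted for this window
    recent.append(now)
    return True
```

A third attempt with the same number inside the window is refused. A real system would also persist state, verify the number isn't VOIP, and combine this with the other signals mentioned above.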
>it's obvious to everyone involved that "just having a highly skilled human review every case" is completely impractical (not just "too expensive for the company to want to pay for it").
It's not obvious to me. These are trillion-dollar companies, and it's not like the appeals process doesn't already take weeks (too long, but I digress). On top of that, keep in mind that some influencers are literally making the company money. Having them wait weeks over a false negative is unacceptable.
So yeah: get a proper review team, make sure they see the actual message that got the person banned instead of needing to scour an account, and give them a proper minute to review and decide.
One of my previous managers told me that they kept me exactly because "I am not a sheep," i.e. because it required non-zero effort to convince me to do anything, and I always tried to poke holes in any proposals. So, in a healthy organization, this would not be a risk.
Companies make decisions that ultimately affect their users in a negative manner including when the company's decision is a mistake. Not having a real method of correcting the mistake is a huge sign to me that says company is not someone that I want to do business with at all. I understand mistakes happen, but claiming absolute immunity and acting like no mistake was made is the absolute worst customer service position to take.
Meta has billions of customers. Not every decision can be positive for every customer every time.
There are processes to correct problems there. It requires someone to champion it, and to make a good enough case that others will join and for stakeholders to be convinced.