> The problem is that Facebook is giving the village idiot a megaphone
While you're not wrong that it's giving the idiot a megaphone, it's missing the greater picture: it's giving _everyone_ a megaphone. The real question is why can't people discern the difference between the idiot and the non-idiot?
I'd also note that a big issue now is trust -- trust in "elites" (technocrats, the wealthy, those in positions of power) has been declining for a long time. I think people are not so much seeking out the village idiot as massively discounting "experts".
A list of things that come to mind which have broken trust: the '60s saw hippies who wanted to break the norms of their parents/grandparents; the '70s, the Vietnam War and breaking the gold standard; the '80s, "greed is good", Iran-Contra, etc.; the '90s, tough-on-crime policies and Y2K fears; the '00s, Iraq/Afghanistan, the 9/11 attacks, the governmental data dragnet, Manning/Snowden/Assange; and Covid statements which did not pan out as planned...
People have good reasons to be skeptical of elites, but I think anti-corruption work is more important than trying to silence the idiot.
That's also missing the greater picture. It's giving _everyone_ a megaphone... but giving the loudest megaphones to the people who can get most people to listen to them.
You'll have noticed on the internet that there's a tendency to prioritise engaging with things you disagree with (hell, half of my HN comments are because I felt motivated to write something to disagree with some OP at some point - even this one).
What that means is the traditional small-c conservative 'village elders', 'parish priests', and 'elected officials', who hold authoritative positions not because they're controversial, but because they historically represented positions of neutrality and consensus end up with quiet megaphones, and the madmen claiming the world is flat and there's a paedophile ring run out of a pizza shop end up with the loudest megaphones.
Half of the population is below average intelligence, and giving the wrong people the loudest megaphones has a devastating effect on society.
It's not everyone, though. The FB algorithm gives preference to the most controversial person. People who are reasonable are boring and don't cause engagement, so their posts aren't displayed.
When you get away from the site, FB will start bombarding you with messages from the people you're most likely to react to, because they want you back.
...and not just "the most controversial": in a lot of cases, on both the content creators' and the sockpuppets' sides, we're not even talking about "village idiot" type real humans anymore but - in contrast to the Meta CTO's words - about extremely skillful mass manipulators sitting somewhere in Russia and hiding behind international proxies.
Not only was that tolerated in the name of profit; these individuals were able to create official-looking, completely unverified "pages" with bogus attribution to run "engagement" campaigns meant to poison and destroy Western society (and arguably succeeding in that).
How this is legally any different from complicity in treason is hard for me to comprehend.
Yeah, I think this is the crux of it. Facebook is prioritizing the most "engagement", which means prioritizing the most "reactions", which means prioritizing the most divisive and enraging content on both sides. If Facebook instead prioritized the "ah, that's nice" kind of content, we wouldn't see the divisiveness we see today.
It's also common for people to accept defaults without substantial customization. That's why the algorithms matter. Some people will deliberately seek out rage-bait even if the default algorithm delivers just-the-facts news and heartwarming pictures from friends' families. Most won't. Also, most people won't customize their settings to eliminate rage-bait if that's what gets prioritized by algorithmic defaults.
It also doesn't matter how much you customize your settings if they're inherently useless, minimally functional, or never there in the first place. A lot of the content control settings that Facebook loves to tout are practically useless.
Sure, I can hide every post from the "Controversial News" page, but I can't stop viewing content from third parties entirely. I'm only interested in first-party content - what my contacts create. Unfortunately that goes against the monetization model of Facebook.
I want a more closed loop social network and think that's the model we should return to, but unfortunately that's not where the profit/engagement is.
Yeah, but it is also human nature to fuck as much as possible, and we have rules and laws against things like rape to control those tendencies. Just because we are naturally inclined to do something does not necessarily mean that it is best for us.
I highly disagree that it's human nature to 'fuck as much as possible'.
Certainly it is the goal of some humans. I can say from personal experience that my nature isn't just to 'fuck as much as possible'. And neither is it for most people I know. And the thing that's stopping us is not just anti-rape laws.
Fucking is great, but if you have a family and young kids, you care a lot more about taking care of your family than about going to the club and fucking more people.
Yeah, you're right, not my best take. But the point I wanted to get across is that we shouldn't let our natural desires dictate what is legal or not - history has shown that time and time again.
It's about surprise... content that is surprising has more information content (from the perspective of the surprised person), which drives engagement. The problem is that when you don't make a distinction between true surprising content, and false surprising content, it's a heck of a lot easier to generate false surprising content.
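To put a number on that: here's a minimal sketch, assuming we model "surprise" as Shannon surprisal (-log2 of the probability the reader assigns to a claim); the probabilities below are made-up illustrations, not measurements.

```python
import math

def surprisal_bits(perceived_probability: float) -> float:
    # Shannon surprisal: the less likely a claim seems, the more bits it carries.
    return -math.log2(perceived_probability)

# Hypothetical numbers, purely for illustration:
print(surprisal_bits(0.9))    # ~0.15 bits - "the weather was nice today"
print(surprisal_bits(0.001))  # ~9.97 bits - an outlandish conspiracy claim
```

The catch is that surprisal is indifferent to truth: a fabricated claim can be made arbitrarily improbable-sounding at no cost, while genuinely surprising true content is scarce.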
Was going to say exactly this. Reasonable people with reasonable views have no reason to promote themselves or their views on Facebook. However, non-reasonable people with non-reasonable views promote heavily for clicks/engagement - to sell you something, or just to "idiot farm" so they can sell the idiots something later.
Facebook’s unregulated revenue model will keep ensuring this dynamic.
> Half of the population is below average intelligence, and giving the wrong people the loudest megaphones has a devastating effect on society.
I don't think humans need a lot of intelligence to avoid being gullible. The scientific method, for example, is a simple enough way of extracting truth from lies and does not require massive amounts of intelligence to apply to your observations of the world.
Generally, I don't think people believe in a "paedophile ring run out of a pizza shop" because they are stupid. They just see how unreal politicians' behaviour is from their lofty platforms and extrapolate that behaviour to domestic questions.
Working as a software engineer from a poor family, I have seen so many times my CEO (or my upper-middle-class coworkers) being awkward or plainly low-key abusive with cleaners/waiters/etc. that I can totally see how people who don't have the ability to speak directly to people in power assume things about them which are not totally true. But let's be honest: they have a reason to think this way.
> You'll have noticed on the internet that there's a tendency to prioritise engaging with things you disagree with (hell, half of my HN comments are because I felt motivated to write something to disagree with some OP at some point - even this one).
It tends to be that platforms with more disagreement have healthier discourse. I think the opposite is actually more harmful: echo chambers allow extremist ideas to grow and encourage hostility towards those who don't go along with the echo chamber.
Intelligence is a tool that can be used for good or evil. Vladimir Lenin and Mao Zedong were quite intelligent by any objective measure, but giving them megaphones resulted in horrific disasters far worse than anything caused by Facebook users so far.
> Half of the population is below average intelligence
Maybe the real problem is that everyone assumes they're in the other half. Or possibly that intelligence and wisdom are the same thing.
I guess what I'm saying is that I would generally agree with your post if it weren't for this statement. I don't think intelligence really has anything to do with the problem as even a lot of otherwise 'intelligent' people have engaged with today's bullshit conspiracy theories and nonsense.
Extra-pedantic mode: if the population is an odd number, the number of people of below-average intelligence will not equal the number of above-average intelligence.
You're right that if the set is {99,100,101}, then of course 100 falls into neither category and each category has cardinality 1, but we're talking about real-valued variables, so I don't think that's a worry. I imagine it happens 'almost never' (except where n=1).
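If it helps, here's a quick toy simulation of that "almost never" point (an assumed Gaussian sample, nothing to do with actual intelligence measurements):

```python
import random

random.seed(0)
n = 10001  # odd population size
sample = [random.gauss(100, 15) for _ in range(n)]

mean = sum(sample) / n
below = sum(1 for x in sample if x < mean)
above = n - below  # real-valued draws essentially never equal the mean exactly
print(below, above)  # roughly 50/50, but never exactly equal when n is odd
```

With real-valued draws, no one lands exactly on the mean, so the two counts always differ by at least one when n is odd - which is the original pedantic point, and also why it doesn't matter in practice.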
This is what I'm seeing also. Youtube, Facebook, etc. all prioritize engagement. It's not only the megaphone problem, it's a quicksand problem. As soon as you watch some misinformation to even try to understand what the hell anti-vaxxers are claiming, then you get a ton of related misinformation promoted to your homepage. How the hell are technically ignorant people supposed to keep up with this? Youtube and Facebook will lump you in a category and show you what similar viewers watched.
Those are all good points; I agree. I think it's not that Facebook gives the wrong people the loudest megaphones, but that human nature and the nature of the population draw us to the megaphones held by the wrong people.
What could we do about this? How could we identify the wrong people so that we could take the megaphone away from them? Who decides which people are wrong? Some of them are obvious, but some of them are not so obvious.
Maybe we could say the madmen claiming the world is flat and there's a paedophile ring run out of a pizza shop are obviously wrong. We might know Nazis are obviously wrong, but what about Antifa? What about "woke"? What about all those theories behind "group identity"? The most dangerous wrong people are the ones who hold "good intentions" (they could be self-deceiving or could be truly genuine) but bad ideas, and it's hard to discern.
History repeats itself. I suggest reading the history of China from 1930-1950 and the rise of Communist China, and then reading about the "Cultural Revolution" (1966-1976). You'll find how people with "good intentions" ended up being among the most evil in history.
How could we keep that from happening here? I don't have an answer.
> why can't people discern the difference between the idiot and the non-idiot?
Because it's not idiots that are the problem, it's bad-faith actors, and they're very good at manipulating people. In the past they'd have to do that 1:1, now they can do it at scale.
I've been in several debates where I was written off as arguing in "bad faith" when, in reality, I just didn't agree with the popular opinion on a particular subject. It seems people are all too eager to justify their own position by labeling others as acting in "bad faith".
Except in this case we know that there have been many organizations, FB pages, and fake individual FB accounts set up specifically to spread misinformation and FUD about COVID and vaccines. That's the definition of bad faith.
Certainly, from there, real regular people pass on and help spread this misinformation. Hard to say how many of those people are also acting in bad faith or have just been manipulated and scared into believing the bad information. But it seems certain that the source of much of this garbage is bad-faith actors.
>Just twelve anti-vaxxers are responsible for almost two-thirds of anti-vaccine content circulating on social media platforms. This new analysis of content posted or shared to social media over 812,000 times between February and March uncovers how a tiny group of determined anti-vaxxers is responsible for a tidal wave of disinformation - and shows how platforms can fix it by enforcing their standards.
OK, so I read through the reports here and... there's nothing there. The report focuses on the reach of these people, which is legitimate, but it also operates on a tenuous (if not entirely false) premise that the information compiled is "false", "misinformation", or similar terms. The very few (and obviously cherry-picked) examples for each supposedly nefarious actor are hardly damning. The report's authors do absolutely nothing to "debunk" the claims (and I recognize that there are claims that cannot be debunked by virtue of their design); they simply say, "This is wrong."
Forgive me for not trusting either side here, but simply saying "you're wrong" to someone else doesn't meet my criteria for believing that you're right.
I would say the most dangerous are not the bad-faith actors, but the self-deceived ones who have genuinely "good intentions" but act out "bad" consequences.
And every one of us could be one of those self-deceived ones, including you and me.
Given enough capability and knowledge of manipulation methods, bad actors can not only shape conversations and promote controversial/conspiratorial information, but also fan the flames of the backlash that makes collective reason impossible. So long as there are holes in these platforms and the platforms take a combative stance toward resolving them, this power can be had or hired. You don't need nation-state resources to pull it off at this point.
It's pretty widely acknowledged that what happens or begins on social media is now shaping the behavior of politicians and the narratives of legacy media. So if you successfully seed something on social media you get to enjoy the ripple effects through the rest of society and the media. If I have enough resources and motivation, I'm fine with that success rate being 1%, even 0.1% if it gets significant traction. And once it's out there, the imprint on those exposed really can't be undone by a weakly issued correction that never gets the reach of the original false information.
They've been able to do that since at least radio and arguably since the printing press. At worst, Facebook is an evolutionary step along that spectrum.
> The real question is why can't people discern the difference between the idiot and the non-idiot?
And here is the real problem with FB, the algorithmic feed. Normal life is pretty boring day-to-day, and doesn't trigger 'engagement'. Conspiracies, etc... cause an enormous amount of engagement. When a person is fed conspiracies all day by the engagement algorithm, even the most critical thinkers will start to drift. It works for the same reason advertising works, familiarity and repetition. The solution is never use FB, but that ship has sailed for most.
I have had decent luck reverting Facebook back to what it is for: sharing pictures of my kids with people who care to see pictures of my kids. That means every time someone shares something - no matter how funny - I permanently block the place it was shared from. Slowly, Facebook is running out of things to show me that aren't pictures of my friends' kids.
Same; it didn't really take as much as I thought it would to get it to stop showing me reshares, but now I see either group content or family/friends' original content.
I dropped all groups too. Facebook's algorithm is a terrible interface for groups - it doesn't show me everything, just what it determines might be interesting, and it doesn't have any means to help with discussion of topics.
> The real question is why can't people discern the difference between the idiot and the non-idiot?
Societally, we solve this through trust organizations. Individually, I have no way to validate the information from every expert/idiot I might come across. So is "connect the frombulator to the octanizer but watch out for the ultra convexication that might form" gibberish, or just your ignorance of the terminology in use in that problem domain? Most people don't try to figure out how to navigate each field. Heck, even scientists use shortcuts like "astrology has no scientific basis, so it doesn't matter what so-called SMEs in that field say". So you rely on trust in various organizations and peers to help guide you. These structures can often fail you for various reasons, but that's the best we've managed to do. That's why, for example, "trust the science" is a bad slogan - people aren't really trusting the science. They're trusting what other people (sometimes political leaders) tell them the science is. Add in bad-faith actors exploiting uncertainty and sowing chaos, and it's a mess.
Silencing the idiot is fine as long as you're 100% certain you're silencing someone who's wrong and not just someone espousing a countervailing opinion (e.g., Hinton's deep learning research was pooh-poohed by establishment ML for a very long time).
Facebook is not just a hosting platform; through the Facebook feed it exercises a great deal of editorial control over what posts/information get surfaced to users. So while Facebook might be giving everyone a megaphone, it doesn't give everyone the same volume. It needs to own that.
Erosion of trust in elites just so happens to also be a long-term goal of polluters, quacks, scammers, and other powerful parasites of the common wealth when they run up against government or science.
I think in general Facebook has a bias towards inflammatory posts - as do other platforms, actually including HN. Also, it's easy to blame the village idiot for everything, but I don't think Donald Trump or Alex Jones are village idiots. They are surely idiots, but they left the village quite some time ago and gained popularity before Facebook (InfoWars was founded in 1999) - although FB surely was an accelerator stage.
That said, the village idiot alone is harmless, and I think aristocracy (rule of the elite) is definitely not the solution. But it is true that the normal filters of an offline community haven't been translated online yet.
> it's missing the greater picture. it's giving _everyone_ a megaphone
I think this can be argued against, because Facebook does recommendation and algorithmic curation.
Even if Facebook didn't purposely tweak things to propagate disinformation, you could say it is easy to manipulate their algorithms to push that information disproportionately.
So for me it's a case of Facebook not doing enough to fight potential abuse on their platform.
There's an element of responsibility here, because we are more prone to some material than to others. There are primitive instincts in us, and content designed to take advantage of them is parasitic; it is manipulative and addictive in that sense.
Crazy theories, appeal to emotions, controversy, nudity, clan affiliation, and all that are ways to take advantage of our psyche.
Even a smart person is only as smart as the data readily available to them. If the only thing about gender psychology I ever heard was Jordan Peterson, because he was recommended to me, then even if I'm the smartest, most reasonable person, that is now the starting point of my understanding and thoughts about gender psychology.
So I think a platform that is optimized to show the information people are most susceptible to, and that also targets that information at the people most susceptible to it, is by design going to produce the outcomes we're seeing.
Are they, though? It seems like FB amplifies things that they think will generate more engagement and "stickiness". Sensational things that cause outrage tend to do that more than cold, hard facts. I would not at all be surprised if misinformation gets amplified orders of magnitude more than the truth.
> You guys gave them a megaphone, how do you expect society to behave?!
Considering most of humanity is... challenged when it comes to thinking critically, this should have been an entirely foreseeable outcome. I agree it's society's fault, but Facebook is part of society. They watched how their tool was being used by these people, and ENHANCED the reach of those messages because it was good for Facebook. Facebook is a microcosm of the object of its blame. Idiocy writ large in recursion.
Most, no. Everybody is blind to one perspective or another. Also, time is limited and attention is limited. Do not think that others are just stupid because their focus or knowledge does not overlap with yours.
"Those people" does not exist. It's just an illusion of your own limited perspective. We are on this together and calling people stupid not it is true, not it helps.
You're pretending that there's no difference in intelligence/knowledge/skills between individuals or groups of people. There are differences. Can we stop pretending that there's no difference between a college educated European, your average American who reads at a 7th grade level, and a 3rd world farmer who has no perspective outside their small village?
> Ubiquitous cell coverage means everyone knows what is happening everywhere now.
I think this assumes facts not in evidence. Just because someone is "connected" doesn't mean they're automatically informed. As we see in first-world countries, there are a lot of fucking morons that only listen to comfortable lies rather than uncomfortable truths.
There's also industrial production of bullshit peddled by disingenuous actors taking advantage of that fact. Fleecing rubes can be very profitable.
The very problem being discussed is Facebook trying to absolve themselves of bullshit peddling by blaming everyone else. They're blaming people for believing shit Facebook put in front of them under the guise of news. They're also fine taking money to promote bullshit as "news". Yet it's society's fault that they believed everything labeled news Facebook put in front of them.
To that effect, it's worth pointing out that in many developing nations, Facebook IS the internet. To say that this compounds all of the issues already discussed in this thread is a fairly drastic understatement.
There are still a lot of people getting their crops burned because they knew some science and managed to grow a crop while everyone else didn't - and now the neighbors think they're a witch.
That's not what the parent was claiming. The grandparent was claiming that most people are stupid. The parent was pointing out that most people are not stupid. Some people are, and many people have various biases and preconceptions that make it easier for them to be manipulated into believing misinformation.
> "Those people" does not exist. It's just an illusion of your own limited perspective. We are on this together and calling people stupid not it is true, not it helps.
I don't understand what's going on on this site. This is the second time recently I've come across someone claiming a comment didn't say something that's essentially copied verbatim mere centimeters higher on the monitor. It's basically in the same eyeful.
Hell, the commenter even added the word "stupid", which wasn't in the parent comment.
I was saying critical thinking is not as common a skill as you may think. Most people aren't stupid, but most people ACT stupid by acting without thinking.
I agree here. My observation is that most of humanity is rational but acts on limited or incorrect information. If you can provide truthful and complete information (in a digestible form), humanity will do just fine.
I stated the deficit is in critical thinking skills. Your editing the quote to make it sound as though I said all people are simply "challenged" is disingenuous.
Not saying you're wrong, but to take a slightly more charitable view on humanity: Facebook exploits well known human behaviour to amplify content.
It's (unfortunately?) human nature to share shocking things, it may have even been evolutionarily advantageous at some point. Using algorithms to exploit this behaviour at a scale never before possible is harmful to humanity. No idiocy required.
It's much worse than giving the village idiot a megaphone. Facebook (and most other socials) prioritize content to maximize engagement, and (big surprise) the village idiot maximizes engagement. Facebook is a machine tuned specifically to spread hate and bad ideas because that's what maximizes the time people spend on Facebook.
I thought of a good analogy a while back. Let's say someone walks past you and says "hi" and smiles. Let's say someone else then walks past you and punches you in the face. Which interaction maximizes engagement? Well, that's the interaction and content that social media is going to amplify.
Social media companies are the tobacco companies of technology. They make billions by lobotomizing the body politic.
True, but the financial incentives of many (most?) companies don't align with public benefits (see pollution, plastics, diet, etc.). Why is social media singled out in our demands that they act morally good instead of just profitably?
I don't believe FB's algos truly maximize profit. I think they are short-sighted.
Imagine a food company whose metrics showed the most popular food was hamburgers, so they kept optimizing and optimizing, trying to serve nothing but hamburgers. It arguably wouldn't work, and they'd be giving up tons of market share and $$$$$$.
FB (and Twitter) seem to optimize as if they believe everyone wants the same thing. Unfortunately for them, when I and lots of others try their services, we find they're shoveling stuff at us we don't want. Instead of driving our engagement, it drives us away.
There is tons of content I'd like to see on Twitter, but every time I check it - now down to about once a month - they shovel so much shit I'm absolutely uninterested in at me that I just remember why I don't look, and I leave. If they'd let me opt out of their recommendations, my engagement would go way up, and over time I'd follow more and more people. Instead they just drive me away.
The same is true for FB, though in different ways. They try to fill my newsfeed with stuff I'm not interested in, hoping to get me to spend more time. Instead it just prompted me to delete the mobile app and only use the website on desktop, since an extension helps me filter their crap out. My usage without the mobile app is probably 25% of what it was and falling. Entirely their fault for removing choice and assuming everyone wants the same thing.
The question, as always, is who watches the watchmen? We saw what happened with broadcast and telecom regulation for phone and radio/tv in the USA. ABC, CBS, and NBC held a virtual monopoly until cable news 60 years later. AT&T/Ma Bell did likewise. It was horrible and arguably retarded innovation and diversity of viewpoints for many years.
> Let's say someone walks past you and says "hi" and smiles. Let's say someone else then walks past you and punches you in the face. Which interaction maximizes engagement?
Likely the first one. Could also lead to a literal 'engagement'.
If you're optimizing for time spent in the interaction, which is what FB does, then the punch in the face will probably cause you to stay on the scene longer - whether to yell at the person, fight back, call the cops, etc.
The "hi" is returned with a "hi" back and you both continue walking.
The likely scenario is that the interaction stops the moment you are floored by the punch. You may stay on the scene longer, but the perpetrator would likely take off.
Also, answering a hi and a smile with a hi and a smile _could_ in fact be the right way to optimize for time spent in this interaction, because it has low-probability but very high-impact outcomes as far as time spent goes (dating/marriage).
> The problem is that Facebook is giving the village idiot a megaphone
What's interesting is that before Facebook, the only people who could afford a megaphone were either state-sponsored media or billionaires who owned TV stations and newspapers.
For ordinary citizens, the only way you could be heard was to write a letter to the editor of your local paper. If the state/billionaire/editor didn't like you, your views, or anything else really (your skin color, perhaps?), it would simply not get published, period.
With Facebook, a lot of that gatekeeping simply disappeared. It's interesting to see who has an interest in regulating Facebook and bringing back the "good old days" of media.
>Are we just bored and desire entertainment and drama?
Essentially, yes. We evolved so that we get a chemical hit when we engage with social drama. If you are bored, have nothing better to pay attention to, and/or have low self-control, it is a very easy way to get a quick fix.
>What's the evolutionary drive for drama anyway?
Humans evolved as pack animals, paying attention to pack drama was extremely important. Picking sides or paying attention could mean the difference between getting your next meal or being beaten to death.
Because we evolved in small packs, where information was usually relevant, we don't have a good filter, or on/off switch.
Today we are exposed to the latest and most exciting drama from around the world, as opposed to that of our tiny pack, and it is really hard to resist paying attention.
Paying attention to anything in the news or on social media is unlikely to make an impact on your life. Even the biggest topics have a very low risk of impacting you personally, but you will notice that most of them have an explicit or implicit hook that they could impact you.
People want inflammatory content like a moth wants a flame. Facebook amplifying a signal from the lunatic fringe preys upon the need of non-lunatics (or different-thinking lunatics) to argue against ideas they consider dangerous or just wrong. As a side-effect, it makes the ideas appear more mainstream, which has the effect of making the ideas more popular. This further increases the compulsion of non-lunatics to address the ideas.
I'm not sure if that qualifies as being what people 'want' or not, but it seems like it's profitable.
Ultimately, yes, but that's a rather short-sighted position to take when there's a cadre of psychologists and other highly trained people whose entire job is to entrap you further, just so someone can make (more) money.
E.g., when you buy items at the grocery store, do you consciously examine all the options, including the items on the bottom shelf by your feet, or do you just go for the items at eye level, and are thus tricked by a similar group of psychologists into buying the product you've been trained to want? And even if you, personally, do, there's a reason product companies pay supermarkets to place their products at eye/arm level - it works.
Sure, but also Facebook has a bunch of doctorate-holding engineers and psychologists dedicating hundreds or thousands of hours to figuring out a system that gets me to give my attention to Facebook, whereas I'm one dude who doesn't even have a graduate degree, who gets tired and bored and struggles to sleep sometimes.
I am not sure how it goes for the average person. Myself, I just do not go to places where village idiots tend to accumulate, like FB, or if I do (it's hard for me not to watch YouTube), I just completely ignore all that crap.
It seems like a lot of folks here allude to, though don't exactly say, that they should be in a position to decide who is an "idiot", "bad-faith", "anti-science", and so on.
Should our society have free speech, or free speech for everyone except idiots?
If you agree with the second formulation, who do you think ought to be in charge of deciding who the idiots are? Surely Mark Zuckerberg would not be your first choice.
Maybe there is a third option: no free speech for anyone, all speech must be moderated for lies and misinformation. Is that what you want? In that case, who gets to decide what is true and what is not? Surely Zuckerberg wouldn't be your first choice for that either, right? And what should happen when Facebook blocks "misinformation" that turns out to actually be truthful?
Those who want Facebook to regulate "misinformation" and gatekeep who (and what) is allowed on the site need to admit that they don't actually believe in free speech -- they believe in limited speech regulated by corporations.
Take any of these arguments about Facebook, replace "Facebook" with "printing press" and everything still makes sense, which tells you what this really is:
Cultural elites wanting to control what their perceived inferiors think, believe, and most importantly, vote for.
The same class of people who wanted to regulate the printing press in Europe during the 15th and 16th centuries are the ones who want to regulate the internet today.
To be fair there is also a large contingent of well-intentioned people who don't realize the full implications of what they are asking for.
Ironically many of those people would say they oppose the concentration of corporate power, yet they are asking a very large capitalist corporation to exercise power over one of the most fundamental freedoms.
Removal of speech is not a consequence of speech -- it's preventing speech in the first place. That's what happens when Facebook blocks or deletes "misinformation" -- they are removing the speech itself. That's not the same thing as "consequences" for speech.
Look at what HN mods do -- they ban trolls, but they don't delete what the trolls posted. It's there for everyone to see -- in fact, if you look at "dead" comments you can see flagged stuff too. In terms of free speech, that's very different from deleting the comments entirely, which is what people seem to want Facebook to do.
And for the sake of argument, even if we accept that "consequences" ought to include the right to free speech being taken away from bad actors -- who can be trusted to decide who ought to be punished? Again, surely not Facebook. Surely not the government either -- the winners of every election would punish their enemies by taking away their rights. So even if we could tell, 100% reliably, who were trolls and who were not, we still should not give any corporation or government the power to take away the right of free speech.
What should the consequences be? What are the consequences when you say something stupid to your family or friends? What should the consequences be when you knowingly lie to slander somebody?
> And for the sake of argument, even if we accept that "consequences" ought to include the right to free speech being taken away from bad actors
This already exists in the law. Just as your right to move freely is taken away in certain situations (for example due to restraining order).
The usual when you get up in front of a group and act like a jackass: social shame and ostracization. Something that's hard to do on internet platforms. Even when people are not anonymous, they have plenty of ways to "hide", and it's easy to unfollow and block people who criticize you for spreading misinformation.
So I don't know. Removal and deplatforming, IMO, is not the answer. You don't fix extremism through censorship; that just makes it worse and drives it underground.
People who support these kinds of activities are youthful and arrogant enough to lack any form of humility about things they once passionately believed to be true turning out to be incorrect.
In the '90s, eggs were thought to be as deadly as cigarettes. A bowl of Cheerios was considered vastly superior, nutrition-wise, to a plate of eggs. This is the opposite of what we know to be true today. If I had tried to argue against this on the current form of Facebook, I'd have been censored. (They also thought avocados were bad for you in the '90s.)
The elevation of a collectively determined "objective" truth over the freedom of individuals to exchange ideas is the first step towards creating an environment for authoritarianism to flourish. Subjugation of the individual to the collective has been the norm for most of history, and it's not an accident that our current prosperity emerged in the times and places where it was lifted.
> People who support these kinds of activities are youthful and arrogant enough to lack any form of humility about things they once passionately believed to be true turning out to be incorrect.
It's not just that. Usually the folks you'll see on forums like HN, or many other tech spaces, aren't the people who would have had trouble having their voice heard in a pre-internet age. They're usually the (generally not brown or black) children of wealthy families, many of them being 2nd generation scientists or engineers, and would have had privileged access to publishing houses, financial news, or high quality PSTN lines. This is why these spaces usually see so much more of the bad associated with these technologies than the good. The old status quo was very much a benefit to them, their families, and the milieu they grew up in.
Ask someone who grew up or had family in a country where PSTN over copper was a crapshoot and where sending exorbitantly expensive telegraphs and mail out of the country were really the only ways to get your voice heard, and you'll hear a very different story.
That's not to say Facebook, Twitter and other social media companies have society's best interest in mind (I doubt they do) nor that they are impartial operators (I really don't think they are), nor that they shouldn't be regulated as any other neutral communications carrier, but simply that this is one of the main reasons why tech forums tend to hate so much on social media.
If your basis for believing it's the result of a lab leak is that it would make a really juicy story, with no basis in reality, then it's an obvious untruth at the time of telling, even if it later turns out to be true.
> who do you think ought to be in charge of deciding who the idiots are?
Think about it. Engineering disciplines have mostly solved this issue. Let's take structural/civil engineering and something that affects many people: bridges. Through a combination of law, codes, and government, not just any Joe Schmoe can build a bridge. Existing bridges generally work well and can be trusted. Sometimes bad things happen, like the FIU collapse, but that's very rare.
I don't understand why there can't be a group of people, large or small, educated and from diverse backgrounds, that sets basic standards on what is and is not misinformation, with due-process mechanisms such as appeals. It's not an impossible task.
> Those who want Facebook to regulate "misinformation" and gatekeep who (and what) is allowed on the site need to admit that they don't actually believe in free speech -- they believe in limited speech regulated by corporations.
If you're going to use a third party for communication and that third party is not owned by the people (i.e. a government entity) then it follows from the above statement that you don't believe in private property rights.
There is a difference between defining absolute truth and identifying obvious lies.
If a medication is approved and then a new risk factor is identified, the claim goes from unproven to validated; but if the approval wasn't based on nothing, it was never a lie.
The covid vaccine containing a chip to track you was always a lie.
We don't need a ministry of Truth we need a ministry of obvious bullshit.
Under the -- very rocky -- assumption that a Ministry of Obvious Bullshit could still avoid scope creep into moderating truth, I still don't think this would fix things. People who actively want to spread bullshit will find a way. It's like spam vs. anti-spam, ads vs. ad-blockers.
Optimum number of murders is 0.
So we take steps to prevent murder.
This doesn't make the number of murders 0, but we take the steps because we want to reduce murder as much as possible, because we've agreed it's unacceptable.
Substitute "online misinformation postings" for "murders" above.
The point is just because something is difficult to eliminate is not a good reason to give up trying to eliminate it.
> I don't understand why there can't be a group of people, large or small, educated and from diverse backgrounds, that sets basic standards on what is and is not misinformation, with due-process mechanisms such as appeals. It's not an impossible task.
This seems incredibly naive to me. What's happening there is that we give people a list of important figures to bribe to have their speech considered information (or a competitor's speech considered misinformation).
And even if these were incorruptible humans, there are several statements currently under heavy debate as fact or fiction, such as the validity of neopronouns, whether Spanish speakers anywhere use "latinx", who the bad art friend is, whether Kyle Rittenhouse should have been convicted of murder, whether trans children exist and, if so, whether they can undergo any transitioning or puberty delay, what critical race theory is, whether the Covid vaccine rollout speed is sinister, and whether Trump lost the 2020 presidential election legitimately. And that's all just in America.
Your comment can be loosely translated to the following:
"Authoritarianism can work. It's just the wrong people were in charge. If me and other people like me had the same power as Stalin/Hitler/Mao/Mussolini/Putin, everything would be better because they were dumb, but we know better. We can create Utopia when nobody else could. We are uniquely prescient and intelligent."
The amount of arrogance and utter lack of humility is shocking.
But the blast radius of a Facebook post doesn't have the same reach, given that the majority of posts go to your explicit network of connections. Unless you're specifically referring to Facebook Groups? But then, are we certain it's different from Reddit or other forums?
Facebook Groups and Pages create ways for people to share content, triggering exponential growth (e.g., a user shares a meme to their page so that their friends see it; their friends choose to re-share; wash, rinse, repeat).
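A toy branching-process sketch of that reshare dynamic (all numbers are assumptions for illustration; r stands for the average number of onward exposures each generation of sharers produces):

```python
def total_reach(r: float, generations: int, seed_audience: int = 100) -> float:
    # Sum the audience of each reshare generation: seed, seed*r, seed*r^2, ...
    reach, wave = 0.0, float(seed_audience)
    for _ in range(generations):
        reach += wave
        wave *= r
    return reach

print(total_reach(r=0.8, generations=10))  # sub-critical: ~446, the share chain fizzles
print(total_reach(r=1.5, generations=10))  # super-critical: ~11,333, exponential blow-up
```

Groups and Pages matter because they can push the effective r above 1 for content that would otherwise fizzle out inside a personal friend network.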
I think it's not so much Facebook alone as the entire internet. The connectivity between humans has suddenly increased manyfold and reaches much wider. Imagine using a graph layout tool on a giant graph with only a few localized connections. Likely the picture will have evenly distributed nodes without much movement. But then, as you dump all these new edges onto the graph, the nodes start to move into rigid clusters separated by weak boundaries. I think this is what's happening with the red/blue, vax/antivax, etc. groups.
The internet alone doesn't connect people. Remove things like Facebook and Twitter, and how do you get this giant interconnected graph with few localized connections?
Yeah, I always hear people talking about the great "global village" where everyone is 'connected', but I have to admit I am against it. I don't want to be prank called.
Right. Prior to social media, people were vetted in many ways and in every context in which they gained an audience (e.g., earned standing in social settings and community groups, promotions at work, editors of one sort or another when publishing to a group, etc.). Audiences grew incrementally as people earned them. Social media removed all that vetting and inverted the criteria for growing an audience: sensationalism was rewarded over thoughtfulness. So one of the most important tools we've always relied on to judge information was removed. Hard to believe, as intelligent as these folks at Facebook/Meta are said to be, that they don't understand this. Feels disingenuous.
The problem is that Facebook is giving people earplugs. Biases and minority opinions get clustered together in huge echo chambers, eliminating the moderating influence of mainstream society.
This has helped both valid and invalid minority opinions get heard.
What wasn't there was critical thinking on behalf of people who were already overwhelmingly exposed to mass political marketing and had developed a pseudo-Asperger's response. I will agree for once with the Facebook exec: political philosophy has pretty much come to the conclusion that since there is no unique definition of good or bad, there is no algorithm that can make the distinction.
The problem is that Gutenberg is giving the village idiot a megaphone. Gutenberg can't say:
- Amplify your commercial business message to billions of people worldwide.
AND at the same time
- Well, it's your individual choice whether or not to listen to the village idiot.
You guys gave them a megaphone, how do you expect society to behave?!
So should there be a special tax on "megaphones" like Twitter, Facebook, or YouTube? What exactly is the legal framework under which these companies could be scrutinized? Normally the manufacturer of megaphones does not get sued when a person uses one to promote hatred on the village square.
I think the megaphone is thus more of a metaphor than it is an analogy. Or at least, like most analogies, it breaks down under even the lightest pressure. For it to be an analogy, it'd have to be a megaphone manufacturer that also brings the audience together. Maybe Facebook is the megaphone AND the village square AND then some.
That's what's challenging about this situation. We're experiencing a fairly new problem. It hasn't before been possible for a member of society to communicate with all other members of society at the same time, nor has it been possible for a member of society to get addicted to a curated feed of random (sometimes anonymous) folks spreading their ideas globally.
All of these things seem new to me:
- Global, direct communication with all members of society.
- Addictive design patterns in software.
- AI-curated news feeds based on increasing engagement.
- Anonymous conversations.
Since it’s new, society doesn’t have frameworks to think about this kind of stuff yet.
>That's what's challenging about this situation. We're experiencing a fairly new problem. It hasn't before been possible for a member of society to communicate with all other members of society at the same time, nor has it been possible for a member of society to get addicted to a curated feed of random (sometimes anonymous) folks spreading their ideas globally.
This comment could have been taken more or less word for word from the diary of a monk who lived in the 1500s.
I think scale matters, though. In the 1500s (through much of the 1900s, even), most people were still mainly exposed to the viewpoints of people and groups who were physically local to them. Your local newspaper and (more recently) local TV news was a product of local attitudes and opinions. Certainly all of those people were not a member of your "tribe", but many were, and there were limits as to how far off the beaten path you could go.
If you had some wacky, non-mainstream ideas, you self-moderated, because you knew most of the people around you didn't have those ideas, and you'd suffer social consequences if you kept bringing them up and shouting them from the rooftops. Even if you decided you'd still like to do some rooftop-shouting, your reach was incredibly limited, and most people would just ignore you.
Today you can be exposed to viewpoints from every culture and every walk of life, usually with limited enough context that you'll never get the full picture of what these other people are about. If you have crazy ideas, no matter how crazy, you can find a scattered, distributed group of people who think like you do, and that will teach you that it's ok to believe -- and scream about -- things that are false, because other people in the world agree with you. And the dominant media platforms on the internet know that controversy drives page views more than anything else, so they amplify this sort of thing.
WASHINGTON — The Supreme Court ruled on Friday that abortion providers in Texas can challenge a state law banning most abortions after six weeks, allowing them to sue at least some state officials in federal court despite the procedural hurdles imposed by the law’s unusual structure.
But the Supreme Court refused to block the law in the meantime, saying that lower courts should consider the matter.
"So, essentially, the Supreme Court left the law in effect. We were expecting to possibly see them limit the enforcement of it because that was the biggest concern that Supreme Court, Supreme Court justices seemed to have. And that enforcement, of course, is allowing private citizens to sue, under the law, anyone who aids and abeits an abortion for at least $10,000 damages, if won. And so, that's where it kind of was being targeted today. And the Supreme Court essentially put that back on the U.S. District Court, allowing that lawsuit to resume to determine the constitutionality of the law."
Or TLDR they left it to the states. Can't wait to see how the states run with that concept.
"ruled that it's okay" != "left the law in effect"
"U.S. District Court" != "the states"
Whatever a state statute may or may not say about a defense, applicable federal constitutional defenses always stand available when properly asserted. See U. S. Const., Art. VI. Many federal constitutional rights are as a practical matter asserted typically as defenses to state-law claims, not in federal pre-enforcement cases like this one.
I wouldn't blame megaphones for the fact that "idiots" use them. Nor would I expect megaphone manufacturers to dictate what messages can be amplified using them. Nor would I expect megaphone retailers to determine somehow whether a person was an "idiot" before selling them a megaphone.
If someone uses a megaphone in an anti-social manner, that's a matter for the police to handle.
Analogies are nearly useless in making an argument here. Facebook is an online platform with real-time access to users' communications, plus metrics and analysis of how it's used, which allow it to make reasonable predictions about how it will be used in the future.
Comparing it to dumb hardware is ridiculous.
Their ability to predict the negative effects of amplifying crazy creates a moral imperative to mitigate that harm. In case you don't understand: there is a difference between the platform allowing Bob to tell Sam a harmful lie, letting Bob tell 200 people who each tell 200 people..., and algorithmically promoting Bob's lie to hundreds of thousands of people who are statistically vulnerable to it.
So I think this is a breakdown of our previous mindset on the matter. I don't know what future is "right" or what the answer is, but I think it is important for us to at least recognize that in the past, a crazy person on a street corner was limited quite a bit in velocity.
The megaphone is a poor example, IMO. A far better example would be broadcast television. We're now broadcasting everyone straight into not just American homes, but homes worldwide.
So I ask, because I don't know: how does broadcast television differ from a megaphone in its requirements? What responsibility exists for broadcast television that doesn't exist for a street corner?
Is it a problem, however, if the megaphone manufacturers specifically look for people who spread misinformation, and sell them the loudest megaphones with the most reach?
FB has not directly done that, but they have consistently refused to acknowledge that selling the biggest megaphones to the people who create the most "engagement" (aka money for FB) tends to mean selling them to the types of people who generate false information and outrage.
Their publicized efforts to shut down fake accounts and pages set up specifically to spread misinformation are perfunctory - simply something for them to point at and say, "see, we're doing things to fix the problem", when they're merely playing whack-a-mole with symptoms. They know what the root of the problem is, but refuse to fix it because it's their cash cow.
Would you expect megaphone manufacturers to give souped up models capable of drowning out other megaphones to only the most controversial, destructive people?
Internally, Facebook works aggressively to combat covid misinformation (source: I work at FB). Literally most of the commonly used datasets are about it. It's easy to hate and hard to understand.
Facebook decides what to show people. They could show you your friends posts in chronological order, and/or let people have control over what they see.
But no, Facebook decides what people see. Therefore they have some responsibility for the spread of misinformation.
It doesn't get enough attention? The "algorithms" are all anyone talks about when it comes to this issue. I think people put way too much weight on them.
Once you have enough people participating in a single conversation, from all walks of life, the discourse will go to shit very quickly.
Look at Reddit as an example. Small subreddits that are focused on a specific hobby are usually pleasant and informative. Large subreddits that cater to a large audience are no better than the comment section on a political YouTube video.
And people decide to use Facebook. I am not trying to defend it, but blaming it 100% on Facebook is not fair. Even if their algorithms were perfectly tuned to amplify misinformation, there would still need to be enough people reading and sharing content for it to have an effect.
A solution could be paying for Facebook, where both the number of people and incentives would change.
The problem is that humans are never 100% rational. If the audience for Facebook were purely rational robots, then sure, you could argue that since they could just stop caring about these problematic things and the issue would go away, it's not Facebook's fault that they haven't done so.
But given we are talking about humans, once Facebook has spent considerable time and money studying human behavior and society in general, exactly in order to figure out how to maximize their own goals over anything else, I think they should take the blame when there are negative side effects to doing so. Saying "well if society just stopped caring about (e.g.) negative content it'd be fine, so it's society's fault" is misdirection at best and ignores both the concentrated effort Facebook has spent on achieving this outcome, as well as the hoops they've spent the past few years jumping through to defend their choices once people started calling them out on it.
This is why I suggested 'paying for Facebook'. Legislation could exist that simply says things with commercial interests behind them cannot be given away for free.
Even a price of $0.01, would radically change the environment on Facebook.
I seriously think that selling people's privacy is the lowest-common-denominator business model. It requires no effort for a business to sell people's data, and you can do it with almost every type of business: hotels, coffee shops, accounting firms, architecture firms, etc.
I don't choose to use WhatsApp, but I have to, because that's what my family members use and they aren't tech-savvy enough to use anything else. So no, it's not a simple choice. Once a product saturates the market, it gets very difficult to replace.
Facebook doesn't really decide what you see so much as optimize what you see to maximize your engagement. If you never engage with political content or misinformation, you generally won't see it. Once you start engaging, it will drown out everything else. What they could provide is a "no politics" option, but I wonder if anyone would use it. There was an old saying in the personalized-recommendations world along the lines of "if you want to figure out what someone wants to read, don't ask them, because they will lie." For instance, if you ask people what kinds of news they want, they will invariably check "science", but in fact they will mostly read celebrity gossip and sports.
Facebook decides what you see. That they have created an algorithm that "maximizes engagement" is just another way of saying that they've decided what you should see is what they believe will keep you most "engaged". They could choose to use a different metric -- it is entirely their choice.
Facebook has experimented with a number of different options to clean up your feed, but ultimately they never get deployed, because they all decrease engagement.
It’s not false that there is a societal problem that is not unique to Facebook.
But that sidesteps the question of what responsibility they have as a company whose profits are, at minimum, powered by that problem, if not exacerbating the problem.
“Privatize the profits, socialize the costs” is not sustainable.
> whose profits are, at minimum, powered by that problem
I don't think it's established that that's the minimum. Facebook usually argues that it's a small minority of their content and I don't see any evidence against that (it just gets a lot of scrutiny). It seems like if you magically removed all the "bad actors" they'd make just as much money.
Exactly. Running 'the world's largest vaccine information campaign' rings hollow when it's really a mitigation effort.
That's akin to saying the Exxon Valdez tragedy and subsequent clean-up made Exxon the top environmentalists of '89.
> “Privatize the profits, socialize the costs” is not sustainable.
The church (as well as the integrated state) has been doing it for thousands of years. I think treating the general population as a somewhat renewable resource to be mined/harvested has a longstanding tradition and history of being one of the most sustainable things in human history.
Depending on how tax money is spent (i.e. in ways that benefit a subset of society, rather than all taxpayers) this is perhaps the most common and longstanding tradition we human beings have.
Look at video games, particularly on mobile. I mean, they aren't even games anymore. They're just metrics-optimized psychological-trick machines to extract the most money from you $1 at a time, i.e. in-app purchases and pay-to-win. These aren't games: they're engagement bait to bring you and your wallet back each day.
Why do we have this? Because people suck and it just makes way too much money for anyone not to do it. Why didn't we have this 20 years ago? Because the technical capability wasn't there.
It's really no different here. Communication and messaging costs have gone down to essentially zero. If it wasn't FB, it'd be someone else. There's simply too much money and very little cost in engagement bait, whether or not that's the intent of the platform or product.
And yeah, that's the case because people suck. Most people aren't looking for verifiable information. They're looking for whatever or whoever says whatever it is they've already chosen to believe. That's it.
I'd say the biggest problem with FB and Twitter is link sharing, as it is such an easy way for the lazy, ignorant, and stupid to signal their preconceived notions to whatever audience they happen to have. But if Twitter or FB didn't allow sharing links, someone else would, and that someone else would be more popular.
I honestly don't know what the solution to this is.
> Look at video games, [...] I mean they aren't even games anymore. They're just metrics-optimized psychological-trick machines to extract the most money from you $1 at a time [...]
I don't think so, and I used to spend a lot of time in arcades. For one thing, everyone had to pay the exact same amount to play. Good player, bad player, whatever. There was no "free tier" to get you hooked before things suddenly got much harder, and when they did there was no way to pay more to make things easier (though toward the end some games did let you continue by pumping in more tokens). Every game was also self contained. There was no persistent state that you were in danger of losing if you didn't keep checking in day after day, week after week. Fortunately, the games were also cheap. Even when I was really poor, I could afford a dollar for several tokens that were enough to play the games I liked just about as long as I could stand. Hour-long games of Battle Zone turned into all-afternoon games of Joust. I could turn a game over to someone less skilled, go back to my apartment, eat lunch, come back, and pick up again before they'd managed to exhaust the lives I had accumulated.
Arcade games were certainly meant to be enjoyable, and to keep you playing, but they were nowhere near the dark-pattern psychological minefield that modern games - especially mobile games - often are.
I was thinking more about the late 80s to the early 90s.
Games usually were designed with an easy first level and a big ramp-up in difficulty, and yes, you could pay for continues.
No free tier of course, arcade owners certainly didn't want their machines squatted by non-paying players, but the easy start was definitely a hook, and they definitely played on the sunk cost fallacy. After you've put in so many coins, you don't want to stop there, right? You can put in a lot of coins if you are not careful. They clearly solved the "play for hours" issue too: all the games I knew had an ending after less than an hour, and you had to be really good to get there without continues.
And sure, the games were meant to be enjoyable, but so are mobile games. The only significant difference is that you own your phone, so running the game costs the publisher nothing. Arcade cabinets are expensive machines and owners don't want you to keep playing if you are not paying, so they have to balance their settings so that you don't stay too long on a single credit while still making sure you want to come back for more. Mobile games only have the second requirement.
Of course, that's only talking about video games; arcades often have prize games that are borderline gambling too.
The trend in modern arcades seems less predatory though. You can get 10 minutes of game play for a credit no matter how skilled you are, as long as you select the right difficulty. There is less of the "get hooked on the first level and spend all your coins" attitude. The counterpart is that you won't play for much longer if you are really good.
> I was thinking more about the late 80s to the early 90s.
Then you're talking about a time when arcades were already past their peak, facing competition from consoles and PCs, forced to invent or refine some of these dark patterns in a declining market.
So when you asked whether the description of games today also applied to arcades you might have meant those arcades, and I don't deny that your impressions are real or accurate, but I stand by my claim that the description does not apply to arcades in general. When you include their heyday, the majority of arcades that have existed did not have much in common with the kinds of games we've been talking about.
The solution is probably to not freak out so much and to cultivate your (metaphorical) garden. People will adapt as they always have. This exact moral panic plays out every time the economics of information changes, as far back as the printing press, which a 16th-century scientist warned would create an overabundance of information that was “confusing and harmful” to the mind [1]. Even further back, Socrates apparently warned of the dangers of writing (the danger being forgetfulness). I don't think they're strictly wrong. They just overvalue the benefits and overlook the problems of older technology. Unalloyed good is not the bar. Technology is never an unalloyed good because we are not an unalloyed good.
Yet each and every time, "but this time it's different though", "but this is unprecedented".
Yes, the world of information is bigger and more complex. Yes, we also invent bigger and more complex tools to manage our bigger and more complex world. And the world is richer for it. There have never been so many quality video games and so much insightful media as there are today.
There are still many video games that are not click boxes, and if you watch what kids are actually into, they tend to be actual games like Roblox, Minecraft, Among Us, and Fortnite. Even the small mobile games they play end up being mostly real games rather than metrics-optimized click games.
So the biggest obstacle to solving anything about this world is the stupidity of the general population. People can't stop choosing the most stupid things with their wallets and attention.
As harsh as I am on humanity, it's not entirely people's fault. We have primate emotions that have been grossly outpaced by our technological development. This is why I've become somewhat anti-technology despite still working on technology because it stretches us to lifestyles that are distinctly inhuman.
Humanity simply does not have the emotional capacity to handle the technology we create. It never has and never will, and it's just that software has greatly amplified this inability. This is why mental disorders are skyrocketing. We're building emotional prisons with technological walls and bars.
My layman's and superficial understanding of the situation is that humans do continue to evolve, but our genome is also degrading as mutations accumulate.
It's fine if everyone is stupid, or most of us choose to engage in stupid things from time to time. I paid $17 to see Transformers. That was pretty stupid, but oh well, no one got hurt.
People want to be entertained and engaged - I do not think they aim to harm. So I think we need to develop alternatives to entertain people - these alternatives can still be stupid, they just shouldn't be harmful.
“The NRA say that ‘guns don’t kill people, people kill people’. I think the gun helps. Just standing and shouting BANG! That’s not gonna kill too many people.”
- Eddie Izzard, Dress to Kill (1999)
Homo Sapiens have been murdering each other on this planet for at least 10,000 to 100,000 years, and have only used burning-powder projectile launch tubes for roughly the past 1000 years. (Poison darts launched from a blowgun are a more ancient form of killing tube.)
When convenient projectile launching killing tubes aren't available, Homo Sapiens will rapidly revert to 10,000+ year old murder methods, and thus a husband inflamed with murder-rage who just learned his wife's ovaries have been voluntarily fertilized by another man's semen will not infrequently use punches or a nearby blunt object (hammer or rock) to fracture her skull and destroy her brain function, or use his hands to crush her windpipe, or bleed her out with a knife. This has been happening essentially every year for at least the past 10,000 years. If his wife had been armed with a handheld projectile launching killing tube she could have defended herself, but women frequently don't carry projectile tubes and frequently vote for restricting access to projectile tubes, because projectile tubes are loud and scary and make them feel unsafe.
Homo Sapiens have been killing each other on this planet for at least 10,000 years, and only used nuclear weapons for roughly the last 75 years.... If his wife had been armed with an M-28 Davy Crockett firing an M-388 round using a W54 Mod 2 with a yield of 84GJ, she could easily have deterred him, but she would not be legally allowed to carry military weapons.
Maintain the purity of your bodily fluids, friend!
Izzard is from England, which has a high degree of gun control. The murder rate doesn't seem to be strongly correlated with regulation, however, which lends less credence to Izzard's conjecture. Maybe people who would murder will use whatever tool is available, or aren't concerned about breaking gun laws?
The murder rate seems a lot lower than the US. Also the "murderers aren't concerned about gun laws" line is specious; a criminal isn't concerned about laws by definition - that doesn't invalidate the reasoning behind passing a law.
Doesn't seem like China is rolling out the welcome mat for many folks.
In 2016, China issued 1,576 permanent residency cards. This was more than double what it had issued the previous year, but still roughly 750 times lower than the United States’ 1.2 million.
Well, it's partially because police salaries are based on the rate of solved cases, no? Surely there is underreporting. And the homicide rate in Canada (which includes unintentional manslaughter) isn't much higher than the reported rate in China and we're fairly well armed by international standards:
So think about this. Do you want to give up your right to protect yourself and be "protected" by the government? Or do you want to keep your individual sovereignty and the right to say "no" to the government, while also facing the consequence that there are evil people and you might get hurt?
Your link shows the US has 4x the homicide rate as the UK. Meanwhile, Japan has incredibly restrictive weapon laws (including knives and swords), and they have a fraction of the rate of everyone else.
And before you point to Switzerland, I am all for gun ownership by certified, well trained, responsible citizens. But the US doesn't have that. Either decrease access to guns, or enact 2 years of compulsory military service where you are trained to respect your weapon and know precisely when and how to responsibly use it AND store it. If you do neither, you get the US.
And in either case, we need to improve the mental wellbeing of everyone in the US by giving more people access to "free" healthcare and not stigmatizing mental health.
Some people seem to be blind to the fact that access to a firearm lowers the cost of killing (by making it easier to do so); and what lowers the cost of something will encourage that behavior at the margins. But Switzerland! Sure, they've managed to thread that needle through education and regulation. But just relaxing gun laws without counteracting that in some way will of course increase homicides (and suicides, similarly). The US is case in point.
Making it easier to kill (access to a firearm) doesn't necessarily lower the cost of killing.
The cost of killing lies in the consequences of killing.
Some people seem to be blind to the fact that lighter penalties and a defunded/broken police and justice system lower the cost of killing (by making it harder to bring murder to justice and impose the due penalty).
https://www.nytimes.com/2016/03/16/world/europe/anders-breiv...
> Your link shows the US has 4x the homicide rate as the UK.
That isn't germane. If the rate of something fluctuates without connection to the action taken to change the rate, then the action isn't effective or is confounded by other more significant factors.
Izzard's implicit conjecture is that if guns are not present, then murder is less likely (please let me know if I am misunderstanding it). Izzard's country of primary domicile is England which has a recent history of making guns less available. However, the murder rate in England appears to increase and decrease without regard to the timing of key legislation. Since a murder may or may not be performed with a gun, if murders do not decrease in the absence of guns then it follows that a gun may be a particular method but is not necessary for the commission of a murder.
I suppose a counterfactual could be asserted: the murder rate would have been even higher without England's gun laws. I suppose that is possible, and it would be plausible with more information about the natural variation in murder rates by method. Maybe something like narwhal tusks aren't a replacement for guns but have their own natural rate of usage in murders.
Title is really poor compared to the content of the article.
In any case, he is right. Look at the pattern: any large social network has these issues, which more or less seem related to how people interact. Twitter is massively toxic, and so is Reddit. Back in the day, Tumblr, which was never as huge as current social media, also used to host the kind of content Facebook gets blamed for.
Give people a platform to publish and share, and every opinion has a chance to be there.
It also doesn't have to be a massive broadcast platform, messaging platforms with small communities in the form of groups have these issues on a smaller scale. Though broadcast does make it worse.
Whenever I see toxicity, I always assume it's a young individual. I remember myself in my teens, always up for some trolling on forums.
Today, I just don't have a need to do such things. Whenever I encounter this weird behavior, I just stop interacting, because I have a feeling it's some 13-16 year old wasting my time.
The maker of the gun may not be held solely responsible for murder, but we surely have to consider if we want guns in the hands of murderers.
Facebook isn't arguing this from a neutral standpoint. They are arguing from the position of the company that stands to lose their business if society decides murderers shouldn't have guns.
But there is demonstrable research done inside FB that indicated the harmful effects it had on society; the WSJ's Facebook Files did a good job showcasing that research. There are knobs they control that could reduce anger, and thus engagement, but they have dialed them up. The WSJ wrote about that too. The FB philosophy is more engagement, however they get it.
I don't think that's true. HN is pretty non-toxic, from my perspective at least. Reddit is tolerable to me.
I think the problem is misalignment of incentives. If I'm incentivized to increase engagement, then I can think of some pretty ridiculous shit to say that will get lots of people clicking downvote and shouting at me.
I think these platforms could have been more proactive in setting those cultural norms that evolve into fruitful social interaction.
The HN algorithm will de-front page stories where the comment section starts looking like a flame war. If you look up the stories which were removed from the front page, there is quite a bit of toxicity in the comments.
I personally like that they try to play down controversies instead of optimizing for engagement from them.
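HN's actual ranking code isn't public, but a minimal sketch of a flamewar penalty along these lines might look like this (the comment/vote heuristic, thresholds, and penalty factor are all assumptions):

    def ranked_score(points: int, comments: int, age_hours: float) -> float:
        gravity = 1.8
        # HN-style time decay: newer, higher-voted stories score higher.
        base = (points - 1) / (age_hours + 2) ** gravity
        # Heuristic: comments far outpacing votes suggests arguing, not interest.
        ratio = comments / max(points, 1)
        penalty = 0.2 if ratio > 2.0 else 1.0
        return base * penalty

    # A heavily argued story ranks below a calmer one with the same points.
    print(ranked_score(points=100, comments=400, age_hours=3.0))  # penalized
    print(ranked_score(points=100, comments=80, age_hours=3.0))   # not penalized

Note how this is the inverse of engagement optimization: the same signal (lots of comments) gets treated as a reason to demote rather than promote.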
Yeah, they do and they get called out for censoring as well. People complain that their voice was suppressed. HN is a small community so this thing doesn't blow up.
Imagine a big social media platform doing it. If a topic that became a flamewar gets removed, one party will blow it up as censorship. When these posts are started by your ruling lawmakers, it can even be framed as contempt. It is an extremely tricky thing for big social media platforms to moderate content.
Thousands of years of tribalistic evolution can't be undone in just a few decades; this explains a lot about how humans interact.
That misses the point of the OP. Tumblr (as a general social media platform) had incredible amounts of bullying and toxicity. Whether it's from the left or right is a moot point.
Facebook chooses what I see while on their platform. If they didn't, I'd just see a chronological feed of my friends posts that I chose to follow as they came through without any external filtering. Going directly to friends walls shows that is not the case.
Instead, they amplify emotionally based content that they think I will react to (engagement) by studying previous interactions and don't show me things they don't agree with (censorship) even if it originated from an authoritative primary source. That doesn't sound like it originated in society, but more of a purposeful curation imposed on users, who have to conform if they want to stay. I didn't.
Yeah, I don't think someone in that role at FB is particularly qualified to talk about society and human nature in relation to social networking.
It's like listening to someone who builds, designs and optimizes production lines in cigarette factories philosophize about why people smoke and whether it is their free choice to do so.
Facebook execs are not responsible for what people think, but they aren't neutral either.
The chain of incentives ends up generating a situation where their decisions have a huge influence on society:
- Their goal is to make the company profitable, and they choose ads as the business model.
- Without viewers, there are no advertisers. So, engagement is key.
- They need to create incentives to make people both content creators and followers: share your thoughts, share your photos, and show us what you like.
- Content creation is hard and strong opinions attract people (both detractors and followers).
- A long post format doesn't work for casual engagement, and the UI is optimized for a quick scan (because that helps with engagement).
The result is short posts of shitty content with very strong opinions that create an echo chamber. Can they get out of that trap? I don't know. I've seen good quality content in smaller online communities. (for example, while HN is not small, the quality of the comments is usually better than the article itself). But, I'm suspicious that optimizing for profit contradicts content quality. Something similar happens with TV shows. TV networks increased the number of reality shows: they are cheap to produce, generate strong emotions, and have a higher immediate audience than a high-quality TV series. The high-quality TV series came from media companies like HBO or Netflix because they don't care about optimizing minute-to-minute ratings (they care more about exclusives to attract subscribers).
On the one hand people have free will to believe what they want to - and - apparently - Facebook has no influence or responsibility on that.
On the other hand Facebook is entirely in the business of selling influence to change what people believe.
The meta is that this is a piece trying to influence what people believe about Facebook's influence.
I guess that makes this meta about meta being meta about meta.
Outrage against Facebook being too influential is marketing for Facebook adverts. It's a logical PR strategy. There's a perverse incentive to do it for real, and for Facebook to cause actual harm.
It doesn't matter if anyone in Facebook actually believes that (following a perverse incentive is a good idea). All that needs to happen is for the incentives to be aligned that way. Which might literally be the famed "optimising for engagement". https://www.youtube.com/watch?v=hn1VxaMEjRU
Not FB's responsibility, if they set up and tuned an information-spreading system that promotes what's inflammatory over what's informative? Users of FB only see what FB feeds them, and that's all about how FB aligns the user's activities and characteristics to the content FB is supplying. What a total cop-out to say the problem is what people say, when FB plays such a crucial role in what people see. Before FB (yes, and other social media) amped this stuff up, a village idiot standing on a corner shouting conspiracy theories got very little attention. But on FB this kind of stuff feeds engagement, and we know how important engagement is.
Yet the guy who's slated to be FB's CTO says, don't put all this inflammatory stuff on me! Freedom of speech, you know, and just let us do our job of promoting engagement and building ever more effective ad-targeting technology!
FB shows people what they want to see. They provided the public a channel to tune into what anyone has to say and it turned out that people sought out the village idiot.
Now the public says show me what I want, but prevent everyone else from seeing what they want.
Mental health has been an issue for as long as we've known, but Facebook does have a curious way of amplifying societal problems such as this and making it worse.
"When you’re young, you look at television and think, There’s a
conspiracy. The networks have conspired to dumb us down. But
when you get a little older, you realize that’s not true. The networks
are in business to give people exactly what they want. That’s a far
more depressing thought." - Steve Jobs.
Imagine if Facebook, Twitter, YouTube, etc were run like MMORPGs. Imagine them proactively mitigating griefing, bots, brigading, etc.
John Siracusa has been making this point on Accidental Tech Podcast (http://atp.fm): Zuck's envisioned metaverse would also be a toxic hellscape. Because Zuck is ideologically opposed to moderation.
The difference, of course, is that social media sells advertising, whereas MMORPGs sell experiences.
The Fb algorithm favors engagement and controversy to the point that most people may not even see reasonable takes on a given issue. They’re not neutral.
Maybe Facebook’s graph could be designed differently, so that you have circles of relationships: family, friends, acquaintances, business relationships, interests, and everyone else. By default, the closer a relationship is to the periphery, the less it gets promoted; the smaller the circle, the higher the chance of promotion.
That way idiocy doesn’t spread high and far and instead has a limited transmission radius.
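A minimal sketch of what that circle-weighted promotion could look like, with circle names and weights invented purely for illustration:

    import random

    # Hypothetical promotion weights: inner circles spread content more readily.
    CIRCLE_WEIGHT = {
        "family": 0.9,
        "friends": 0.6,
        "acquaintances": 0.3,
        "business": 0.15,
        "interests": 0.1,
        "everyone_else": 0.02,
    }

    def should_promote(circle: str) -> bool:
        # A post reaches a viewer with probability tied to relationship closeness,
        # giving content from the periphery a limited transmission radius.
        return random.random() < CIRCLE_WEIGHT.get(circle, 0.0)

    # Rough demo: out of 1000 strangers, only ~20 would see the post.
    print(sum(should_promote("everyone_else") for _ in range(1000)))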
The medium is the message, as the saying goes; the technology we interact with shapes what kinds of interaction we have. Blaming 'people' makes no sense because 'people' cannot be changed, unless we genetically re-engineer humanity to be more accommodating to Facebook's algorithms, which they would probably prefer to actually fixing the problems of their platform.
>"At some point the onus is, and should be in any meaningful democracy, on the individual"
Viewing systemic issues that are macro at scale through an individualized lens is great for dodging responsibility and for Facebook's bottom line, but it makes about as much sense as thinking that climate change will be dealt with by everyone voluntarily shopping greener on Amazon, rather than by creating systems that are, collectively and at a mechanism level, oriented toward social good.
Facebook is trying to save the internet culture of 20 years ago. A teenage version of me would 100% support what they are trying to do. But mainstream society is not compatible with the internet mentality of back in the day.
If it makes you feel any better, I think this exec agrees with you that FB should not necessarily moderate. That said, judging by the blowback being received, it looks as though this exec's position will likely not win the day in the long run.
I'm not sure what this comment is trying to get at. It's a blanket call for Facebook to take "accountability", but for what specific actions? Private corporations will always have control over what goes on on their platforms; if the US government ran a version of Facebook, it would be a different story.
Like, there's a difference between removing material that's actually illegal and removing material that they just don't agree with as a company. I think the difference is important.
Right, I think the problem then is that we only use one word for both, while at least in the "western" world I imagine a lot of people are ok with the former but not the latter. And saying something like "you can only censor material which is explicitly illegal" is not helping either, because like you said - that's still censorship.
You've implied two separate false equivalences, here:
• That "censorship" necessarily and only means "removing obvious child pornography", AND
• that in order to "take accountability", a company must "hire a ton of people".
You haven't established why either of those equivalences is a reasonable interpretation of the parent statement. And to this casual observer, it would appear as though you are deliberately conflating extremes in a rather disingenuous fashion.
No, the poster is not implying that "'censorship' necessarily and only means 'removing obvious child pornography'". They're implying that removing child pornography is one form that censorship might take, and that entirely removing the ability to censor would remove the ability to censor child pornography. You might have valid objections to this, but it's not the same as what you've written.
As to the second point, either an organization has the manpower to defend itself from content that it is "accountable" for, or it will be unable to defend itself when necessary. It seems pretty clear to me that only organizations with decent resources would be able to moderate content in the "if you censor, you're accountable" regime.
Not disregarding your comment, but we are all both winners and losers. I get a cheap phone with amazing capabilities, but healthcare is confusing and expensive for me.
Edit: the balance is different for different people.
I worry that's where this is all going: "We've prohibited people with problematic opinions from participating in social media, but after some careful reflection, we've realized that they're still alive even so..."
Facebook is discovering what casinos, distilleries, and cigarette companies learned from their own experiences:
Sometimes, you stumble across something wildly successful that hits so hard partially because it keys into the pathological behaviors of some human beings (which can become self-destructive).
When you discover something like that, you either volunteer to regulate your interactions with your customers or the government steps in to regulate them for you.
Blame society? Facebook IS a large part of society. It's the same way religious people blame things on culture, when religion is a large part of the culture.
> Bosworth defended Facebook's role in combatting COVID, noting that the company ran one of the largest information campaigns in the world to spread authoritative information.
I find this to be such a creepy, off-putting statement. The last thing I want in my society is to have profit-driven tech corporations deciding what is and is not "authoritative information".
Especially given how time and time again they have censored "misinformation" which then proceeded to have merit.
The last thing that Facebook wants is to be forced into a position where they have to decide what is and is not "authoritative information". They are only doing it because society gives them no other choice except to shut down the whole business.
Of course they don't want to be in such a position, because it costs them money. Similarly, it costs, e.g., DuPont money to responsibly dispose of chemicals, but we're comfortable requiring them to do it or else face being shut down.
We learned our lesson with Dupont when they were poisoning water supplies. Similarly, I believe Facebook is poisoning our information supplies, which you can see in the rapid uptick in QAnon believers. Hell, they whipped themselves into enough of a frenzy to storm the Capitol building to overthrow an election.
I'm not making any political statements, truly. I'm just noting that these groups and ideas simply would not exist and perpetuate without being facilitated by Facebook's rampant misinformation problem.
> The last thing I want in my society is to have profit-driven tech corporations deciding what is and is not "authoritative information".
Most of the national press (NYT, WaPo, etc.) is profit-driven though, isn't it? They're the ones who generally report and decide what's authoritative...
I feel like the “toxic” big tech social media platforms have been a lot more impartial than any of the MSM sources these days. I don’t think the parent comment argued for that though.
That's why I think combatting fake news actually promoted it and made the world a worse place: when people hear "fake news", they've learned to doubt the "fake" part and to start thinking "this news must be so worth it that Facebook/Google/the media is combatting it to keep me ignorant of information that is vital to me".
That's how I think combatting fake news made things worse for the very people we're trying to protect: it pushed them deeper into the fakeverse (TM).
Well said. I wonder how long it will take legislation to catch up to the unregulated information highways that dominate so much of our national and global thought and perceptions!
Won’t the regulation force them to censor? I get not wanting corporations to moderate discourse, but I can’t help but feel they get shit from both ends.
Fuck you if you censor, and fuck you if you don’t.
I don't think censorship is necessary given that it isn't the only way to address the problem of misinformation. For example, providing metrics on how trustworthy a source is by reasonable metrics isn't censorship, but providing information that's useful in gauging how much credibility to assign to a certain report.
Do I think they should censor information? No. But we're all ill-informed with respect to almost all subjects, so knowing whether or not a climate report came from e.g. a group of Ivy-league researcher vs. ExxonMobil is important, and knowing whether or not information on evolution comes from PhD evolutionary biologists vs. the Young Earth Creationist Museum matters a lot. What I think we're seeing is the growing distrust of subject-matter experts, but only in select highly political domains.
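As a sketch of that labeling idea (the sources and scores below are invented for illustration; assigning real credibility scores is of course the hard part):

    # Attach a credibility label to a post instead of removing it.
    SOURCE_CREDIBILITY = {
        "peer-reviewed journal": 0.9,
        "university research group": 0.8,
        "industry-funded think tank": 0.4,
        "anonymous blog": 0.2,
    }

    def label_post(text: str, source: str) -> str:
        score = SOURCE_CREDIBILITY.get(source, 0.5)  # unknown sources get a neutral default
        return f"{text}\n[source: {source}, credibility: {score:.0%}]"

    print(label_post("New climate report released", "industry-funded think tank"))

The point being that the content stays up; the reader just gets context to weigh it with.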
I understand what you said about pressure from both ends, but Facebook is a trillion dollar company with an extremely robust AI team. I'm sure they can figure out a solution, or at least come up with a gameplan on how to work their way towards one.
I also understand a trepidation with respect to legislation considering the not infrequent tendency to under-legislate or over-legislate, but this is common in every area of law and eventually the pendulum swings slow down to a reasonable medium.
It does not matter if it’s Facebook’s fault or not, Facebook is the causal agent. More than any other social network or media company, they have cultivated the most controversy and continue to do so.
Arguing over who’s to blame is like arguing over who’s to blame for drug problems: human nature or drugs? Facebook adopts the same stance as every other morally bankrupt individual or institution, creating a convenient framework where they can justify the real and measured harm they are doing by weighing it against the demand for and profitability of their product.
Only time will tell what becomes of Facebook, but in general terrible things have a way of destroying themselves and those that surround themselves with them.
Thought experiment #1: Facebook/Instagram disintegrate overnight. During that same night, a decentralized version appears that operates on something resembling the torrent protocol where users install a small receiver/transmitter on their machine and are able to participate in something perfectly resembling FB today (without the ads). The cases of bullying and such remain. How do you go about solving the problem in that case?
Thought experiment #2: The timeline is changed such that Bitcoin is actually made into a centralized protocol with one company at the middle with perfect control. Do lawmakers go about shutting it down because it contributes to drug exchanges and black markets?
Either all evil is imposed from outside and is completely repairable once that evil is removed, or all evil is eternal and unfixable, and the best thing one can do is give in.
Nobody seems to be willing to reverse-engineer human brain architecture to find reachable local optima and the best outcomes for a given, minimally changeable species. I guess it lacks the flair.
The irony is that FB has the resources to do such a job, though. They have the largest behavioral DB on the planet.
They know us better than we know ourselves.
They could accomplish some nice Leviathan cybernetics, but they do not want that.
They want to sell that knowledge as scary tales and social-engineering advice to governments (Palantir) and as marketing to everyone with money.
Isn't this the same point as "Guns don't kill people, people kill people"?
While that statement is objectively true, the fact remains that Facebook amplifies and benefits from sensational posts.
They are in a position to moderate the worst impacts of mis-information and yet consistently appear to be falling short.
While I agree that parts of society are to blame for online toxicity, FB (and other social media companies) are certainly in a position to do a better job of managing some of the worst impacts, such as vaccine hesitancy and political misinformation.
In fact, they appear reluctant to act on it, since sensational posts and controversial topics appear to drive attention.
> Isn't this the same point as "Guns don't kill people, people kill people"? While that statement is objectively true, the fact remains that Facebook amplifies and benefits from sensational posts.
And yet simply looking at Switzerland or Vermont, countries/States with a high number of guns per capita and an extremely low rate of gun violence, tells us a different story.
I'm not sure what you're even responding to here. How does it matter that there are areas with high gun ownership and low gun violence? It remains true that a gun gives the means to more easily kill someone, which is all that the analogy needs -- unless you're arguing that guns don't make it easier to kill, in which case I don't know why so many people bother using them.
Also: Facebook could go a long ways to counter covid disinformation by removing the billions of bots and fake accounts that infest its service. But if they did that we would find out that most of their market share is, in fact, fake. And then they wouldn't be worth nearly as much money as they say they are.
If I spread business lies on Facebook then Facebook becomes part of the problem. But if I spread lies which result in people dying, then Facebook is suddenly not the problem and we frame it as the fundamentals of democracy.
The public doesn't trust politicians, the government, or its official experts. Facebook is a public forum, and the public uses it to express their mistrust. Politicians then pretend that Facebook is the reason no one trusts them, because shooting the messenger is easier than admitting that they don't have much authority any more.
Are there major problems with Facebook? Absolutely. But the motivation behind attacks like the one leveled by this journalist is transparently to deflect blame off our incompetent political establishment and onto an easy scapegoat. The truth is that if politicians want people to trust them, they're going to have to figure out how to convince those people. Making Facebook or Twitter delete your enemies' posts hasn't worked in the past and it won't work in the future. This is a free society. You don't get to replace the public's opinions just because you declared those opinions "misinformation." Maybe in China, but it just doesn't work that way here.
This dodges the real issue which is that Facebook profits from users spreading disinformation. In fact this content is more engaging so their profit maximizers will amplify the disinformation. And as Frances Haugen's testimony demonstrates, FB knows this internally and chooses their profits over their users' well-being.
What the FB exec is trying to do here is akin to oil companies cleaning sludge off ducks. "Look! We're helping! We're part of the solution!" But the ducks shouldn't have any sludge on them in the first place.
Everyone should be reminded of Frances Haugen's testimony. Two months after, it's like everyone has forgotten so a Facebook exec can blame it on society.
>A spokesperson for Meta, Facebook's parent company, said the ads comparing the US Covid-19 response to Nazi Germany, comparing vaccines to the Holocaust, and the ad suggesting the vaccine was poison went against Facebook's vaccine misinformation policies.
It seems terrifying to me that Facebook has such policies in the first place.
I agree that a shirt comparing the US Covid-19 response to Nazi Germany, or Fauci to Josef Mengele, is in poor taste, but to call a T-shirt misinformation is a stretch.
"I'm originally from America but I currently reside in 1941 Germany"
This isn't misinformation because it isn't information. It is political criticism and opinion.
Well - maybe if ads in the linear feed were not treated like every other kind of "information" and actually did not allow "Likes, Comments & Shares", I could agree with you a little more.
Can you expand on how "Likes, Comments & Shares" changes disagreeable opinions to misinformation? Is it because individuals post actual misinformation in the comments?
Not OP, but the theory was interesting to me. Allowing the engagement verbs on content like this is how Facebook measures the quality of a piece of content (quality as in, will this make people spend more time on Facebook.) No verbs, no metadata for FB's algo to work with, no way to tell one piece of content from the next.
Facebook users are mistaking engagement counts and Facebook's profit motive as a trust signal. A Nazi comparison post is spliced in between photos of your cousin's kids and a weekend trip your college roommate went on. If this same procession were flashing past your eyes instead of being scrolled, it would look like a scene from A Clockwork Orange.
Second, let's say you already believe something false (false because vaccine mandates are not actually like Nazi Germany) and Facebook is optimized to find posts from people who believe the same false thing and show it to you in the pursuit of time-on-site; Facebook is doing matchmaking to reinforce beliefs that are false. This is the same goal as misinformation. So even if it isn't misinformation, it shares the same harmful outcome as misinformation.
It's interesting that Facebook openly labels this stuff misinformation because there is some truth to your claim. These were just garbage opinions until they came in contact with Facebook's business model. The chemical reaction turned them into something worse. Facebook themselves calls it misinformation but it could only have become that with Facebook's help. A nice own-goal scored by the company.
He says that even if he spent every dollar on removing fake and misleading content, it wouldn’t fix the issue.
More and more people like myself are just… deleting Facebook. Not deleting their account. Just deleting it from their phone and never bothering to log in from anywhere.
I urge people to try it. Don’t bother jumping through the hoops Facebook give you for deleting it. Just delete it from your phone. You’ll spend a day trying to check it, realise it isn’t there, then you’ll completely forget about it. And you won’t miss it.
They are right: these aren't problems of online discourse but rather problems of a lack of critical thinking, reflection, and education in general that have become visible with digitalization. The solution isn't to create a Facebook that will regulate and rule the masses for good, but to gradually educate the masses into thinking.
This just seems like a natural side effect of the production mode we have had for some time now: selling ads as the main source of income.
Society is dumb because execs of Facebook and other mega corporations want it to be dumb. They're building walled gardens where the herd can graze in a virtual reality, fully controlled by them.
Yes, an average man has no idea how viruses work or what mRNA is, but he is not too dumb to understand that Zuckerberg and other billionaires just don't give a ** about his life. That's where all the conspiracies and denial of what comes from authorities come from.
I understand that corp execs are gonna corp exec, but I gotta admit I am still unclear, on a fundamental level, why social media is any more blameworthy for misinformation than broadcast media on a qualitative level. FB never made you any promises that what you read on it contains any truth whatsoever.
If you have some time, I very highly recommend reading An Ugly Truth (https://www.indiebound.org/book/9780062960672). That book gave me a new realization - Facebook (and other companies) are not victims of their surroundings. There are some very intentional, very impactful decisions that are made at the highest level.
I think it’s mostly that they are incentivized to let some voices be amplified on their platform. The radical right is good for business so they let disinformation fester to make a few extra dollars.
I'm sorry, but I don't see how this is any different than networks being concerned about the ratings of each show, or Hollywood about projected return on budget when creating projects.
It’s the scale of it. How innocuous it seems. How one-sided the moderation has been. Have you seen Facebook feeds? It blows my mind the amount of time people spend on that brain rot. It’s meant to be engaging and suck people in, in a way customized per individual. Mass media just can’t compete with Facebook for what it does. If you don’t see the difference, I guess we can politely agree we live in different realities.
Facebook/YouTube/whatever else posing as a ministry of truth, yeah. They absolutely do know the truth on any topic, even health and science. Surely they have the right to block anyone they see fit? (/sarcasm)
There's no neutrality. There's just the FB/Meta machine that feeds off humans.
Once again Boz has been reading a bunch of pop philosophy books and the press team left him alone with the press again.
He _may_ have made some excellent choices with Oculus (price, form-factor/feature tradeoffs). However, he still hasn't grasped the basics of communication.
His internal posts are long, rambling, and at a tangent to the point he's making. Someone told him once that allegories in stories can be a more effective means of communication. Either no one has told him he's doing it wrong, or no one he respects has. More importantly, the point he's making is normally painfully reductive, despite the 8k words implying it's well thought out.
The core problem is that he honestly believes that facebook has done the best it can. He is firmly of the school of thought that he and his team can do anything, and do it better than anyone else.
The problem is, he can't do communication, and it shows. Worse still, he can't empathise with the "other side". I don't mean sympathise; I mean understand why they are thinking the things they are.
It's not about communication. He's got a big role at Facebook, and Facebook is a big company. They cannot escape their social responsibility anymore. Hiding behind "freedom of choice, of the individual, etc." is simplistic. But sure, with that kind of mindset, Facebook will continue to make piles of money.
I'm still on Facebook, but I don't engage with political content and I unfriend acquaintances that post stupid crap. Aside from posts by close family my feed is 90% cute animal photos and videos, along with a wee bit of PC building content.
Boz is 50% right but would be 100% right if they ditched the business model that leads them to be incentivized into spying on people and feeding them misinformation they'll click on.
This is a topic that came up in the wonderful Maria Ressa's Nobel Peace Prize speech. She argued that an international coalition of governments needs to combat disinformation on social media by saying what information is the "truth", a word that came up often in her speech.
I don't think this will work at all. Ignoring the implications of government interference in private corporations, do we trust governments to be arbiters of truth?
I certainly don't, and I'm sure John Locke would have agreed. In the case of the US, the government itself was the source of much skepticism concerning COVID, and masks!
> I don't think this will work at all. Ignoring the implications of government interference in private corporations, do we trust governments to be arbiters of truth?
Excluding the obvious bad-faith actors (pretty much all on some sanctions list), some countries will dance around the truth and bend over backward to avoid offending other countries [0] [1] [2] [3].
> do we trust governments to be arbiters of truth?
You already do, might as well accept it. The government funds science, makes some decisions about education, and makes laws based on what it thinks is true. Some of it is always wrong, but it's not as wrong as idiots on Facebook. And the goverment is something that is (somewhat) controlled by the people, more so than Facebook.
Maria Ressa herself doesn't trust (some) governments to be arbiters of truth. She was found guilty of spreading disinformation by the Philippine government. She claims that she was telling the truth, and the government is corrupt and lying and attacking the free press, but authoritative sources in the Philippines have stated that she is mistaken. The Philippine government was clearly combating disinformation from online foreign-backed conspiracy theorists.
I largely agree with this, not just in the context of COVID misinformation but a lot of the stuff Meta gets flak for in general.
With respect to Instagram's effect on teens, people seem to conspicuously omit the fact that this leaked internal research showed users were twice as likely to say that Instagram improves their well-being than that it harms it. It's really not clear to me how much of this is due to Meta products themselves, versus inherent challenges people tend to experience during adolescence. And "Facebook knew Instagram was hurting teens" is reductive at best and disingenuous at worst, given that teens were twice as likely to say it benefitted them.
Similar analogies can be made with the Rohingya issue. Talk radio played a big part in inciting the Rwandan genocide. Is it right to say that talk radio was responsible for the genocide? I don't think so; the underlying social issues are mainly the cause, and radio was part of the landscape in which it played out. I think it's a similar situation with Facebook. Like radio, they were a communication mechanism in societies that were perpetrating genocide. Facebook did their best to shut it down, but suddenly scaling up moderation in a foreign language is hard. Yet people seem to genuinely think that Facebook was knowingly endorsing the genocide.
Hmmm - where is that meme of Gene Wilder from Charlie and the Chocolate Factory grinning maniacally while saying something along the lines of "Oh yes, please tell me more..."
This headline is wrong. He did not say society is to blame, he said individual people are.
Margaret Thatcher said "there is no such thing as society". By that she meant we have no collective consciousness, we are just individuals. If that is the case then we would have no desire to protect people we don't know. Spreading misinformation is not an issue. Most legislation is unnecessary.
If you do believe in society, you are part of a larger organism and you aim to protect others.
Drug dealer blames addicts for substance abuse. No shit Sherlock, but you're not accused of coming up with the filth or consuming it - you're accused of being a primary trafficker. In fact, the comparison with a dealer is mild, as Facebook is to a typical drug dealer what Walmart is to your local convenience store. Escobar has nothing on Zuckerberg.
btw, is this host a man or a woman, or some other gender? I genuinely want to know so that I can use the correct gender pronouns for this person. That host should wear a badge with preferred gender pronouns.
Finally a Facebook exec who tells it exactly how it is. Facebook should not be the information police. Individuals are 100 percent responsible for what they say and do. If someone makes a threat on Facebook, it’s not Facebook’s fault. The same goes for misinformation. People can get tricked by things, but they can also learn. Look at the traditional media: people have learned that outlets all have their own biases, and now they are no longer trusted. The same happens on Facebook. You cheer for a friend or a group for a bit, then they say something totally stupid and you stop cheering. People are not all mindless drones, no matter how many people insist “it’s everyone else who is brainwashed and dumb, not me.” It’s always “not me” who is wrong. And even if people are wrong, there should be enough faith that they will eventually learn. I love this Facebook exec’s comments and the way he brings back personal responsibility for what individuals post.
Calling it "blaming society" makes this pretty funny, like when Manson did it and the kid in the Suicidal Tendencies song does it when his mom won't bring him a Pepsi.
Facebook's self-serving algorithms are of course a scourge in this area, but he does have a point. Part of why the messaging on COVID has been so fucked is because of this very thing: spinning or tweaking the truth. Facebook does it, to increase engagement, but public officials and others also do it. People who should've just told the simple truth instead tried to gauge our response to it, and spun and tweaked the truth in an effort to "game" the response.

Just tell the truth. Because you're probably underestimating the general public, as usual, and will ultimately end up increasing the danger and impact, by two mechanisms: 1) people have incomplete or incorrect or insufficient information to act on, and/or 2) certain people (who are adults and can tell when someone is dissembling, or communicating manipulatively, a.k.a. propagandizing) start to distrust the "official story," and the cumulative effect is that they go looking for "the real truth" in all kinds of wacky out-of-the-way places and get all conspiracy-minded, and the Facebooks of the world pick up on this and amplify it in their feeds.

You want to combat this? Give them an authoritative, trustworthy source. Tell them the whole, unvarnished truth. Gauging the response, communicating to achieve a goal, well that's not informing, that's either sales or propaganda. You want to combat disinformation, start with information - all of it, without spin, without censorship.
Seemingly every disaster movie has a character who refuses to sound a warning because they don't want to start a panic, but then they ultimately cause greater loss of life or whatnot. That character is always a villain. We hate them precisely because their communication or lack thereof, has an agenda that underestimates us and ultimately ends up costing us.
Thanks for pointing this out. A lot of people on HN seem to instinctively deride Facebook, but this statements is not wrong -- Facebook is a tool for communication, and how people use it is not entirely in its control. At best you could argue that their algorithms might have promoted misinformation since it got more engagement, but I suspect the opposite was true.
Because it is a loaded question and there is no good answer.
Social media exists; it is part of our world, not a factor we can isolate, which means answering that question would be science fiction. It makes as much sense as speculating about a world without Hitler: yeah, it can make great stories, but that's it.
Maybe the answer would be "no, because a world without social media would be a different world", but in which way? No one knows.
Sure there was. All the pearl-clutchers who are telling us that Facebook has doomed society were saying the same thing about TV a few decades ago. Remember Morton Downey Jr.? Jerry Springer? Sally Jessy Raphael? They were saying that about music. Remember 2 Live Crew? People burned Beatles records for fear that this "devil's music" would destroy civilization. They were saying that about comic books. They were saying that about Dungeons and Dragons. I've spent my entire near half-century of life hearing about how everything I enjoy is going to destroy society.
Facebook can be both a problem and a boon. It certainly helps people find other people who share the same "pearl/rifle" clutching mentality - and amplifies them into echo chambers.
> He said that individuals make the choice whether to listen to that information or to rely on less reputable information spread by friends and family.
> "That's their choice. They are allowed to do that. You have an issue with those people. You don't have an issue with Facebook. You can't put that on me."
This is such nonsense.
Psychology tells us that we are susceptible to message repetition and perceived authority.
At the most basic level this is a necessary element for the survival of the species: Parent constantly tells a kid not to get too close to the edge of a cliff. If the brain wasn't wired to accept such messages without question humanity might have failed the evolutionary fitness test.
I've seen comments on this thread attributing aspects of the social media effect to the village idiot. That's also nonsense. Perfectly intelligent people who are demonstrably not idiots fall prey to these psychological effects. Once someone ascribes trust to a source --whether it is an individual, group, news organization, politician, etc.-- it is nearly impossible to make them see the errors in what they are being led to believe. It takes a particularly open mindset to be able to look outside of what I am going to call indoctrination.
In the US it is easy to identify some of these groups. Besides religious groupings, anyone who self-identifies as a "life-long Democrat" or "life-long Republican" is far more likely to accept a world view and "truths" from members of those groups. Religion, of course, is likely the oldest such resonance chamber.
Facebook and other social media outlets, along with their algorithms, have introduced segmentation and grouping at a sub-level never before possible in society. Worse than that, they allow and, in fact, are the source of, a constant bombardment of ideas and ideologies in sometimes incredibly narrow domains. This is great when you are trying to understand the difference between using synthetic vs. organic motor oil in your engine. Not so great when it makes someone descend into a deep dark and narrow hole of hatred.
That's the problem. And yes, FB and social media are absolutely at fault for enabling the constant repetition of some of the most negative, violent, and counterproductive messaging humanity has ever seen.
I have mentioned this in other related discussions. We've seen this first hand in our own family. Over the last four years or so, we watched two family members (cousins) who grew up together descend into equally extreme opposites thanks to FB. It is interesting because prior to this they didn't even have FB accounts; they each got one at the same time to keep in touch with family. Four years later, one is what I could only describe as a hate-filled Republican and the other an equal and opposite hate-filled Democrat. And 100% of this happened because FB drove these two people into deeper, darker, and more hateful holes day after day, for years. The damage done is likely irreversible.
They (FB) didn't need to do that. Yet, that's what these geniuses thought was the "right" thing to do. Brilliant.
I am not generally in favor of heavy-handed government intervention. And yet, I have no idea how else something like this could be corrected in order to make these social media companies stop being radioactive to society. We have probably wasted at least a decade optimizing for hatred. How sad.
EDIT: I was going to say "unintentionally optimizing"; however, at some point anyone with a bit of intelligence could see this was trending in the wrong direction. Not modifying the recommendation algorithms to reduce the occurrence of deep dives into dark holes of hatred is a form of intentionally promoting such results. Again, sad.
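To make the "optimizing for hatred" point concrete, here is a toy sketch. Everything in it is invented for illustration (the posts, the numbers, the "outrage" signal); it is emphatically not Facebook's actual ranking code. It just shows how ranking purely by reactions surfaces the most enraging item, and how even a crude penalty term changes the ordering:

```python
# Toy illustration only: invented posts, invented numbers, and an
# invented "outrage" signal. This is not Facebook's actual algorithm.

posts = [
    {"title": "heartwarming family photo", "reactions": 40, "outrage": 0.05},
    {"title": "measured policy analysis", "reactions": 25, "outrage": 0.10},
    {"title": "enraging partisan rumor", "reactions": 90, "outrage": 0.95},
]

def engagement_score(post):
    # Rank purely by reactions: whatever provokes a response wins.
    return post["reactions"]

def dampened_score(post, penalty=80):
    # Hypothetical alternative: charge a cost proportional to the
    # estimated outrage of the post, so rage-bait stops dominating.
    return post["reactions"] - penalty * post["outrage"]

for key in (engagement_score, dampened_score):
    ranked = sorted(posts, key=key, reverse=True)
    print(key.__name__, [p["title"] for p in ranked])
```

The point is not the specific penalty value; it's that the choice of objective function is an editorial decision, not a neutral one.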
> "Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing,"
This is just bullshit.
You can say the same thing about extracting a confession under torture: the individual "decided" to tell the truth, so why shouldn't we admit it into evidence?
Those mega-structures are controlled by billion-parameter models, and then some human goes and does an interview and says "yeah, no problem, it's your choice". That is, above all, naive and arrogant; no one can even pretend to understand how the model is impacting the social structure.
As Jacques Ellul says, the strongest unit against propaganda is the family, or small groups of individuals, because they pull each other toward the center of the group. But imagine now that each individual is exposed to unique, personalized propaganda, so the group constantly diverges. I imagine it like a group of friends holding hands in a pool while a giant influx of water rushes in between them: there is a constant force pulling them apart, so they have to keep their relationships stronger, and the force increases over time.
It is people who seek out propaganda, not the other way around. Now, however, the algorithm satisfies that search in the most satisfying way possible (within the limits of current technology).
The legislative system has always taken a long time to catch up to new technological innovations. It's certainly a problem that technology advances so rapidly now and the time scale of legislative action can't keep pace, especially given how connected the world is.
Laws no longer have to keep up only with new technologies that affect how we live, but now also with information highways that determine, to some degree, what we think. I'd much rather these highways be regulated in a way that at least answers to democracy than by private companies whose role is to drive profit.
Neither society nor Facebook is to blame. The three foundational systems of our lives have been centralized and corrupted: money, information, politics. These systems are to blame, and the answer is decentralized systems: decentralized money (crypto), decentralized information (like HN), and decentralized voting (DAOs). Once we start using healthy systems, we'll get our power back and be able to fix our problems.
If he/she moves the original coins and proves that he/she is indeed Satoshi via technical competence and social consensus, then between the immense wealth and the social status of being the creator of Bitcoin, their influence on the crypto world will be massive.
You can never decentralize power and wealth because they obey power laws, and the concentration happens organically and unspectacularly: it's simply a bunch of people agreeing that somebody is cool or that what they are saying makes sense.
It snowballs from there, and a couple of exponentials later that person is sitting on billions of dollars and a platform of 70M people.
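That snowballing is essentially preferential attachment. A minimal simulation, with arbitrary parameters and no claim to model any real network, shows how "rich get richer" dynamics concentrate followers even among initially identical accounts:

```python
import random

# Ten accounts all start with a single follower.
followers = [1] * 10

# Each of 100,000 new followers picks an account with probability
# proportional to its current follower count ("rich get richer").
for _ in range(100_000):
    winner = random.choices(range(len(followers)), weights=followers)[0]
    followers[winner] += 1

# Typically one or two accounts end up with most of the followers,
# even though all ten started out identical.
print(sorted(followers, reverse=True))
```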
Which claim? Our money is centralized via the Fed. Our information is centralized via corporate media. Our politics are centralized via Oligarchy.
These systems act to divide and conquer us, sapping our power. By switching to decentralized systems, we will be reconnected and rediscover our power. It's already happening via sites like this, crypto, etc.
Crowd-sourced content + crowd-sourced voting on visibility.
It's not perfectly decentralized of course, since it runs on a server with centralized administration, but the general structure of the data flow is many-to-many.
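For the curious, the visibility mechanism can be sketched in a few lines. The gravity formula below is the commonly cited approximation of HN-style ranking (votes divided by a power of age), not the site's actual code, and the stories and numbers are invented:

```python
# Commonly cited approximation of HN-style ranking: newer items with
# more votes float up; age drags everything down. Illustrative only.

def rank_score(votes: int, age_hours: float, gravity: float = 1.8) -> float:
    return (votes - 1) / (age_hours + 2) ** gravity

stories = [
    ("two-day-old hit", 500, 48.0),
    ("fresh decent story", 40, 1.0),
    ("fresh mediocre story", 5, 1.0),
]

# A fresh story with modest votes outranks a two-day-old hit.
for title, votes, age in sorted(stories, key=lambda s: -rank_score(s[1], s[2])):
    print(f"{rank_score(votes, age):7.2f}  {title}")
```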
- Amplify your commercial business message to billions of people worldwide.
AND at the same time
- Well, it's your individual choice whether or not to listen to the village idiot.
You guys gave them a megaphone. How do you expect society to behave?!