Facebook seems to be censoring so much because Facebook believes that most people will actually believe whatever bits of misinformation are floating around out there.
But is everyone that easily manipulated? More importantly, does everyone actually believe that they can be easily manipulated, or do they just think that everyone else is so easily manipulated, but somehow they're above the fray?
And at what point does the censorship to protect me from manipulation become manipulation itself?
Facebook is fighting a losing battle if they think they can survive a war with their own users. This is way past censoring Alex Jones. You can't possibly censor every crackpot conspiracy theorist. Actually, we're probably all crackpot conspiracy theorists in some way. We probably all believe some conspiracy about 9/11 or the NSA or elections or vaccines or masks or aliens or royal families or whatever.
The rate of censoring is almost certainly accelerating faster than Facebook's growth, and once you've been censored once, you're likely to radically curb your use of that platform. I can't imagine that FB doesn't have stats on how many people keep using FB after they've been censored just once.
FB only works when you and 99% of your social group are on there.
Facebook is removing content because that is ground they want to stand on. Removing content feels like censorship, so it lends itself to political grandstanding in their favor. Millions of people get upset and demand that Facebook handle all content neutrally, which is exactly what Facebook wanted everyone to think in the first place.
The real issue with Facebook is the editorial decisions they make about promoting some content over others. It’s totally optional and is also a powerful form of censorship. A link that shows up in no one’s “algorithmic timeline” is worthless, even if it is not actually removed from the platform.
The essential lie is that the algorithm is neutral. It’s not; it is intentionally biased to favor Facebook’s goals: increase ad inventory and the data used to target ads.
Facebook is removing content because they wanted to make money with a website, and the job of statecraft fell in their laps in the process. They didn't volunteer, they aren't qualified, nobody told them they were signing up to run a kingdom, let alone the world. So they don't produce lofty, philosophical, well-thought-out positions on free speech and human rights. They don't carefully manage the public square with the long term good of humanity in mind. They make the same expedient, best-effort, unenlightened, sounds-good-to-me decisions that literally every poorly qualified there-for-the-wrong-reasons ruler in all of history has made. Your majesty, people are dying because cats are vessels of Satan! By royal decree, kill the cats. That's all this is.
I think they are trying, and I wish they had a Thomas Jefferson on staff and saw themselves as needing one. But they don't.
It's a much bigger problem than merely content promotion and aggregation.
There are countries where Facebook is the entire internet. Maybe they tried to make it happen and maybe they didn't - but however it happened, they are effectively the Ministry of Information now for that country. They probably shouldn't be. Everyone agrees on this, even them. But like the child-kings of history, it is still the situation. Avoiding the job is simply doing it badly.
Google, Twitter, others, are in a similar boat. They find themselves effectively running branches of government now, not of countries but of the world. Nobody has ever done that before. Nobody knows how to do it correctly, least of all people that mainly want to make money on a website.
The censorship is not because they're evil. It's because one of their subjects approached them saying their stuff is getting people killed and they were like, "Aaugh, I'm just a web site! I don't deal in such matters! Get it off me!" Because they think they're a web site. A political philosopher or an expert statesman would weigh philosophically how to maximize freedom and minimize harm while understanding that freedom gets some people killed, with an eye towards avoiding atrocities. Facebook isn't thinking about the historical risks associated with censorship because they still think they're a web site.
They don't know how to drive this thing - nobody does - and they are going to drive it straight into wars and catastrophes and atrocities. At least, that's the risk. There are countries whose actual ministry of information communicates matters of public safety primarily through Facebook. Or hosted on Amazon. Or whatever. And that company is going to make some minor change intended to protect transgender people in Wisconsin or something, and accidentally massively empower an oppressive government in Africa. That's the problem space. You think you're miserable because people in California are currently effectively setting policy in Arizona? Imagine you live in India. Heavy hangs the head that wears the crown, and this might be one of the biggest crowns ever made. It would be nice if we could find a way to destroy the ring. I definitely don't see another serious answer. In the meantime, the situation is what it is. The rest of us are idiots if we sit around criticizing them for not being perfect. We should be figuring out how to help!
Regarding the Ministry of Information concept, I recently realized that one potential motivation for the large-scale end-to-end encryption agenda is to move the "keys to the kingdom" for the world's trillions of chat messages (and attached media) to end-user devices, *and* loudly signal that this shift has taken place by taking advantage of the very loud minority of security-conscious folks that implicitly trust and signal-boost E2E everywhere they find it. My theory for why this may have become existentially necessary was exactly the sort of "get it off me!" ideology you describe, except from the perspective of protecting from coordinated attacks by multiple cooperating nation states. Of course, all this now means general interest in pwning all smartphones everywhere is now that much higher :(. (Medium-size wall of text version at https://news.ycombinator.com/item?id=25522220, also referenced in https://news.ycombinator.com/item?id=27841760.)
Regarding perspective mismatches because Facebook "still thinks it's a web site", thought-experiment nitpick: do Facebook really believe this, or is their problem a) that they are expected to behave as a website by the very same governments they're (optimistically) trying to support, and/or b) that they're afraid to stand up and behave how everyone's pining for them to do because doing so would "rock the boat", so to speak, and trip all over large swathes of plausible deniability that are still comfortably nestled in shadows and grey areas?
I don't think Facebook literally thinks they're a web site -- I think they know something extraordinary is going on. Probably much better than I do. But I do know that what they set out to do, and what they found themselves doing, are very different, and the head shift has been difficult. This is why, when people come to them with questions and requests and complaints that seem a little more appropriate to ask a judge or monarch or a legislature, they make expedient and historically and politically short-sighted decisions. They don't recognize them as questions of political philosophy, opportunities to think about the right way to architect a global community. They don't even draw on the considerable existing wisdom we have about how governments should work. They fumblingly reinvent ideas from ancient Babylon so they can get back to doing what they think they do and what they want to do.
It's hard to blame them. The head shift is difficult. They didn't ask for this responsibility, and, finding themselves at the wheel of something enormous they aren't qualified to pilot, they shrink from the task. Any sane person would. I only write about it here in the hopes that we all can start taking these problems seriously. How should a global information community work?
It's not literally true that Facebook is a branch of government, either. It's just the closest analogy I have, and it's closer to the truth than where we started. Really, they are something new. Something nobody knows how to manage correctly yet. I am not the only one to understand that the stakes are high, which is why people keep passing the buck. Facebook wants the government to tell them what's expected and allowed, and the government refuses -- or acts in small-minded expedient ways, too. They don't know either.
This disinformation and censorship question is one of a hundred with deep philosophical and legal elements. What Facebook thinks about the value of free speech, how they decide disputes should be adjudicated, what they do in small cases of deception and fraud and political manipulation, actually matters a lot more to us practically than what our own laws say. This is a shame, because our legal thinkers actually have a lot more experience with being wise and just when it comes to such matters. Facebook has the power to set policy in a way no one ever has before, and they do not seem to be thinking in terms of implications for humanity. They seem to be thinking in terms of how to get people off their back because this sort of thing isn't what they do. Or perhaps more charitably, because the people coming to them with problems are so terrifyingly powerful and the problems are so terrifyingly high stakes. There have been complaints about minor changes in advertising policy throwing elections in far-flung countries. Would you want that responsibility?
This is why I say we need a Thomas Jefferson. What I really mean is, we need a political philosopher who sees the moment and its dangers and opportunities clearly, and can try to help us do things fairly and right. I don't know where you'd find one, though.
You're right that it isn't as easy as that -- people's expectations play into it. Governments', yes, and people's too. While I say Twitter should be creating a legislature and courts and a constitution, we'd all think they were insanely arrogant to do so. Even though, now that I say it, you can probably see that it is getting to the point where that's needed. But they aren't ready for it and we aren't either -- inertia and expectations. A Thomas Paine could fix that. I don't know where you'd find one of those, either.
We are at a time of great change in history. The information revolution is in full swing. The internet had been a tool for research, then a toy for nerds, then a novel technology full of opportunity. Now it is changing the way society organizes itself. Some people think these digital communities will supplant nations, or exist alongside as something equally important. I don't know. I suspect the change is as profound as the one that ended the middle ages, when we transitioned from the church as a primary social organization to nations. But living in the middle of history, who's to say?
It took us a while to figure out how to run nations, too. We really shouldn't expect too much for a generation or two at least.
My main hope in writing this is to encourage people to take the problem seriously, and think of it as belonging to the space of political philosophy -- which it does. If you work for one of these organizations, learn something about the history and philosophy of government -- why we have laws, why we have courts, why we have elections, why we have bills of rights. You are running a community bigger and more diverse than any nation, with totally novel powers and limitations, totally novel ways to be unfair and evil that you need to avoid, and you do not have the resources to do it right, nor can anyone even tell you what doing it right even looks like. It has to be invented. Get help -- from history, from scholars, from anywhere you find it. If you find Thomas Jefferson living under a rock, hire him immediately.
The rest of us are living under the feet of a giant struggling while trying to carry far too many boulders. We yelp when he steps on us -- and we should -- but we should also be trying to figure out if we can build something that will help carry that load. The boulders are there regardless and the guy is hardly up to the task. Running massive communities fairly is hard and complicated. We need wisdom and principled understanding. Attacking the overwhelmed monarch is not how you get peace and justice -- you get that by finding him advisors and engineers that can help him build it.
The situation as it stands is untenable. We need a better way. But until we have one, the situation is what it is and we should help the poor guy stuck managing it.
Political neutrality is a charitable assumption, made because even in this (best) case, the problem is obviously so horrific and difficult that we will need everyone's help and cooperation (especially including Facebook's) to navigate it humanely. You cannot live in a society this large and complex without some give and take, and affection and good faith are the glue that hold us together through those moments of giving and taking. We need it to be true that Facebook intends to do right by everyone, even those they politically disagree with.
Fortunately for us, Facebook believes in its own political neutrality and very much wants us to believe it too. It is a conceit and a fiction -- I think we all know that -- nothing in the world is ever actually politically neutral, and the behavior of these entities is, at times, over the line of how close to perfection you might expect someone making a reasonable and principled effort to get. (An understatement, I know. I still choose to interpret what is happening as a sign of distress, not malice, on Facebook's part.) But we should nonetheless take their conceit at face value, hold them to that standard, expect them to try hard to get close to that ideal, and even, when they fall short, assume they were trying and made a human error, and help and expect them to do better. It is a tremendous gift that they are trying and think it is worth it to try. Even if that effort is not principled and sincere -- even if they're just trying to make others think they're politically neutral -- it's a massive gift. Run with that. Take it for all it's worth. Believe them and help them succeed and tell them that you're willing to believe they mean well if they act, not even perfectly, but reasonably. The alternative -- a world in which they cannot win and might as well not try, in which we have an openly and unapologetically politically non-neutral entity with this much power -- is far worse.
Facebook may have a lot of power, but it is not so much that going to war with half their countrymen is a good idea -- this is true both for them and for society. Perhaps they would be within their rights to do it, but the effect would be very bad for everyone. The predictable result of that scenario -- the establishment of a competing social media option with a competing flavor of censorship -- eliminates the possibility of healthy, democratic discourse. Our traditional media is in this state right now, which is why both sides are so useless. They cannot talk to each other. They can only talk to themselves. They cannot think high-mindedly about what is best for society. They can only talk about how wrong their enemies are. In an echo chamber and a war, everyone goes crazy, no matter how right they were at the beginning or overall. We need to be able to listen to each other, and choosing to fight instead would just hurt everyone. Loud bias on two sides does not moderate towards reason -- it accelerates towards two flavors of insanity.
We need each other. If we fight, all we will do is break the system, and -- as we see in traditional media -- this isn't worth it. As we navigate society-wide issues, it is easy to see that it would be really nice to have an information infrastructure that worked, but alas, in a short-sighted effort to win an election here or an issue there, we broke it. This is unfortunate because a functioning and rational information infrastructure would be a really helpful tool to have when you need to address a bigger problem like a pandemic. Even looking just domestically, the truth is that neither side needs to win half so badly as they need to be able to trust each other, and need the wisdom that they can come up with together. But of course, this information problem is not primarily domestic. It is revolutionary and it is global. We absolutely cannot afford to fight bitterly over stupid, provincial issues while trying to think about the sort of massive political and philosophical problem that is private companies in California pretending they are not running information infrastructure in Burma. Putting that infrastructure in the center of a US-based political war is a horrific idea to contemplate.
We can't go down that road here. Can't. And even if it looks like Facebook is going down it, responding by fighting is going down it with them. We have no choice but to figure out how to get along. We need high minded wisdom, we need the ability to listen to a huge, global community and make sacrifices for each other. We need excellent vision, and we need good ideals, and we need pragmatic political understanding. Affection and trust and good faith and cooperation are too important to sacrifice. We cannot fly apart, we cannot become enemies, we cannot take offense over small things. The only way to get to where we need to be is to insist on expecting everyone to be doing their best to get along and to accept nothing less. Facebook may be politically biased, but I refuse to accept that. To the degree that it is the case, they can do better, and I believe they want to, and I will help.
The cynical attitude may be true, but the deeper truth is that we cannot afford it.
A chronological timeline is still based on an editorial algorithm. A simple algorithm, but an algorithm nonetheless. It's no less editorial than any other algorithm, like (hypothetically) putting posts in alphabetical order or whatever.
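To put it in code (a minimal sketch with invented data): every ordering is a sort, and the only editorial question is which key you sort on.

```python
# Every feed ordering is "an algorithm"; the editorial choice is just the sort key.
posts = [
    {"author": "alice", "text": "Zoo trip!", "ts": 1626200000, "predicted_engagement": 0.2},
    {"author": "bob", "text": "Hot take", "ts": 1626100000, "predicted_engagement": 0.9},
]

chronological = sorted(posts, key=lambda p: p["ts"], reverse=True)     # the "neutral"-feeling feed
alphabetical = sorted(posts, key=lambda p: p["text"])                  # the hypothetical above
engagement = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)  # the optimized feed
```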
Yes, Facebook has found itself wielding power that it could have never imagined. Nobody is prepared or qualified for that kind of power. That's why it took nearly 600 years, from the Magna Carta to the US Constitution, to formulate a reasonable contract to limit such power.
Now you have the most dangerous combination of all. The two ingredients which, when combined, have resulted in horrific atrocities.
The paradox of tyranny
1) The desire to make the world a better place
2) The power and means to actually do so
The private company defense might become a moot point. There are numerous lawsuits now alleging that these companies are state actors, some with actual receipts showing direct government communication and resulting action.
And just last week we had the White House essentially openly confirming state involvement.
> The private company defense might become a moot point. There are numerous lawsuits now alleging that these companies are state actors, some with actual receipts showing direct government communication and resulting action.
Arguably, any entity that submits articles of incorporation to any jurisdiction is an extension of that jurisdiction.
And in practice, it works out this way. Modern examples: CCP board members on top China tech cos and NSA PRISM integrations with top US tech cos…
Nice comment. I’m not sure that having a Thomas Jefferson on staff would be enough to make a difference though. The real Jefferson had one goal: build a sustainable democratic country that respects the civil rights of its citizens. Your Facebook Jefferson would still have to build an online system that prioritizes Facebook's financial health. Respecting the rights of users would be secondary.
It’s not though. Companies function with significant amounts of waste that are related to internal fiefdoms and have very little relationship to the bottom line.
All of this is also true for our actual government[0]. I have zero confidence that if our government were in charge of running Facebook/Twitter/any other social media app the results would be better.
I agree. I don't think our government is qualified to make these sorts of decisions. I don't know who is. I am not advocating for nationalization of these services.
I am only saying that these people never signed up to have to make decisions about such impactful matters. They are not political philosophers, and they are stuck making decisions of that gravity. I can be mad when they do it wrong, but I can also recognize how tragically outmatched they are. At least governments have judiciaries and cabinets and checks and balances and constitutions and stuff. These guys were just trying to make money on the internet, and suddenly human rights in China became their problem. Nobody seriously expects Twitter to have a full blown judiciary and legislature for processing bans. Nobody expects them to write a constitution which becomes a treasure of a historical document, on how to properly govern the flow of the world's conversation. But at this point, those things would actually be appropriate. It's not surprising they're struggling trying to solve the problem with algorithms. I don't think anyone could succeed at that, and they don't even realize they have the responsibility and opportunity -- they're just trying to do their best to be socially responsible and then get back to making money on the internet.
The best suggestion I have for them is to hold out their hands to humanity and say -- "Look, we have a tremendous opportunity here, and it's bigger than just us. How should we use it?"
Facebook have tried this. They have made an independent governing board that theoretically can tell Facebook what to do w.r.t. content censorship decisions.
No surprise, it appears to be staffed by people who were selected for their middle-of-the-road "you must censor a bit but not too much" type of views. It gave them a limp-wristed rap on the knuckles when they banned Trump: it said the ban was arbitrary and didn't follow the same rules enforced on everyone else, but that they still agreed with doing it.
I think what we're seeing here is what happens when you lack some sort of free speech libertarian fundamentalism. Facebook don't have to engage in "statecraft", whatever that is, any more than the designers of SMTP did. They could choose not to. They could say "we will shut down accounts when under court order to do so, end of story". Then governments who think a citizen is breaking a law about speech would have to go to court, win, and then the judge would say, here is an order requiring Facebook to shut down the account of this law breaker (which could automatically hide all content they created). All the evolved mechanisms, the checks and balances of the actual state would be in effect.
But Facebook is based in Silicon Valley and like most firms there, has systematically made deals with far-left devils in order to hire them and put them to work, often without really understanding if it's worth the cost. Does Google actually need 144,000 employees for example? It hardly seems more productive than when I started there and it had 10,000. Their "hire first ask questions later" approach inevitably leads to hiring lots of extremists and wingnuts, people who are there primarily to get closer to a nexus of power they can wield for their own political agendas. The constant dramas we see emanating from Mountain View, Palo Alto and San Francisco are the inevitable consequence.
Tech firms could fix this problem very quickly if they wanted to: just announce a renewed commitment to freedom of speech, platform principles and passive moderation. Any employee who doesn't like it can leave. Many would, but those companies are so over-staffed they'd barely notice, and the environment for those who remain would be drastically more pleasant.
The problem with that is if Facebook committed to free speech then users would post a lot of offensive content which drives away mainstream advertisers. We've already seen that happen. Facebook tightened their censorship several years ago specifically because large advertisers were leaving the platform over concerns about their ads appearing next to user generated content that negatively impacted their brands. Obviously Facebook isn't going to do anything that puts advertising revenue at risk.
That's a rather fundamental flaw in their whole business model, isn't it? Advertisers who don't want their ads appearing next to user generated content, on a social network, have missed something rather important.
It's obviously not a flaw. Facebook is highly profitable. The vast majority of user generated content is inoffensive. We're just discussing a small minority of edge cases.
The government, in theory, is bound by the first amendment. FB being run by a government bound by traditional first amendment restrictions would be worlds better than what we have now.
If the majority of people here are software engineers, would it surprise people that someone has automated the job of crafting believable bullshit? Not to mention disseminating it faster and better than we have ever been able to?
I see it as a problem that we can iterate “content” faster, identify “audience groups”, run marketing analytics, A/B test “narratives”, all to craft believable, plausible “content” and then mass broadcast it.
We’ve built systems that create content faster and better than a normal human BS filter can block.
How does this have anything to do with the First Amendment? How do free speech rules bring the balance of power back to individual human levels of filtering?
Mind you, the First Amendment is an American construct. It does nothing for things like the genocide in Myanmar, or journalist suppression, or hate crimes and the like.
> We’ve built systems that create content faster and better than a normal human BS filter can block.
Maybe so, but I just don't trust that the "BS filter" big tech has constructed will block only "BS" and not true things inconvenient to a certain strident brand of west coast morality. The NY Post story from last year certainly wasn't "BS".
I don't think algorithmic BS is anywhere near as big a risk as you think it is, and I think the risk of outright censorship is far larger than you imagine. Big tech should be a common carrier and viewpoint discrimination in moderation should be illegal.
And my answer to people who believe this is always the same - please volunteer your time as a moderator on an active subreddit of your choice, preferably one with an active political aspect.
Look, I am not unsympathetic. I have personally gone through the whole cycle - I started from "the antidote to bad speech is more speech, not censorship", to advocating for better tools to handle misinformation.
I would honestly LOVE for the world to work how I thought it did. I worry about the tools we create to clean our "gardens". Yet, without those tools I know most large communities would fail to be governed.
I am largely tired of these debates on HN where old arguments are rehashed, untempered by empirics. Heck, people should be upset that the data that can answer these questions is under NDAs.
> please volunteer your time as a moderator on an active subreddit of your choice, preferably one with an active political aspect.
Why reddit? That site is a soup of immaturity, did you expect otherwise? On reddit you have people with 5 accounts creating conversations to control narratives and make it look like you have a larger group than you do supporting your case. This is more difficult to do on FB because they do a bit of account validation and other measures.
And your example of a political forum on reddit is a great example of why we don’t want censorship. On reddit, admins ban people simply for not speaking to the narrative, or for saying something that isn't misinformation but that the admins don't like. This is literally the same problem we have with FB now. The moral of the story is: stop censoring. It’s OK if someone doesn’t say what you think is the truth; I promise the world will keep spinning.
How does that work? reddit speech is not worthy of being heard? It's too immature to even be worth experimenting with moderation?
If people are creating 5 accounts to create a narrative, well then that's the job, isn't it? Stopping those 4 fake accounts.
But that would be censorship. So what is your solution for the problem you yourself have described?
Plus, if your first option is to choose a forum where you self-select out of dealing with messy problems, then your experience is invalid for guidance on dealing with messy problems - no?
I can’t seem to see anything but a contradiction of your own purposes here.
Perhaps you see how these things are not contradictory?
> How does that work? reddit speech is not worthy of being heard? It's too immature to even be worth experimenting with moderation?
I didn't say it wasn't worthy of being heard, I said it's immature and there's zero cost to account creation and therefore you have literal children going to that site and spitting nonsense. Moderating that site is like herding cats.
> If people are creating 5 accounts to create a narrative, well then that's the job, isn't it? Stopping those 4 fake accounts.
> But that would be censorship.
Yup, banning them would. Did I say ban them? It was used as a reference to immaturity, because immature / crazy people create accounts to make a cohort that does not exist.
> So what is your solution for the problem you yourself have described?
In short, change the way these people are raised. Change what they're being taught in schools that enables this behavior. It's unacceptable to throw online temper tantrums when you don't get your way. It's also unacceptable to attempt to force others into your beliefs. So start by teaching children that, instead of raising the entitled society that we have today.
For those, seemingly like yourself, who think the world will end if a group of people start believing the world is flat, I'd say learn to ignore people.
> Plus, if your first option is to choose a forum where you self-select out of dealing with messy problems, then your experience is invalid for guidance on dealing with messy problems - no?
> I can’t seem to see anything but a contradiction of your own purposes here.
> Perhaps you see how these things are not contradictory?
How about you let me speak for myself and quit carrying the conversation forward on assumptions of your own?
Without access to specialized knowledge, like being an expert in a subject, users will either check the authority/standing of the speaker, the emotional appeal of an argument, or the underlying logic of it.
In my simple way of putting it - creating a website, or creating a post, and then having it disseminated is dirt cheap today. You can get an article onto a news website, have it referenced by a YouTube channel, and have that sent to a Twitter feed.
That alone is sufficient to account for an increase in the volume of content being created - and that volume also gets disseminated as fast as it is created, which is what accounts for the speed.
This is also without looking into the fact that people tend to use superficial traits to assess whether information is credible online.
"Yet, research shows that people rarely engage in effortful information evaluation tasks, opting instead to base decisions on factors like web site design and navigability. Fogg et al. (2003), ... They argue that because web users do not often spend a long time at any given site, they likely develop quick strategies for assessing credibility."
From: "Credibility and trust of information in online environments: The use of cognitive heuristics" (Miriam J. Metzger and Andrew J. Flanagin)
So a good looking website, with content that purports to be endorsed by known authorities, and hits the right cultural blind spots for its audience will get past their filters.
If you’re gathering information faster than your BS filter can process it, then you’ve also exceeded your capacity to process the information you're receiving. Assuming "BS filter" means some extension of comprehension.
Yes, I think so too. However, people will still consume content as long as it gives those dopamine hits. The brain makes you think you are doing something, even if it's not really comprehending what it's consuming.
Which is getting closer to the problem as I see it - the tech infrastructure has outscaled the default biological tools we are born with.
If you’re consuming information but not comprehending it, then this is hearing vs listening. This is my original problem with this “faster than BS filter” claim. If you claim that information is coming too fast for them to comprehend, then they are not comprehending it and therefore not getting the information. Which would make this all false.
I think a lot of people would prefer a Facebook that showed what your friends post in chronological order, and wistfully say so now and then.
I'm not sure, but it might have once been that way, before I used it.
There is a string you can append to the URL (?sk=h_chr) to get a chrono view, but it doesn't seem to work well; for instance it doesn't go very far back and it's not clear to me that it's complete.
If my friends were posting lots of "spam, porn, and shock videos", I think I'd rather know it, whether or not I could tolerate it.
A lot of people do say that but I frankly don’t believe them when they say it. It’s well known inside and outside of technology that customer descriptions of their own behavior are often far out of step with reality and I think people would find Facebook less engaging and fun to use if it didn’t tune what it showed them based on their perceived interests and their activity. I would guess most people are friends with people they know but don’t actually want to hear from all that much, and it wouldn’t be ideal to be put in a position to have to remove family members because their posts are boring.
> It’s well known inside and outside of technology that customer descriptions of their own behavior are often far out of step with reality
How do you get from "people may not be reliable witnesses of their own preferences" to "I know their preferences better"?
The first doesn't sound that improbable, but the second is a non sequitur, even if a popular one. If it's easy to lie to yourself, doesn't that suggest it's even easier to lie to yourself about what others think?
My previous comment suggested multiple reasons why people might not choose the chronological ordering even if they prefer it. Most people surely aren't aware of the trick, and using it limits the number of posts available. It's not clear that it shows all of them within a range either. I myself only use it about half the time, not because I don't want things in order, but because I have the sense that I'm missing something either way.
>I think people would find Facebook less engaging and fun to use if it didn’t tune what it showed them based on their perceived interests and their activity
I think an important point is that "engaging" is not at all the same thing as "fun to use". Meth is engaging, but in the long run, not that fun. The conflation of the two is useful for apologetics but not necessarily convincing to a lay individual.
>it wouldn’t be ideal to be put in a position to have to remove family members because their posts are boring
"Boring" sounds like a different sort of thing from "spam, porn, and shock videos". That kind of post does sound more plausible.
But Facebook provides controls to filter people if you want, and there is no reason that if they stopped choosing what you see that the user's controls would have to go away.
What is the rational argument that Facebook knows which family members of nearly 3 billion people are "boring", though?
The rational argument is pretty simple. They have data about what things you click on, comment on, react to, etc., and use these to show you more similar things and show you fewer things you just scroll past. I'm not sure how you argue against what seems like pretty objective data.
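To make that concrete, here's a toy sketch. Every signal name and weight below is invented for illustration; the real ranking models are far more complex and not public.

```python
from dataclasses import dataclass

@dataclass
class AuthorStats:
    """Per-author interaction rates from one user's history (invented signals)."""
    click_rate: float        # how often you click this author's posts
    comment_rate: float      # how often you comment on them
    react_rate: float        # how often you react to them
    scroll_past_rate: float  # how often you just scroll past

def engagement_score(stats: AuthorStats) -> float:
    # Hypothetical weights: reward interaction, penalize being ignored.
    return (3.0 * stats.click_rate
            + 2.0 * stats.comment_rate
            + 1.0 * stats.react_rate
            - 0.5 * stats.scroll_past_rate)

# A "boring" relative: rarely clicked, usually scrolled past -> shown less.
print(engagement_score(AuthorStats(0.01, 0.0, 0.02, 0.9)))  # ~ -0.40
# A close friend you interact with constantly -> shown more.
print(engagement_score(AuthorStats(0.4, 0.2, 0.5, 0.1)))    # ~ 2.05
```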
>I'm not sure how you argue against what seems like pretty objective data.
I commend you for crystallizing the argument which I think is indeed representative of what the industry believes about itself and many others reject.
Virtually everything in computing, or the real world, for that matter, is solving some sort of optimization problem, explicitly or implicitly.
I'd paraphrase your comments as, Facebook solves an optimization problem, they make billions of dollars doing it, so it is objectively the optimum.
There are two objections I think that capture most of the opposition.
- Real optimizers can't promise a true optimum every time; they sometimes converge on a local optimum. This local optimum may not be within any particular distance from the global optimum either.
- "Fewer things you just scroll past" is not necessarily what the *user* wants to optimize. It is not in fact what I want to optimize. I'm not sure if I made it clear, but the *existence* of things that I don't actually read is important contextual information. So is the relative order, the timing, the source...
These points are really abstract and general - virtually anything can have these two issues - they're not optimizing the right thing, and they're not finding the global optimum.
I think non technical people sometimes get angry because they intuitively feel that people claim objectivity without justification, but I think it's possible to provide a bloodless, abstract, logical, and specific critique more suitable for a software engineer's mentality, and that's what I tried to do above.
[One other general issue I thought of - optimizers can have the wrong constraints; everything in life normally has rules and limits on how you can pursue a goal. The ends don't necessarily justify the means. So this is another thing that is not covered by simply saying the optimizer produces the optimum]
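To illustrate the first objection above with a toy (the "quality landscape" here is invented): a greedy optimizer converges wherever the local slope takes it, which need not be anywhere near the global optimum.

```python
# Toy fitness landscape: a local peak at x=2 (height 3) and the global peak at x=8 (height 10).
def quality(x):
    return max(3 - abs(x - 2), 10 - 2 * abs(x - 8), 0)

def hill_climb(x, steps=200, step=0.1):
    # Greedy local search: move toward whichever neighbor looks better.
    for _ in range(steps):
        left, right = quality(x - step), quality(x + step)
        if left <= quality(x) >= right:
            return x  # no neighbor improves: converged
        x = x - step if left > right else x + step
    return x

print(hill_climb(1.0))  # ends near 2: a local optimum, nowhere near the global one
print(hill_climb(6.0))  # ends near 8: the global optimum -- the result depends on where you start
```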
> I think a lot of people would prefer a Facebook that showed what your friends post in chronological order
A lot of people say they want this... but so far every study has come back with the result of “even the people who say they want it, consistently rank their experience as lower-quality when they get it”
Quite a few people do want that. Even if they didn't, some content is outright illegal to host, so they can't take an entirely user-directed approach even if they wanted to.
I mostly don't, but there is still a disconnect here.
Who wants to not know the sort of group they belong to?
Censoring things that are not acceptable at all is different from censoring (or reordering) things that are determined to be sub-optimal for a particular person to see, that come from members of the group who are not going to be banned.
So if you go to a listing of groups, without some active curation, how do you prevent it from being that kind of content, or hate groups, or other undesirable content? The standard gets real blurry real fast.
As I contemplate the subject, I think I draw the line between a set of human-understandable rules, enforced by some combination of human and machine partners, versus a set of goals that drive a ruthless optimization process that is opaque.
So you say, but is there really? What about hate speech? Should that go there? If not, now we need to decide what is and isn't hate speech, which is itself contentious. And material "deemed indecent" could also be newsworthy, so it's not really simple to apply even this standard. It's a pretty clear trend: every supposed "free speech platform" ends up having to have active content moderation to actually make it a site anybody would want to use, and many of them still languish in obscurity or eventually shut down because it turns out most people don't want to hang out with the people who were too unpleasant for the mainstream sites to want them.
Actually, once you start to probe, I think you'll find people disagree in degrees, even though most people consider their lines in the sand clear and commonsensical.
Yes. The US has managed fine with exactly such a dichotomy. The reason it’s easy to draw a bright line at obscenity/porn is because you know it when you see it.
> What about hate speech? Should that go there?
No, but speech inciting violence is pretty easy to recognize. Again, First Amendment jurisprudence has developed clear rules for this.
It’s not illegal to post shock videos. It’s protected by the first amendment. The same is true of porn, for the most part (while “you know it when you see it” is a suggested standard for obscenity the concept differs greatly from one observer to another — see what answers you get from Los Angeles and Karachi about what’s manifestly obscene and I bet the answers are different). I still don’t think most people want to go to a site overrun with this kind of content.
Are you really attempting to justify censorship to keep porn off of the site? We’ve had algorithms for years now that can detect nudity. It’s a simple filter that can be added that allows the user to select if they want to see it or not. Much like DDG and Google image search.
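Mechanically, that kind of filter is trivial. An illustrative sketch, where `nudity_score` stands in for the output of whatever off-the-shelf image classifier you like (all values hypothetical):

```python
# A user-controlled filter over a hypothetical classifier score,
# in the spirit of DDG/Google image search's safe mode.
def visible_posts(posts, allow_nudity: bool, threshold: float = 0.8):
    return [p for p in posts if allow_nudity or p["nudity_score"] < threshold]

feed = [{"id": 1, "nudity_score": 0.05}, {"id": 2, "nudity_score": 0.95}]
print(visible_posts(feed, allow_nudity=False))  # hides post 2
print(visible_posts(feed, allow_nudity=True))   # shows both
```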
There's a difference between nudity and porn. Facebook banned the famous "Napalm Girl" photo, and then later un-banned it. Rules appear to be arbitrary and constantly changing.
Yeah, child nudity is going to be a difficult one for all algorithms, as they lack context. The only reason that picture is allowed is because of the context. If it were a random naked child in a field, it would and should be banned.
The point of this example is to poke holes in what appears to be an ironclad, simple, and consistent principle by pointing out that few advocates for it actually wish for what it implies.
I don’t find it very persuasive for people to baldly post “that’s different” when my whole idea is that the line between these things is not so bright as we like to imagine.
>the line between these things is not so bright as we like to imagine
In practice, the line exists. Whether or not it's clear to you, it is clear to Facebook, because they already act (and always will act) differently depending on their judgment of which side something falls on.
There are things that are totally beyond the pale, and then there is the practice of curating people's feeds amongst the stuff that is allowed.
Some people specifically don't like the second thing, even if it would lead to some metric declining.
What principle? Are you saying you weren't disputing the principle that there are two different kinds of things that people call "censorship", in practice and in theory?
It is a difference in kind, not degree.
You can have an arbitrary number of rules, enforced by human and machine, for acceptable behavior in a forum. I think that's basically the case on HN or reddit.
But if you have an optimizer that filters and reorders things on top of that, then any particular item that is moved or removed is not against any rule. And nobody can really say why, even if they are informed of what happened.
The latter thing is what many people, including me, think is terrible, and the distinction is not fuzzy even if you can call it all "censorship" or "curation".
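A sketch of the distinction (all rules, features, and weights below are invented): under the first regime, every removal is traceable to a named rule; under the second, a post violates nothing -- it just scores low against opaque learned weights and quietly never surfaces.

```python
# Kind 1: human-understandable rules. A removal always points back to a rule.
RULES = {
    "no_spam": lambda post: post.get("spam_score", 0.0) > 0.9,
    "no_doxxing": lambda post: post.get("contains_home_address", False),
}

def moderate(post):
    violated = [name for name, rule in RULES.items() if rule(post)]
    return ("removed", violated) if violated else ("kept", [])

# Kind 2: an opaque optimizer. Nothing is "against the rules"; the post is
# simply ranked into oblivion, and nobody can say why.
def feed_score(post, learned_weights):
    return sum(learned_weights.get(k, 0.0) * v for k, v in post["features"].items())

post = {"spam_score": 0.1, "contains_home_address": False,
        "features": {"f1": 0.3, "f2": -1.2}}
print(moderate(post))                            # ('kept', []) -- allowed by every rule
print(feed_score(post, {"f1": 0.5, "f2": 0.8}))  # -0.81 -- buried anyway
```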
I’m sorry, but my profession is vaccine hesitancy and persuasion research, and OP is just wrong. I have n=20,000 polls showing people get disinformation from Facebook that informs their decision to not get vaccinated, and hours of focus group recordings also showing this.
What about those who have never stepped foot on the platform yet are still hesitant? Not everybody is on Facebook.
Also while we're here, if you are what you say you are, add the military forcing vaccinations, people witnessing side effects while being forced to continue duty, and the network effects this has on them and their friends / family after leaving the service. Look at how many people try to go the route of court martial to avoid some vaccines only to change their mind later when they're literally sitting in jail and have no rights. Then think about how this plays out in a civilized society when you can't throw the person in jail.
> What about those who have never stepped foot on the platform yet are still hesitant? Not everybody is on Facebook.
No, but many (most?) people are. So many of "those who have never stepped foot on the platform yet are still hesitant" may well have gotten much of their hesitancy from people who are on Facebook and got much of it from there.
You can generally trace it back to someone they know on Facebook or Tucker Carlson. Dems would serve themselves better by not trying to point fingers on where to aim the ban cannon. This is only going to cause more mistrust.
Actually visit the areas with hesitancy, then come back and see if you say the same. I know some 100 people that are hesitant. Not one of them is on FB, nor listens to anything FOX puts out. It's gov mistrust fueling it. Not Tucker Carlson. That's the most ridiculous thing I've heard lately.
I’ve done fifteen 12-hour shifts at vaccine clinics, designed clinics around the country, and regularly talk with unvaccinated individuals, but thanks for the advice.
Nobody's suggesting military members have an option on the vax, but are you denying the network effects this can have? In the US, veterans make up some 7% of the population. Expand that through their network effects and we get closer to the 40% or so that are hesitant. Clearly not all military members and veterans are hesitant, so this is nothing more than a rough estimate. But it still holds: you see your friend get the shot, then have a bad reaction and ultimately leave deployment because of it, and it stays with you. You see it 2 times, and you're ready to swear off most vaccines.
How do you distinguish correlation and causation? People who are concerned about something are naturally going to search online and engage with content that confirms their biases. Do you have robust evidence that the vaccine hesitant go on Facebook and change their minds based on pro-vaccine content that is heavily promoted these days?
No, most of the pro-vaccine content doesn’t persuade people because it’s not crafted right. There are some interesting efforts in the field now with promising effects.
But being on the fence/skeptical and seeing disinformation that aligns with your fears does move someone from the hesitant to the resistant category.
The Tuskegee experiment cast a very long shadow and is a significant reason for the racial divide in vaccination rates for COVID. No research needed; the bias is built into cultural memory.
No idea why you got downvoted so much. This is very much a reason people are untrusting.
One method a Black pastor and I came up with for our county and surrounding ones was to bring the parishioners to a county vax clinic with white folks also present, and let them point into the pan of pre-drawn needles to choose which one they wanted, so they didn’t have to fear they were getting a “Black” vaccine
Most likely, I presume, it's that telling people whose livelihood depends on the internet that other people do not research everything they develop beliefs about on the internet is not a great idea if you are concerned about fake internet points.
That's an excuse, not a reason. The reason for both whites and blacks acting like idiots is lack of mastery of even kindergarten-level critical thinking among US adults.
> my profession is vaccine hesitancy and persuasion research
Can't tell if this is sarcasm or not. If true, it's scary how fast micro-fields are created and funded once there is a political purpose behind them, so that policies can be rubber-stamped as science. If false, nice troll.
The antivax movement existed long before COVID and Facebook was a huge contributor to its spread. There is nothing scary about someone researching that.
Because some people have witnessed rushed vaccines firsthand; some have been the test subjects of other vaccines. Sometimes this doesn't go well (see the anthrax vax). So some of us want to wait and allow others to be the test subjects. You definitely can't count on a Dr telling you something is safe; it didn't work out for us in the military there either.
Anecdata: I've seen several people in my circle who have never identified as anti-vax in the past, who reject SARS-CoV-2 vaccines on the grounds of "they were made too quick, we didn't have enough time for testing". Of course this argument completely falls apart when you know how vaccine development works, but it's an argument that genuinely only applies to this specific group of vaccines.
This is where politicization is so dangerous: it's reasonable to have questions in that regard, but the odds of getting bad advice go up massively when there's a billion-dollar propaganda machine pumping out false claims faster than they can be debunked. A single share on Facebook or political news can take hours to reason through: examining the concern, finding the supporting evidence, etc., even if someone is available to help. My wife has done this with her biology students and it's a really good education in how science works, but it's also a job.
> Of course this argument completely falls apart when you know how vaccine development works, but it's an argument that genuinely only applies to this specific group of vaccines.
So if this is true, then there should be no testing period, right? We should know out of the gate if a vaccine is safe or not, right? Clearly this is incorrect.
If you like testing vaccines, join the military. Then, after you're injected with various chemicals and get a 100% disability from it, let's see if you feel the same way and still attempt to force others into your line of thought.
> We're already past the testing period. The phase 3 trials of multiple vaccines are concluded and analyzed.
From the FAQ on the Pfizer vax:
"The Pfizer-BioNTech COVID-19 Vaccine is an unapproved vaccine..."
"The Pfizer-BioNTech COVID-19 Vaccine has not undergone the same type of review as an FDA-approved or cleared product. FDA may issue an EUA when certain criteria are met, which includes that there are no adequate, approved, available alternatives. In addition, the FDA decision is based on the totality of scientific evidence available showing that the product may be effective to prevent COVID-19 during the COVID-19 pandemic and that the known and potential benefits of the product outweigh the known and potential risks of the product...."
Not to mention this trial period along with vaccine development was accelerated.
Also, from the actual mouths of those hesitant, the reason is mostly gov mistrust. Which is yet again another problem of this divide Dems have been building.
> Isn't that the better option anyway, if you don't trust the government?
/me reviews history of Fauci speaking…
Nope.
When approval gets completed and the left quits trying to force the vaccine on people and implying they are too stupid to understand why they need it, when they quit trying to censor information they don't like (what do you think the #1 cause of gov mistrust is these days? how do I know the paper you posted isn't gov propaganda? you see how deep and dangerous censorship is?), maybe you’ll get more traction.
Further, what you’re implying with these types of statements is that science never makes a mistake. We both know that’s not true, don’t we? So while in aggregate the risk to an individual is limited, individually it’s their life and the only one they get. Let them make their own decisions. With delta now, we can’t get herd immunity anyway.
And while we’re here, how many people do you know that have had COVID? Also stop telling people they need an expert to explain it to them.
> I said to read it yourself or get an expert, the latter meaning one you choose.
Great! Don't tell someone to get an expert unless you want them to not listen to you. OK?
> I didn't say anything about Fauci, and I didn't say you need an expert.
I did say something about Fauci. He's an expert no?
> If that's the worry why does FDA approval matter?? this doesn't make any sense.
Because that's why some people are hesitant, lack of approval. Others like myself won't believe pretty much anything put out by the left.
> And the rest of what you said about trying to convince people isn't related to what I said.
It 100% is, you're just avoiding it because it's difficult. Just like before you weren't downvoting and yet now you are. It's pretty clear the convo didn't go the way you wanted it to.
> Great! Don't tell someone to get an expert unless you want them to not listen to you. OK?
Does the 'or' not make sense here? If you don't feel qualified, you can get anyone you trust that is. If you do feel qualified, then go read it yourself.
> I did say something about Fauci. He's an expert no?
In a way that makes no sense as a counterargument. If I say you can fix your own car or get a mechanic to do it, there's no reason for you to name some famous mechanic you hate and act like I meant them.
> Just like before you weren't downvoting and yet now you are.
It's impossible to downvote replies to your own comments.
> It's pretty clear the convo didn't go the way you wanted it to.
This conversation just confuses me. All I wanted to do is explain that the vaccine is tested. And there are plenty of non-'left' people that can confirm the numbers and the underlying science, if you're honestly looking for the truth.
> Does the 'or' not make sense here? If you don't feel qualified, you can get anyone you trust that is. If you do feel qualified, then go read it yourself.
OK so I'll try again. If you want someone to listen to anything you have to say, do not start off implying they need someone to interpret information for them. Fix your language or not, I don't really care anymore.
> In a way that makes no sense as a counterargument. If I say you can fix your own car or get a mechanic to do it, there's no reason for you to name some famous mechanic you hate and act like I meant them.
Famous mechanic? Just curious, can you list any famous mechanics that anybody would know? Anyways, since this one is too difficult for you (see how offensive that is?), I'll explain more. Last year, how many times did these people step on their own toes when speaking to the public? That mask vs no mask thing early on in the pandemic. What became clear is that these "experts" will say whatever is needed to get people to do what they want. Be it get vaccinated or stay indoors. They lied to try to control people, and destroyed all the trust they had on the way.
> This conversation just confuses me. All I wanted to do is explain that the vaccine is tested. And there are plenty of non-'left' people that can confirm the numbers and the underlying science, if you're honestly looking for the truth.
It is tested in that we've stuck it in arms and most people don't die. Please explain to me in excruciatingly great detail what will happen to our bodies as a result of this shot in 5 years. When you can answer that question, then it's tested.
Vaccines have saved hundreds of millions of human lives. Human lives are valuable. Refusal to get vaccinated against things like measles, polio, smallpox, or COVID-19 cause a lot of unnecessary deaths, which destroys a lot of valuable things.
Just as spending on loss prevention in stores can easily produce a net positive impact on the store's balance sheet, reducing the rate of unnecessary destruction of human life (or other forms of permanent damage short of death) from diseases for which vaccines exist produces net positive impacts on the tax rolls/life insurance balance sheet/quality of life.
I guess "produce profit and avoid unnecessary misery and harm" is a political purpose, but I don't understand how anyone could speak derisively about it.
The thing that really hammers home that there's something to study here is that we basically eliminated polio through vaccination campaigns, and we did this in my lifetime. The Global Polio Eradication Initiative was launched in 1988 (one year after I was launched), and since then the number of annual cases has dropped by more than 99.9%, from roughly 350,000/year to a handful. 2% of the people that contracted polio suffered paralysis, which in the worst cases meant living in an iron lung to keep breathing.
Before last year, I would have said that the clear and obvious harms of polio were the reason so many people got vaccinated that polio is basically eradicated. But since the start of COVID, over 10 million humans have died miserable deaths from COVID, yet so many people reject the vaccine.
There's an extremely grave threat, but there's also an easy way to gain near-complete invincibility to that threat, yet tens of millions of people in just the US are refusing invincibility to COVID. 99.2% of US COVID deaths now are among the unvaccinated [0]! 99.2%!!! Clearly there's something going on to make so many people make such an irrational choice, and that's worth studying, as the benefit of getting a few more people to embrace invulnerability is worth so much more than the cost of funding some research and outreach.
> since [1988] the number of annual [Polio] cases has dropped by 99.9%, from 350,000/year to a handful
And what's more, that's not even the impressive part of the curve in terms of magnitude. Polio was once so commonplace that the idea of being anti-vax would have been completely absurd to everyone just 40 or 50 years ago, when someone crippled by childhood paralysis (of which polio is the most common cause) lived on basically every city block. And even in the White House, from 1933 to 1945.
Vaccine hesitancy is largely irrational; however, polio and COVID-19 are hardly comparable. Polio has a 2-5% case fatality rate for children and young adults, whereas the COVID-19 IFR for the same age group is close to 0. We're talking about multiple orders of magnitude difference in relative risk.
There's a global pandemic killing thousands of people and there's a sizeable portion of the American population spreading misinformation that's causing a significant number of people to not get the vaccine, in turn endangering both themselves and others.
It seems obvious to me that there would be people researching how to either stop this misinformation being spread, or educate people enough to stop believing it. This isn't some conspiracy theory - this is just sane people doing sane things in response to a very real threat.
It’s not like I wanted to do this; it was more a moral obligation. When a former Ebola colleague texted me last February and said he was putting the team back together, I had a small panic attack knowing this bullshit cycle of unmitigated spread and disinformation was about to be my life again for 2+ years. On our first Zoom, one guy just banged his head on his desk for a solid 2 minutes to give us a bit of comic relief.
Doesn’t even pay well - like at all. Dunno why you got downvoted so much.
Actually that's the average number of DAILY deaths over the past 7 days, not the total number of deaths in the past 7 days. As a lot of people take weekends off, reporting of new cases and deaths decreases sharply on weekends and increases on Mondays, but in reality people are not dying at a lower rate on weekends. That's why the stats are averaged over 7 days.
For reference, there are about 15k homicides per year in the US, or about 41 per day, which is about 13% of the 316 daily COVID death rate reported by the CDC (and on the front page of the NYTimes site).
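To make the arithmetic in the last two comments concrete, here's a quick sketch; the daily report counts are made up purely to show the weekend dip, and only the 15k/year and 316/day figures come from the comments above:

```python
# Hypothetical week of reported COVID deaths, Mon..Sun.
# The weekend dip is illustrative, not real CDC data.
daily_reported_deaths = [380, 410, 395, 370, 340, 150, 90]

# Averaging over 7 days smooths out the weekend reporting artifact:
seven_day_avg = sum(daily_reported_deaths) / len(daily_reported_deaths)
print(f"7-day average: {seven_day_avg:.0f} deaths/day")  # ~305/day

# The homicide comparison from the sibling comment:
homicides_per_day = 15_000 / 365            # ~41/day
covid_deaths_per_day = 316                  # CDC figure quoted above
ratio = homicides_per_day / covid_deaths_per_day
print(f"homicides are {ratio:.0%} of the COVID death rate")  # ~13%
```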
The issue is an increasing infantilization worldwide, where some group thinks it needs to "protect the others", assuming they are the single source of truth and that it's obvious they are. It's a well-known bias which for some reason aligns with the current zeitgeist. Like everything before it, this too shall pass at some point.
Institutions of ordered information (papers, publishers, TV) have been disrupted by a new tech: the internet.
What we see now is a knee-jerk reaction to reestablish order by the establishment.
This happened too with the creation of the Gutenberg printing press. New tech disrupted established information sources.
Roughly 60 years after the printing press's creation, a relatively minor religious squabble ended up in a huge conflict that engulfed most major powers in Europe.
Truth is power. The institutions that control the truth will fight to maintain their power.
While I don't disagree about the knee-jerk reaction, the Gutenberg comparison isn't quite apples-to-apples:
When Gutenberg made the press, access to its output had a strong filter: most people couldn't read, and anyone who could read was highly educated for the times. Social media doesn't have a filter like that, and higher levels of education generally correlate with lower levels of belief in things like conspiracy theories.
Also keep in mind that the T+60 years conflict you're referencing, while greatly assisted by the printing press, was also greatly pushed along by the masses being told what to think by their preachers or by powerful elites looking to destabilize the status quo to their own benefit. The peasants of the Peasants' War weren't reading criticisms of the Catholic Church and coming to their own conclusions; they were given emotional polemics by people like Storch, sparking extreme levels of fanaticism.
So, yes, there is a lot in both situations about power struggles with the established order, but most people are being influenced more by emotionally charged arguments and tribal loyalties than they are by increased access to level-headed rational presentations of an issue.
> Also keep in mind that the T+60 years conflict you're referencing, while greatly assisted by the printing press, was also greatly pushed along by the masses being told what to think by their preachers or by powerful elites looking to destabilize the status quo to their own benefit. The peasants of the Peasants' War weren't reading criticisms of the Catholic Church and coming to their own conclusions; they were given emotional polemics by people like Storch, sparking extreme levels of fanaticism.
That part seems very much apples-to-apples with what's happening now.
That... Yes, thanks for pointing that out. Took me a second to realize I'd somewhat argued around my initial point. I suppose my initial point was simply that with the printing press, it all still required a direct middleman: most people still didn't have much greater access to the required material to make up their own minds. These days, with the internet, that access actually exists, yet most people aren't really seeking it out. Possibly because, given that access, people are more likely to feel they've made an informed choice even if the more comprehensive information goes unused? Maybe?
I'm not sure, but there's a different dynamic at play. Both "revolutions" had (and have) great potential to allow individuals to make more informed choices, but in the case of Gutenberg the educational bottleneck to get there was still narrow... These days, for some reason, the bottleneck is more of a self-imposed barrier.
Although education is still a factor: I try to read news sources from the full range of points of view, but they all may report on something like the latest bit of research-by-PR-release differently. I have enough of an education to know I should seek out the research publication itself, and I understand how to approach a formal writeup of research well enough to give me at least a low-pass filter against poor quality work. I have enough of an education to be skeptical of simple solutions to complex problems and instead think about edge cases and failure modes or poor incentive structures.
I can absolutely still be wrong, but if I'm wrong, I will at least have failed after going through a more rigorous process in an attempt to figure things out. I'm also more likely to ultimately accept a result that runs counter to my initial-- perhaps emotionally charged-- response on an issue as a result.
And perhaps of equal importance at a time when tribal loyalty often demands complete condemnation of anyone outside the group: I understand enough to realize that people with another viewpoint aren't inherently bad/evil/anti-patriotic, because reasonable people can disagree; less reasonable people are still complex and not one-dimensionally defined by a particular belief; and lines between beliefs are just that-- points on a spectrum rather than clear-cut, all-or-nothing lines in the sand.
I don't know... Maybe that's just me patting myself on the back. But even if educational institutions can be inadvertently biased towards specific viewpoints, I still believe they expose people to many more ways of looking at something from multiple directions, in a less emotionally charged venue than social media on the internet. I've had teachers that I consider to be the model for how that should occur, when I found out afterwards that the points of view they were encouraging me to consider went completely counter to their own strongly held beliefs.
Absent straw men, you aren't going to get anything near that approach from whatever talking head you're watching on TV or YouTube, or captioned meme photo that comes across your social media feed.
What institutions and who's the establishment? Facebook is the censor but also the disruptor of those institutions, so it can't be them.
The world isn't necessarily controlled by some illuminati-like organization. That belief comes from a cognitive bias called "Agent-detection bias" https://en.wikipedia.org/wiki/Agent_detection
It may simply be that free information created a bit of a power vacuum (perhaps the same as with the printing press?) so there's a scramble for power by all the people engaged in the culture war, not just existing powerful people but anyone who sees an opportunity.
Those are huge portions of the population, not some powerful individual bosses. Everyone who advocates for free speech is part of that power struggle fighting against everyone who wants censorship of ideas they don't want to become popular.
The internet will kill most every institution that predated it. Citizen journalism killed media. Netflix/YouTube killed Hollywood. Blockchains will replace 51% democracy with 100% democracy.
Dying inherited institutions are grasping to stay relevant with appeals to authoritarianism. Free speech is now a hindrance and a liability to their existence. As their relevance dwindles, tyranny becomes their only tool.
None of this is accurate. Craigslist killed newspapers (it took their ad revenue). Citizen journalism hasn't replaced that; buzzfeed-style sites did. Hollywood is thriving. Netflix is also thriving. If Netflix is killing anything, it's cable providers. Blockchains have produced nothing of worth, especially if you consider anything outside of cryptocurrency (but let's be real, cryptocurrency is also worthless).
> Craigslist killed newspapers (it took their ad revenue). Citizen journalism hasn't replaced that; buzzfeed-style sites did.
Craigslist killed classified ads (in the US). Twitter and Substack replaced investigative pieces, leaving legacy media companies reduced to peddling outrage for clicks.
> Hollywood is thriving.
The Golden Globes are over. Last year's Emmys saw the lowest viewership of any Emmy ceremony in the Television Academy’s history[1]. COVID killed the cinema[2]. AI is diminishing any production advantages that remain.
> Blockchains have produced nothing of worth, especially if you consider anything outside of cryptocurrency (but let's be real, cryptocurrency is also worthless)
That's demonstrably untrue given Bitcoin has a market cap larger than all but 20 countries[3]. Blockchains allow, for the first time in history, humans to coordinate at scale without human intermediaries. Only dead weight bureaucrats, machiavellian statists, and the innumerate who have been sold lies from the former would argue that blockchains are worthless.
Can you expand on that 100% democracy part? It doesn't seem like any system would allow 100% of the people to all be governed by their preferred set of rules when at least some of those rules would always conflict with ones that others want to live with.
1. With your ballot: very small chance of changing policies (1 vote)
2. With your wallet: small chance of changing policies (n votes through political donations)
3. With your feet: guaranteed chance of changing policies
The internet facilitates 100% democracy as it simplifies voting with your feet (remote work, social networks), allowing like-minded individuals to congregate and influence policy. Blockchain furthers this, unlocking the ability to have a referendum on every issue, which maximises consent.
Thanks for a thoughtful response! I agree more referendums would get to a better level of participatory democracy. Though on any specific issue it could still be a 51% win with 49% going away angry.
I'm not sure how blockchains make more frequent referendums easier. Maybe it's harder to game the system? Apart from that, it seems like a platform for regular referendums could be built without blockchains. It's still relatively early days on that tech, though, and amidst "blockchain" being attached to anything anyone wants to hype, I have ended up being a bit skeptical of claims it can fit a given purpose until it goes beyond the conceptual stage.
> Political representation becomes obsolete with blockchains that can facilitate a referendum on every issue without the need for centralised trust
The choice of representative democracy was not because of the difficulty of counting votes. The founding fathers very clearly rejected what you're proposing. It's because most people are grossly ignorant about most things, and their opinion shouldn't be counted. If you can't pass a high school math class, I honestly don't care about your opinion on climate change.
Math is not magical and the blockchain doesn't solve any of these problems. Let's look at the risks and see how the blockchain does nothing:
1. Voting machines change votes - Great, the blockchain would allow people to verify their vote! Well, the software that sums up all blockchain votes could still miscount them, regardless of what the blockchain contains. OK! So we can audit the results! Great, we can already do that with paper ballots.
2. Dead people voting - Again, same as the audit before. We see a John Smith voted on the blockchain. We can look up John Smith based on a paper ballot, just as we can with the blockchain.
3. Illegals voting - The same process used for identifying voters and giving out private keys can be used for verifying voters
But what about risks you have introduced?
1. By making everything electronic, you've shifted the risk from local elections to state actors.
Signing keys are not magic. If a country found a solution to the factorization problem, or to discrete logarithms, the scheme would collapse and it could forge massive numbers of votes.
2. Getting 300m people to manage cryptographic keys is impossible. Keys will be stolen, just as bitcoin wallets are. Now you need to deal with revocation of votes, and other controls that nobody has yet invented.
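To make risk 2 concrete, here's a minimal sketch using Ed25519 signatures via Python's `cryptography` package (the ballot format and key handling are hypothetical, not any real voting system's design). It shows the core limitation: the chain can prove which key signed a ballot, never who actually held the key.

```python
# Minimal signed-ballot sketch. NOT a real voting scheme; it only
# illustrates that possession of the key IS the vote.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

voter_key = Ed25519PrivateKey.generate()   # imagine this issued at registration
public_key = voter_key.public_key()        # published alongside the voter roll

ballot = b"referendum-42:yes"
signature = voter_key.sign(ballot)

try:
    public_key.verify(signature, ballot)   # anyone can check this on-chain
    print("ballot verifies")
except InvalidSignature:
    print("ballot rejected")

# But if voter_key leaks (phishing, malware, a lost phone), a thief's
# ballot verifies just as cleanly -- nothing on-chain can tell them apart:
stolen_vote = voter_key.sign(b"referendum-42:no")
public_key.verify(stolen_vote, b"referendum-42:no")  # no exception raised
```

Revoking or reissuing a compromised key at national scale is exactly the unsolved control the comment above is pointing at.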
Have you personally destroyed the climate because of what one of those people said? Or are you immune to it through some kind of superiority of intellect or luck of living in the one final correct culture of true information in all of history?
Can't respond to the first, but it's pretty clear that those groups that think they are the single source of truth would probably include any self-proclaimed "fact checkers", which seems to now also include Facebook itself -- and would normally also include the CDC (excepting the ironic title of this thread, of course!)
Are you saying that people who check facts believe they are the single source of truth? Why would you need to check facts if you were the source of truth?
Wouldn't it be more accurate to say that the fact checkers believe that there is an external source of truth (i.e. facts) which they are checking?
I'm not sure if you're being sarcastic, but the CDC doesn't see themselves as a single source of truth.
They defer to the scientific consensus of people who specialize in understanding how viruses work, how pandemics work, and who have a process for learning facts by interrogating reality.
The irony of your statement is that you can only make this accusation now because the scientific community did not stop there and diligently continued working to improve the consensus.
Science converged on heliocentrism as quickly as it possibly could once we had instruments precise enough for heliocentrism to be the simplest model. The only reason the Earth-centric system overtook previous heliocentric theories was that it was most consistent with observable facts.
Do you have some other system you'd like to propose that ignores observable facts? Maybe you'd like to consult a Shaman?
The obesity epidemic isn't created by science, it's created by economic and food policy. The root problem is essentially the same one as with Facebook: scientific information gets crowded out because people stand to profit from misinformation.
Seriously, what alternative to science are you suggesting? Political populism? Let a bunch of outrage-addicted conspiracy theory junkies determine the truth with no regard to whether it matches reality? Do you really think there's an alternative system that better arrives at the truth than making and checking your predictions against reality?
The people who check facts, empirically at least, believe there is a single source of truth on any topic and see their role as guiding people towards it and/or whacking people who are talking about a topic but haven't consulted that single source.
Very few fact-checking organizations appear to actually use critical thinking of any kind. They claim they will, but they're normally just journalism interns, so they degrade to assuming that whatever the government or ideologically aligned academics say is the truth, and anything else is wrong.
Yes. 'some group thinks it needs to "protect the others"'.
Virtue signaling and safetyism. Hilarious how the virtue-signaling elite regularly close their conference calls with "everyone stay safe". Oh, I know you (don't) care about me.
If I never hear the term "virtue signaling" again it will be too soon. Adds about as much to the conversation as calling someone a sissy (or other ruder names I'm sure you can think of) for caring about something and the intent is hardly any different.
If asking people to take a specific step that has a good chance of protecting them from disease or even death is “virtue signaling” and therefore evil, the concept has lost any meaning it may or may not have had.
Labeling something “virtue signaling” implies acceptance that the action in question is beneficial. So you agree that getting vaccinated is a good idea, but somehow manage to still take issue with people saying as much in a slightly more direct manner?
Yes, the group of unvaccinated people, making up 99.5% of recent US deaths even though they only represent about 40% of the population, is, almost tautologically, in need of protection. Considering the US has, once again, managed to neatly divide into the same ideological camps as on any other issue, there is no obvious selfish motive for people of the one group to care for the other. This leaves the latter group stumped, because they can’t think of any non-selfish motives, forcing them to come up with ill-defined terms such as “safetyism”.
The term you’re grasping for is “altruism”, by the way. Although I’ll freely admit, speaking only for myself, that its supply is running low and I mostly want people to get vaccinated to finally get this over with.
"If asking people to take a specific step that has a good chance of protecting them from disease or even death"
Most people under the age of about 65 are being asked to take a vaccine for a disease that is extremely unlikely to hurt them, and the vaccine itself can have severe side effects, with deaths and injuries from these vaccines now greatly exceeding the deaths and injuries from every other vaccine programme in the last 10 years combined.
Over-simplifying this situation down to "people need to be protected [from themselves]" is exactly the sort of misinformation that Facebook believes it's fighting.
"This leaves the latter group stumped, because they can’t think of any non-selfish motives"
Non-selfish motives? Their demands are incredibly selfish. They've convinced themselves, against all actual data and biological theory, that the unvaccinated are dangerous to themselves and the vaccinated, and thus that individual choice over people's own bodies must be removed by force. There is no basis for this belief, but they apparently want to reach this conclusion by any means, fair or foul. And yet they are the same people who usually insist that women be given choice over 'their own bodies' in abortion debates.
Yes, that's probably more than "all vaccine programmes in the last 10 years combined" because, contrary to popular belief, vaccines are incredibly safe.
It's also 1/100th of daily COVID deaths, or about 15 minutes' worth.
In the years before, how often were you vaccinated? Exactly: you probably weren't. Millions of vaccinations were given in the last few months, orders of magnitude more than usual. And while most are usually given to toddlers, now the 60- to 95-year-olds got most of the vaccinations.
You know what happens a lot to 80 year olds in the six months after they are vaccinated? They die.
You know what happens a lot to 80 year olds in the six months after eating ice cream? They die.
Also, as has been reported plenty of times, read by you (in, for example, the article above), and ignored because it doesn't fit your worldview: VAERS is, as the URL says, "open". It's a relic from a time when people tended to be sane and wouldn't just submit all sorts of imaginary things to a medical database.
The article cited above summarises what the people collecting that data have concluded from it. How can you make an argument about the data they collect and publish, implying that you trust them, and then, in the very next breath, ignore them and, I assume, accuse them of being either too stupid or too corrupt to say anything about that data?
> If asking people to take a specific step that has a good chance of protecting them from disease or even death is “virtue signaling” and therefore evil, the concept has lost any meaning it may or may not have had.
At this point it's not even so much about protecting them anymore as it is about stopping them from continuing to mutate and spread a virus that kills people. The "anti-vaxxer" folk are choosing to be a danger not only to themselves but to potentially anyone they come in contact with. What's worse is that they're actively trying to spread their ignorance about vaccines to other people, thereby growing the danger and the problem.
Quote: "Infections in vaccinated Americans also may be as transmissible as those in unvaccinated people."
That doesn't mean that vaccinated people are spreading the disease as much as the unvaccinated. Only the infected vaccinated people do.
But, as happens with vaccinations, infections are, while possible, far less common among vaccinated people. Even the worst numbers put vaccine efficacy at 65%+ against infection (85%+ against hospitalisation, 95%+ against death).
By the way, for everyone else who - like me - doesn't have a NYT subscription and thus doesn't know what the above comment is talking about, the Washington Post has what looks like an equivalent version of the same story:
Internal CDC documents have leaked. Here are some quotes from the report:
"It cites a combination of recently obtained, still-unpublished data from outbreak investigations and outside studies showing that vaccinated individuals infected with delta may be able to transmit the virus as easily as those who are unvaccinated. Vaccinated people infected with delta have measurable viral loads similar to those who are unvaccinated and infected with the variant."
"The breakthrough cases are to be expected, the CDC briefing states, and will probably rise as a proportion of all cases because there are so many more people vaccinated now. This echoes data seen from studies in other countries, including highly vaccinated Singapore, where 75 percent of new infections reportedly occur in people who are partially and fully vaccinated."
"The presentation highlights the daunting task the CDC faces. It must continue to emphasize the proven efficacy of the vaccines at preventing severe illness and death while acknowledging milder breakthrough infections may not be so rare after all, and that vaccinated individuals are transmitting the virus. The agency must move the goal posts of success in full public view."
There are some other points worth noting from the presentation:
1. The frequent and very US-specific claim that "99% of hospitalizations are in the unvaccinated" isn't true. CDC stats show 15% of hospitalizations are of vaccinated people, and the number is growing. Why are they claiming otherwise in public?
2. The confidential data is not really surprising, because it brings the USA into line with what every other country is seeing.
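The Singapore figure is roughly what base-rate arithmetic predicts once most of a population is vaccinated. A back-of-the-envelope sketch; the coverage, efficacy, and baseline risk below are assumptions for illustration, not actual CDC or Singapore data:

```python
# Base-rate sketch: in a highly vaccinated population, vaccinated people
# can dominate raw case counts even while the vaccine is working.
coverage = 0.75        # fraction of population vaccinated (assumed)
efficacy = 0.65        # reduction in infection risk (pessimistic figure above)
baseline_risk = 0.01   # unvaccinated infection risk over some window (arbitrary)

vax_cases = coverage * baseline_risk * (1 - efficacy)
unvax_cases = (1 - coverage) * baseline_risk
share_vaccinated = vax_cases / (vax_cases + unvax_cases)

print(f"{share_vaccinated:.0%} of infections occur in the vaccinated")  # ~51%
```

So a rising vaccinated share of raw cases is expected as coverage climbs; the informative numbers are the per-person rates, not the raw proportions.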
There's a good summary of some of the latest non-US data from the UK and Israel here. Efficacy against infection and "mild illness" keeps falling:
That's not correct. Why assume I read the NYT? I'm referring to data from places with very high vaccination rates but summer waves of equal/greater size to prior waves, like Iceland. The only way that's possible is if vaccination doesn't slow down transmission at all.
See for yourself. Google [iceland covid] and look at the graph. Cases go near vertical starting 16th July and are now the biggest wave they've ever had. Then click through to the vaccinations tab. They had vaccinated 70-75% of their population by that date (depending on whether you count first dose or second).
If 75% of the population are vaccinated yet case curves are bigger than before, the vaccination is not stopping transmission.
Edit: Also, recent data from the UK suggests effectiveness against infection has fallen to <20% in the over 50s. The calculations based on PHE data can be found here:
By the way, are you flagging all my posts? Do you realize that's against the rules? Nothing in my last three posts is a flaggable offence yet suddenly they are all flagged. Do you want me to do the same to you if so?
There have always been rules like this. Turn off your lights when the air raid siren sounds, for example. What you're noticing now is a significant portion of the population simply throwing away common sense because obeying common sense somehow infringes on their rights.
>What you're noticing now is a significant portion of the population simply throwing away common sense because obeying common sense somehow infringes on their rights.
It's not common sense, it's mass brainwashing. If you'd asked people two years ago what they thought about the idea of locking the whole country down indefinitely for a virus with a 99% survival rate (and over 99.9% for people under 50), the vast majority would have said it's a crazy idea.
Calling getting vaccines and wearing masks indoors "locking the whole country down indefinitely" is definitely hyperbolic. Is the germ theory of disease really not common sense anymore?
I'll be sure to tell all the thousands of small business owners who were forced to permanently shutter their business that it was just a simple mask mandate after all.
You're attacking a strawman here. Nothing that was marked as misinformation was about the economic effects of lockdowns. If those effects concern you, you should be concerned with vaccine misinformation prolonging lockdowns, whether those lockdowns are government-enforced or due to individuals concerned about risk.
>If you'd asked people two years ago what they thought about the idea of locking the whole country down indefinitely for a virus with a 99% survival rate (and over 99.9% for people under 50), the vast majority would have said it's a crazy idea.
>Calling getting vaccines and wearing masks indoors "locking the whole country down indefinitely" is definitely hyperbolic.
You can say that not everything is still locked down (some stuff still is), but don't pretend like 6 months ago you would have been against the lockdowns that actually happened.
Once again, don't pretend that lockdowns are what the argument is about. That is a straw man. It is about whether companies should be obligated to carry misinformation about vaccines and diseases without getting to mark it as misinformation — they aren't. It is about whether that "infantilization" of the users of the platform is new — it isn't.
No, it isn't. Both you and logicchains keep trying to make the discussion about that, but that has nothing to do with whether posts should be marked as misinformation because people ignore common sense.
Twenty years ago, I was at a national park with "don't feed the wildlife" signs, and saw a parent teaching her 3-year-old to feed the wildlife within 3 feet of such a sign. I said, "Do you see the sign? Why are you doing this?" She scoffed, pouted, and ran with her child to her car and drove away.
We treat her like a child because so many adults act like children. They seek what their parents never gave them, and unfortunately keep barking up the wrong tree, but you can't just refrain from suggesting they not feed wildlife, or it leads to mayhem.
Today we have so many humans defecating in the lakes near Maroon Bells that the park might close. For at least a decade that park has had a "pack it out" policy, including your own poop. There are that many people pooping in the woods, and it's not good enough to dig holes. People are so infantilized that they lack the emotional coping skills to poop in a sack and pack it out, so they swim out in a lake with numerous "do not swim" signs and take a dump.
People are fucking stupid. If you don't tell them this, they think it isn't true, and that the rules are for someone else.
Are you sure those people are stupid (i.e., they care about the environment and want to do the right thing but mistakenly do the wrong thing instead), or are they simply putting their personal convenience above the convenience of others? To be stupid, I think it would have to be an action that goes against their own interests.
Tragedy of the commons isn't caused by stupidity but by selfishness. It's different.
It does go against their own interests when feeding animals is more likely to attract dangerous animals that attack tourists.
People don't always make the decision they know to be in their best interest. If they don't actually know what's in their best interest, then I'd say it's less an issue of stupidity and more about ignorance.
Attack other tourists. If you feed a duck then leave, you're not attracting a bear to attack you personally. You won't step in your own poo either. Somebody else will.
But people can also be very smart. I'm sure the people blatantly ignoring signs are the minority. There's so many people in the world that even a minority can cause issues (e.g. defecating in the woods), but it's still a minority.
It's different people, and moreover, different situations involving the same person. There are so many things considered "common sense" that basically everyone will miss at least one thing considered common sense.
So yes, you have to accommodate and prepare for the stupid people. You have to patrol the national parks to yell at people for doing things the signs right in front of them say not to. But you also have to accommodate and prepare for the smart people, and realize that you'll sometimes be stupid.
In Facebook's case, they think they're smarter than everyone else by choosing what content everyone can see. Flagging "vaccines are 5G microchips" was fine. But flagging official CDC guidelines implies Facebook is smarter than the CDC, which is probably not true.
So "Stop" signs should say "Stop because there might be another car coming from another direction and you'll be able to avoid it if you stop and see it coming".
I'm not sure that type of explanatory thing would work with most signs. Instead, we need:
1) people we trust to determine what the signs should say so that we are willing to listen to the signs.
2) Easy access to more details explanations of why/what the sign means.
3) An easy method of questioning the sign makers and getting signs changed or removed when either the sign maker was wrong or the sign is simply no longer needed.
Information is pretty easily available, especially as it pertains to things like the "don't feed the animals" sign in the GGP post, and it's not unreasonable to expect people to seek out an explanation (e.g. park staff) rather than just ignore the rule. The problem often comes down to people-- at all stations of society-- believing they shouldn't have to follow certain rules all of the time. People know that going 20mph/30kmh over the speed limit is more dangerous for themselves and others, but lots of people still do it.

On more complex topics, explanations are available. But most people want simple answers, and complex issues don't have them. So when they disagree with something but don't have the full explanation to understand and determine things for themselves, they will listen to a person who gives easy answers. And they'll tend to listen to whichever person gives an easy answer they already agree with. 99% of political rhetoric is about giving easy answers to complex problems, most of which would have been solved long ago if there were an obvious easy answer instead of countless details and edge cases and other factors to consider.
I don't know how you solve this problem. Human nature means most people don't spend a ton of time trying to understand all aspects of a problem that may only vaguely impact them personally. I don't expect anyone but a very small minority to study geopolitical history and current circumstances enough to have a well-informed opinion on something like whether the US should stay in, or withdraw from, a missile treaty. There are countless other examples of that sort too.
> involved in the decision-making process
I'm not sure how you get around the lawmakers problem. Everyone can't weigh in individually on decisions made in a massively complex society. Being involved in the decision-making process means voting for people to handle those things. Are there other models of citizen involvement that better limit the impact of corrupt/biased/stupid lawmakers?
The purpose of a nature preserve, unsurprisingly, is to preserve nature as it exists within this space. Any human interference with nature inside the preserve runs the risk of tipping the scales of the natural lifecycle's balance.
If someone were to bring their dog into the preserve and hunt and kill some rabbits, it would probably be quite obvious that they would be disturbing the natural balance that the nature preserve is supposed to protect. Leaving your excrements is prohibited for the same reason, although on a subtler level. Your excrement can contain germs not native to the area which could start an epidemic if stuff goes really bad.
Just today, there was news that the National Forest Service estimates that half of all deer have already been infected with SARS-CoV-2. Now thankfully, Covid-19 does not seem to be a dangerous disease in deer. Just imagine if it weren't that benign and a significant number of deer had been wiped out by it nationwide (or even worldwide). That's the kind of low-probability-but-high-danger scenario that packing it out is supposed to prevent.
Are humans not part of nature? What evidence do we have that nature is ever at any time "in balance?" This is a common trope that is deeply rooted in the western mind. Perhaps it is connected to old ideas like geocentrism.
Here is another way of looking at things: the world is constantly changing. Do I advocate pooping on a crowded beach? No. Is the occasional defecation of a backcountry backpacker a problem? No.
It is not like humans are some toxic alien not native to planet earth.
Humans are already part of 99% of nature. Nature reserves are the control group that helps us understand which effects are caused by us specifically, and which are not.
No. Humans are supranatural when it comes to our influence on the planet. While this power is not absolute, it's significant enough to make humans an edge case in nature.
>Is the occasional defecation of a backcountry backpacker a problem? No.
In the example given, it's not occasional; it occurs often enough that nature can't cope with the volume of human feces.
So plants are supranatural then too? Their influence far exceeds ours.
Humans are part of nature. Nature is never in balance. Change is the only constant. As the pre-Socratic Heraclitus said, you "never step in the same river twice."
If humans exceed the carrying capacity of their ecosystem, the system will reduce the number of humans.
I don't think that problem solves itself if they get rid of algo ranking. Misinformation in WhatsApp chats has literally led to multiple lynchings, and that can't possibly be said to stem from any algorithm promoting the content out of order.
I'm clearly against the HN grain here, but I actually think these companies do have a civic duty to actively dampen the viral spread of information; there are plenty of clear cases where it leads to pizzerias being shot up, and almost none where the viral information rapidly propagating on these networks is valuable.
They used to do that, but the "problem" they ran into is that the people the average person knows in real life don't produce enough Facebook updates to keep users glued to Facebook all day. Time spent on Facebook = money for Facebook (more ads can be shown) so it's more profitable for them to push "recommendations" for more "engagement."
The discussion about the algorithmic feed seems to be out of context for this thread (i.e., Facebook is now claiming official CDC.gov links are “False Information”)
I don't believe the problem is facebook recommending anything -- the problem is facebook trying to ban certain content entirely (at least in the context of this thread -- algorithmic feeds are a separate problem.)
I fully agree with this. It flies in the face of what I’ve seen recommended on HN for TikTok. I’ve seen posts saying the algorithm on TikTok is the best and gives great suggestions. I guess my question to you is: do you think TikTok has a superior algorithm to Facebook/YouTube, or is promoting content on any level going to lead to misinformation?
Just as a foreign newspaper can be the best way to find out the actually important things going on in your country/culture, maybe a foreign algorithm (or rather algorithm-moderation complex - I suspect facebook recommendations have a lot more manual tuning than anyone would admit) will be better at recommending in a neutral way. (TikTok of course has blatant biases on particular issues - but they're issues that are less relevant to most westerners than the things which Facebook/YouTube recommendation is biased about).
> I'm not making the connection between an algorithmic feed and anything I wrote.
> The title of this thread is 'Facebook is now claiming official CDC.gov links are “False Information”'.
> I don't believe the problem is facebook recommending anything -- the problem is facebook trying to ban certain content entirely
You say you don't believe (a), and you state that the problem is (b). I'm seeing "belief" without substantiation in both of those sentences. I'm also actively not taking username "gunapologist" as related to the merits of beliefs or statements in any way.
Finally, it's not clear to me that Facebook is a censor. I hold they have a right to promote or not promote whatever they want. Give someone a megaphone, suddenly anyone not handed a megaphone cries censor. No, just not amplified. Commercially, choosing not to promote a thing is much like a TV channel choosing not to run an ad or sponsor an event, something they choose to not do all the time.
I'd rather FB promote nothing, go back to a timeline of posts from my connections, and seems like "a problem" (one of many) is solved.
> I'm also actively not taking username "gunapologist" as related in any way.
rotfl touché
> I'm seeing "belief" without substantiation in both of those sentences.
that's why I said "believe". my opinions about what facebook should and shouldn't be doing are simply that.. opinions.
if I were to say any of this on facebook, it's possible that I could immediately be banned, and where's the fun in that. (that's a serious question.. isn't facebook supposed to be a fun place to communicate? if we have to worry about the ban hammer, how is it either?)
Moderation should be like HN: light, but still there. If it gets too heavy, civil discourse dies. If it disappears, then the place might turn into lord of the flies. (although, I hope a place like that always exists, too.)
I have a friend who posts once in a blue moon. When he posts, it's usually the first thing on my feed, even many days later. My wife's cousin posts batshit stuff multiple times a day. I fail to see how a strict timeline wouldn't result in my stream being more loony.
Before the algo feed, they’d already implemented filtering so you could stop seeing posts from your social-addict connections. You could also do groups with different notifications and post filtering. If you really wanted to catch up on your wife’s cousin’s train-wreck of thought, you could still click through to her profile.
You claimed the algorithm is responsible for increasing virality of dumb thoughts. Yet my counterexample is that it reduces it. I believe removing the algorithmic feed alone is not going to solve the problem of misinformation.
This idea seems to be a weird new talking point that just... doesn't make any sense?
If you post porn to Facebook they'll delete it from your album, even if you've chosen for it to skip the newsfeed altogether, or if it's only flagged long after it appeared in anyone's newsfeed.
Nobody believes that something showing up in their newsfeed somehow implies approval by Facebook. Nor would some time-ordered newsfeed convince anyone that it's impossible for Facebook to remove individual items and get them off the hook when they don't act on harmful content.
The idea that any form of "curation" or "ordering" somehow creates full liability for content is some entirely arbitrary brainworm that's infecting people. Do you take full responsibility for all incoming e-mails that your spam filter doesn't flag? Is Google responsible for every single website that appears in any SERP? Is the firewall you configured reason enough to sue you if anyone gets past it and into the network?
No, none of these would make any sense. And in those cases it is, I presume, obvious that these are "best efforts" that can be useful without being 100% right, all the time. And that demanding perfection is tantamount to just outlawing the practice altogether.
Honest question: who's fact-checking the fact checkers at FB? No offense, but those fact checkers are not necessarily smart or knowledgeable in every subject. They don't necessarily know the difference between DNA and RNA, allele or gene, the Second International vs the Third International, and the list goes on. The fact checkers may be ideologues themselves. The fact checkers may not even have made it through college courses. So, I really don't get why they think they know the truth of everything and can tell FB users what is misinformation and what is not.
My personal guess is they are people in middle management positions granted full autonomy over their decisions. It just so happens that the cultural beliefs there skew left, since it is in California and in tech.
Why factual information gets censored, though, is a genuine question. It is a legitimate "who watches the watchmen" issue, seeing as they literally get to enforce whatever ideology tickles their fancy in the moment.
My guess is that fact checking falls on low paid workers with high "performance metrics" they're expected to meet to keep their job-- not managers. At least, that seems to be the model used when the curtain has been pulled back on social media QA.
The truth is they have outsourced fact checking to the "International Fact-Checking Network at Poynter", which has to uphold standards, not just freely enforce whatever ideology tickles its fancy.
Knowing one low-level employee of one of the subcontractors involved, it works something like this:
1. An article is posted to Facebook/Twitter/etc and is automatically flagged based on keywords or manually flagged based on user reports of misinformation.
2. Fact checking organizations take up fact-checking of the article for a small fee.
3. In this case for the organization I am personally aware of, the fact checking company has a large number of independent contractors who can research a "batch" of claims/articles.
4. Usually the jobs are broken up into simple questions.
5. You along with many others provide answers and must provide pertinent factual sources supporting the veracity of your assessment that this is either true, partly false, or false along with other categories such as if it is clearly and obviously satire.
6. Your answers are judged against the final determination. Those who consistently skew in any direction are dismissed from eligibility to participate in the future (a rough sketch of how this might work follows the list).
7. The fact checking organization, once it has enough answers for confidence, has higher-level employees double-check the answer and source information.
8. The fact checking organization submits its formal assessment of the article/opinion to the IFCN.
9. Companies like Facebook use multiple IFCN assessments, and perhaps other fact-check assessments, to provide pertinent information that corrects misinformation that has spread rampantly over the last year, like "Joe Biden wants to defund police departments across the US", with corrections like: after being asked whether "we can reduce the responsibilities assigned to the police and redirect some of the funding for police into social services, mental health counselling, and affordable housing", Mr Biden answered "Yes".
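To be concrete about what I think steps 5-7 amount to mechanically, here's a rough sketch; every name, label, and threshold below is my guess, not the subcontractor's actual code:

```python
# Rough, guessed model of the answer aggregation in steps 5-7.
# The labels, cutoff, and scoring are hypothetical.
from collections import Counter

def final_determination(answers: list[str]) -> str:
    """Step 7: once enough answers are in, take the majority label."""
    return Counter(answers).most_common(1)[0][0]

def skew_rate(history: list[tuple[str, str]]) -> float:
    """Step 6: fraction of a contractor's past (given, final) answer
    pairs where they disagreed with the final determination."""
    if not history:
        return 0.0
    misses = sum(1 for given, final in history if given != final)
    return misses / len(history)

batch = ["false", "false", "partly false", "false", "satire"]
print(final_determination(batch))             # -> "false"

history = [("true", "false"), ("true", "true"), ("true", "false")]
if skew_rate(history) > 0.5:                  # hypothetical cutoff
    print("contractor dropped from future batches")
```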
I may be slightly off in how this works but the overall program is the same for most social networks that care about misinformation. There is more information in the sources below:
Sources:
> 6. Your answers are judged against the final determination. Those who consistently skew in any direction are dismissed from eligibility to participate in the future.
And if it later turns out that the final determination was wrong, do they go back and reverse this?
I'm sure that there's any amount of process in place, but ultimately if you rely on comparing people to the consensus of their peers then you're going to get the politically approved answer rather than the truth.
I am fairly certain they aren't naive about setting up their fact checking organization. They want to be taken up by anyone trying to verify facts so it's not in their interest to integrate bias.
Yes, there are revisions and the fact checkers fact check the fact checkers. They aren't colluding to implement left or right bias here. They are just making matter of fact statements like "it's false that the pfizer vaccine modifies your DNA" and providing sources that verify the claim and "no, politician X did not say Y"
There's no explicit collusion, but they operate in a particular social culture and the biases of that culture will permeate everything they do. The news organisations that they're fact-checking have the same reasons to become reputable suppliers of truth; I don't see how a fact checker would end up any less biased than a newspaper, since they're ultimately subject to the same incentives.
> They want to be taken up by anyone trying to verify facts so it's not in their interest to integrate bias.
> Yes, there are revisions and the fact checkers fact check the fact checkers.
This isn’t true. It’s easier to get business if you’re a “fact checking” organization that always fact checks in a favorable direction to a major political view.
The people who want truth are a much smaller group than the people who want their views reinforced.
> FB only works when you and 99% of your social group are on there.
You can choose whatever social group you want on social media. The days of physical presence and limited supply of peers are over.
Your IRL friends don't believe that the government is made up of lizard people? Go find yourself a Facebook, Telegram, Discord, etc. group, you will find plenty of like-minded people and "proof" there. People can and do survive on online contacts alone, so being shunned is not a problem for them. At least not in the medium term.
Unfortunately, I think this is true but not realized.
I found out that my brother was "going to do his own research" on the COVID-19 vaccines, which really is just code for, "I'm going to look online at sources in my echo chamber." (Because how many people really do their own research by talking with virologists, epidemiologists, etc.?)
It wasn't until he had some social pressure and the death of two acquaintances that he decided it was important to go get vaccinated.
Facebook is censoring because politicians who are also murmuring about tech antitrust are demanding that they do. They're just trying to survive the rage cycle.
edit: So since they don't have any overarching philosophy about what to censor, some of the choices they make are overzealous or bizarre.
> But is everyone that easily manipulated? More importantly, does everyone actually believe that they can be easily manipulated, or do they just think that everyone else is so easily manipulated, but somehow they're above the fray?
I mean, having watched family members descend into raving about the Pope and George Soros harvesting children for adrenochrome despite no previous interest in current events, yeah, I think people can be manipulated by obviously bogus information. You can decide for yourself what you think should or shouldn't be done about it but I don't think that point should be in doubt anymore.
Plus it’s not that “people” will believe whatever bits of misinformation are out there.
Take it this way from marketing speak:
—————-
Content is being made and iterated upon rapidly, over social media to find new and innovative ways of winning “mindshare” among consumers.
With new technology we can cluster user groups and tailor-make perfect user experiences after extensively A/B testing content to most closely match user preferences.
All of this can be done cheaply and at scale, for increasingly new types of content and complexity levels.
————
That is the problem. It’s not whether people will believe misinformation; it’s that content creation is faster at changing and sounds plausible enough to slip past your typical bullshit filter.
Most people have their minds made up and just look for articles supporting it. I can't help but feel we are doomed in this respect. Only teaching critical reasoning can help combat this and I fear most countries aren't up for the challenge.
It is the winning move in the long or even medium term, for the same reasons as not paying Dane-geld. Once you start censoring politically unacceptable content there will be no end of demands for you to censor more and more.
Usually if there is no scientific proof of something, there will be a conspiracy theory about it. Like Bigfoot: no scientific data that he exists, just some big footprints that someone made with modified shoes.
> do they just think that everyone else is so easily manipulated, but somehow they're above the fray?
Based on my own anecdotal evidence, it’s exactly this. The people that went down the rabbit hole of QAnon on YouTube are, in their mind, the informed ones. They’re the ones that went out of their way to do the research and the rest of us are just lemmings. Thus, spending hours staring at an algorithmically generated video feed is actually a positive attribute rather than the obviously negative one the rest of us see.
I think there's also a tendency for the targeting of any given scam/psyop/sales pitch/etc. to make targets look foolish to non-targets. That is, the more narrowly targeted it is to bypass your bullshit filter, the more obvious it seems to someone else. So they walk away from hearing about your situation thinking that they're too smart to be fooled... only to fall for something that was designed to bypass their bullshit filter, possibly without even realizing it.
If you believe "the vaccine turned people into the Hulk", that's a mental health problem.
But when inconvenient claims, like "vaccines can cause blood clots in rare cases", are often flagged as misinformation, you don't instil much confidence in the public.
And so they go out and get information from other sources.
> If you believe "the vaccine turned people into the Hulk", that's a mental health problem.
Can you identify anyone who actually believes that? Keep in mind that people don't always have logically consistent beliefs, and they sometimes exaggerate their beliefs so they're not even true at all. If I really believed that claim, I would be making a bet with somebody who doesn't believe it and proving myself right by finding this Hulk and asking them to punch a hole in a brick wall or flip a car over or something like that. Are there people who have put their own skin in the game like that to support that belief? If not, you're probably strawmanning.
> Can you identify anyone who actually believes that?
I'm not sure if you are trolling but even though Hulk is a hypothetical example, people believe equally strange things, such as invisible microchips inside face masks, viruses being spread by 5G cell towers, that the world is flat and Australia does not exist.
But do they stake something of significant value on those beliefs, or do they just say them on the internet, with you incorrectly taking their claimed beliefs at face value? People often willingly enjoy false beliefs for emotional reasons and wouldn't actually act on them if the outcome was going to materially affect them.
I used to have conversations with a crank who kept almost inventing perpetual motion machines. But every time he had nearly finished one, he somehow didn't quite do the last few steps. He talked like he believed they would work, but somebody who actually believed that would be chomping at the bit to finish one and prove it works. They'd be richer than God. It seems like some part of him knew it wouldn't work, and he preferred to enjoy the fantasy of being an unrecognized genius. Does that kind of thinking count as a belief or not? I don't think it's really clear cut. You see a similar inconsistency with religious people who "believe" they'll go to heaven when they die but still try not to die.
Has "vaccines can cause blood clots in rare cases" been flagged as misinformation? I don't think it has. I get the impression that the misinformation filter is 99% accurate.
There are crazy things that people believe (not necessarily vaccines turning people into the Hulk, but still) that, in the past, I would have assumed were mental health issues.
But when massive numbers of people believe these things, blaming it on individual problems is not so productive. About 300 people a day are dying because intense political polarization has caused people to believe whatever best feeds their tribal urges. The same thing caused an attack on the US seat of government, and a very large percentage of the population thinks that's ok.
I don't know the solution, short of declaring that the internet and cable TV were a bad idea and we need to go back to the simpler times of unconnected computers and 3 TV networks. But I'd agree that Facebook, Twitter, etc. have to do something.
Read your comment again, but consider the hypothesis that you are entirely wrong about most things - how would you know? Everyone seems to lack any consideration that there is even a possibility they could be wrong.
If they are exposed to two sides, and you to one, how do you know you are right? You probably are. But… it’s the absolutism that is starting to become obvious to me.
Maybe you will see it, but many people get their news from social media. Seeing how Facebook is now censoring the CDC website (or at least a certain page), many people will not.
While there is disinformation out there, it is rare to see completely fabricated primary sources. For example, when a bunch of news sources are saying things like the election was rigged, you can go to the published court cases and the handwritten affidavits. You can then see that 87 judges of all types totally rejected the ridiculous arguments. You can then read through about 250 pages of affidavits and see that they are nothing but vaguely racist, speculative garbage.
Regarding things like the COVID vaccine, you can look at the 96% vaccination rate of doctors or you can listen to basically every single immunology or virology expert. It is quite telling that pretty much the only doctors making the media rounds fearmongering about the vaccine are gastroenterologists, ophthalmologists, or from other unrelated fields.
So no, nothing is certain, but some things are pretty close to certain.
>You can then see that 87 judges of all types totally rejected the ridiculous arguments.
Not one election fraud case was rejected on merit and claims. None were heard. All were denied via process. This isn't a surprise. All lower courts are going to boot them; they aren't the place to hear this. I watched the hearings where the judges rejected them and immediately facilitated getting the time-sensitive paperwork ready for appeals. The higher courts rejected even the idea of hearing them for various reasons, including that the establishment right wanted Trump out of office as much as the establishment left did.
That includes the Supreme Court, which ruled that potential federal election fraud or process violations in one state wouldn't affect another state. So in this case, despite Texas correctly pointing out that other states unquestionably and admittedly violated Article II by approving election protocols without the consent of their legislatures, the Court held that Texas had no standing to sue.
This doesn't mean there was fraud. Doesn't mean there wasn't. But in terms of the SCOTUS case, that was the bullshit of the century, and if you don't understand what happened (McConnell picked "Trump's" judges, and McConnell wanted Trump gone as fast as possible) then you will never understand what really happened.
But to just wave your hands and say "They didn't win at trial, all fake!" without admitting none were ever heard is a willfully ignorant view of what happened. Even the courts sympathetic to Trump weren't going to rock the boat like that. It was always going to go to SCOTUS, and they signaled bright and early they weren't going to do it.
>Not one election fraud case was rejected on merit and claims
That's not true, and you obviously have not read through the cases if you think that. Quite a few of the cases were rejected because the judge found the merits so lacking that they didn't even warrant a trial. That's a HUGE indicator that your cases have ZERO merit, especially when we are talking about sympathetic judges. They didn't just reject the cases for lack of merit; they excoriated many of the attorneys who were pushing such garbage into the court system.
In addition, many of the lawyers who brought those cases forward are seeing disbarment and serious sanctions for such frivolous cases. You don't get disbarred or sanctioned for filing a meritorious case with the wrong court. You get them for serious malpractice and bad faith.
> Not one election fraud case was rejected on merit and claims. None were heard. All were denied via process.
Do you mind expanding on what you mean by "[not] rejected on merit and claims", "none were heard", and "all were denied via process"? There are multiple arguments that these are incorrect:
- In the US court systems, dismissal due to failure to state a claim is a decision on the merits (Federated Dep't Stores v. Moitie)
- At least one court did hear claims (barring you and me using different definitions of "hear claims"), and specifically addressed them in their decisions (e.g., Law v. Whitmer included depositions and explicitly ruled on the merits)
- Multiple decisions included an analysis of the claims in addition to analysis of the procedural failings (e.g., the decision in King v. Whitmer addresses 11th Amendment immunity, mootness, laches, abstention, and standing, and also goes on to describe why even if the complaint did not suffer from the previous issues the plaintiffs would not be granted their requested relief).
> The higher courts rejected even the idea of hearing them, for various reasons, including that the establishment right wanted Trump to remain in office about as much as the establishment left did.
Can you cite anything that would support the latter half of that sentence being a reason appeals courts tended to rule the way they did?
> That includes the Supreme Court, which ruled that potential federal election fraud or process violations in one state wouldn't affect another state.
This is an incorrect reading of the Supreme Court's decision. They said Texas lacked Article III standing, which is not the same thing as what you said. For example, the Supreme Court could agree that potential election fraud or process violations could affect another state, while still ruling that Texas lacks Article III standing due to failing the injury-in-fact requirement (injury must be (a) concrete and particularized, and (b) actual or imminent).
> So in this case, despite Texas correctly pointing out that other states unquestionably and admittedly violated Article II by approving election protocols without the consent of their legislatures
This is almost certainly not as unquestionable as you make it sound. For example, Texas claimed that Pennsylvania's Secretary of State "unilaterally abrogated several Pennsylvania statutes requiring signature verification for absentee or mail-in ballots" without the approval of the legislature, but the Pennsylvania Supreme Court pointed out that the Pennsylvania Election Code does not permit the rejection of mail-in ballots based on signature, so if anything the SoS's guidance aligned with state law more closely than before.
Facebook cares about their bottom line more than anything else. In a vacuum, they're pro free speech because there's more content to serve ads off of. However, these days people will get companies to boycott ads on a platform if they see something on it they don't like. Also, Biden just blamed Facebook for not censoring enough, so they're likely facing pressure from Washington as well.
It's important to remember the history here. In 2015 social media was basically the wild west. You could get away with saying all sorts of crazy stuff and nobody cared.
After the 2016 election, all of the big social media CEOs were dragged in front of Congress to testify about fake news, Russian propaganda, etc. They were directly threatened by lawmakers: if they didn't shape up by censoring us, lawmakers would bring the hammer down.
What we are experiencing now is five years of development of that threat. The social media CEOs are in a position where, if the media manages to make a big fuss about some sort of "misinformation", Congress will respond by dragging their butts back to testify and/or by regulating them.
This has turned the social media CEOs into slaves of the corporate press. All it takes is a few tweets from the right people asking Zuck why he's platforming "Nazis" or whatever they are calling moderate Republicans and anti-war leftists these days, and these companies will jump.
This is the censorship that the government demanded, and they can't escape. I'm not sure what the answer is, because if we all jump ship from FB/Twitter/etc onto a "new" platform, it won't be long until that new CEO gets dragged before Congress and is given the same threats.
There seems to be this pervasive myth that conservatives are being censored on sites like Facebook. That's at odds with the fact that far right misinformation still dominates those websites. Just what evidence would be sufficient to convince conservatives that they aren't being unduly censored?
> “Republicans, or more broadly conservatives, have been spreading a form of disinformation on how they're treated on social media. They complain they’re censored and suppressed but, not only is there not evidence to support that, what evidence exists actually cuts in the other direction,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, which released the report Monday.
I don’t believe there is anything I or anyone else could say that would change your mind.
Out of curiosity, when I said "anti-war leftists" did you perceive that to mean something negative/radical about them relative to when I said "moderate Republicans"? Also, what gave you the impression that I meant one of those was getting censored more than the other?
You didn't imply one was censored more than the other. You claimed it's moderate Republicans being called Nazis, and threw in the anti-war leftists as a false equivalence in an attempt to sound moderate. There are very real signs of fascism growing within the Republican party: from aligning against an "other" enemy, to telling people they are the only ones who can stop the onslaught, to telling people to be strong in resisting the election results, to passing laws in 18 states to restrict voting. Over 60% of Republicans in the House voted to overturn the results of the 2020 election, and over 70% of Republicans believe the election was stolen. This is tens of millions of people deluded by an obvious con man, with no sign of sanity in sight. I think the time for "both sides" false equivalence is well past over.
Conservatives are vaccinating at a fraction of the rate they should be, and it's pretty much directly attributable to directed misinformation. Yeah, I think there's plenty of evidence to suggest misinformation works, and it's potentially extremely damaging. How many more people have died due to Covid because of people downplaying the virus? We still have Trump supporters claiming it's not real.
If stuff people see on Facebook has absolutely no effect on their beliefs, then, yes, it wouldn't make sense for Facebook to delete disinformation.
It also wouldn't make sense to post any information, dis- or otherwise, on Facebook. And it wouldn't make any difference whether Facebook deleted anything or not, so people could stop complaining about "censorship" for exactly the same reason you want people to stop demanding action from Facebook.
The fact that the latter two aren't happening seems to suggest nobody–you included–believes it's all quite as meaningless.
"Censoring" is a fairly hyperbolic way of describing marking a post as misinformation.
As far as the business analysis goes, I'm sure there are some number-crunchers at Facebook who looked at lockdown usage of the platform, the lockdown's effect on ad revenue, declining use due to deaths, how much marking misinformation changes a poster's beliefs, and how much marking misinformation changes a poster's usage, and figured out whether marking misinformation made financial sense. I don't think they care at all about what you might think is freedom or what someone else might think is moral. The idea some other commenters have put forth, that marking misinformation would affect antitrust litigation, seems just as wacky as some of the Facebook posts I've seen about vaccines and 5G.
Not sure if it's still happening, but you can test it if you make the post visible to only you and you wait ~5 minutes for that misinformation message to show up.
Even if the fact checker had some kind of point, I don't know how labeling an authoritative source as "false information" helps anyone understand anything just because part of the information might be outdated.
Nevertheless, the level of carelessness with which this mechanism seems to have been built is an embarrassment.
I don't get why they can't add a regex to ok *.gov$ websites.
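A naive regex over the full URL is trickier than it looks, though. As a minimal sketch (hypothetical allowlist logic, not anything Facebook actually runs):

    // Why a /\.gov$/ test over the raw URL string misfires both ways:
    const naiveOk = (url: string) => /\.gov$/.test(url);

    naiveOk("https://www.cdc.gov/coronavirus/"); // false: the path breaks the $ anchor, so a real CDC page is rejected
    naiveOk("https://evil.example/?q=.gov");     // true: an attacker-chosen suffix is accepted

    // Testing the parsed hostname instead is more robust:
    const hostOk = (url: string) => /(^|\.)gov$/.test(new URL(url).hostname);

    hostOk("https://www.cdc.gov/coronavirus/");  // true
    hostOk("https://cdc.gov.evil.example/");     // false

Even then, as replies below note, a hostname allowlist says nothing about whether the page's content is accurate or current.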
> Nevertheless, the level of carelessness with which this mechanism seems to have been built is an embarrassment.
Same mechanism would flag and ban folks who argued that the lab leak hypothesis was a possibility and was worth investigating... Interesting how the media changed their narrative on this one.
Such a sad commentary that you so blithely assume that if it's on a government website then it's not misinformation. Across the ages, governments have been the most heinous purveyors of misinformation, and are the biggest misinformation threats to humanity. Ergo the 1st Amendment.
Are we sure government websites won’t contain misinformation? For example, senior government leadership was advising the consumption of bleach as a COVID-19 cure.
IIRC, he was asking a CDC official whether the application of a disinfectant to infected lungs had been investigated. A stupid idea on its face, but at no point did he recommend the consumption of bleach as a cure or a treatment for covid.
He didn't recommend hydroxychloroquine as a cure. He was advocating studies at the time that were supportive of its preventative effectiveness. As those weren't enough to justify using it, the left media jumped on it with extreme hyperbole, and you bought it with your lack of skepticism, I assume. States that ended lockdowns early, like Florida, haven't done any worse than locked-down states and saved their economies. Lockdowns are unproven as effective. Trump did not "actively" reject the use of masks; he simply didn't fervently promote them as the left wished. And for that matter, Texas ended masks very early to the seething disdain of the left, and then nothing happened. So...
Best possible health advice... except when it wasn't initially [1] (allegedly to save stock for frontline workers), then it was, then it wasn't [2] (vaccinated no longer need them), and now it is again.
I agree that the use of face masks is common sense and should become a part of life, like it has been in Asian countries since at least SARS. But to imply that the directives from health officials have remained the same is either ignorant or deceitful.
I'm not the same person you were talking to, but nowhere does it say he recommends others take it. He is just saying he takes it. You're completely changing his words to suit your narrative that he said something he never did. This is exactly why people are up in arms over Facebook's censorship. They let people skew other people's words and present them as fact, which is libel at best.
Yes. Perhaps he shouldn't have been taking it. But he certainly didn't say, as the person I was responding to clearly asserted, that it is a cure for Corona. And here we are, you and I, getting 0'd out by people who apparently can't read. And people apparently who go along with whatever is printed in left-wing media without fact checking.
He probably should. Studies show protease inhibitors are an effective treatment: https://onlinelibrary.wiley.com/doi/10.1111/jcpt.13356 But don't say this here, because HN downvotes science. Their religion is against it.
Rather than trying to tease out the explicit meaning or jokes or subtext, we could just consider the rise of bleach ingestion at the time. Responsible people with a platform speak responsibly.
Given he never mentions bleach, or even suggested disinfectant as anything other than a quizzical aside to an aide - if the rise in bleach poisonings were to be attributed to anyone, it would be media outlets propagating the claim that he wanted people to drink bleach.
However, the rise in bleach poisonings seems to have predated his statement and pundits' subsequent interpretations.
There are plenty of exceptional reasons to have a low opinion of the man, and call him irresponsible with the platform he had. No reason to make up more.
He literally never talked about injecting bleach. Please stop spreading misinformation. Instead of criticizing him on important issues like not pardoning Assange/Snowden, ideologues keep making stuff up and destroying their own credibility. I provided my sources in this comment:
Your statement is very similar to saying "maybe she shouldn't have worn a short skirt if she didn't want to get raped".
Top health official is talking about UV light killing the virus; person replies asking questions about the possibility of UV light being used to disinfect the lungs, as the technology was patented that week; media turns this into "person tells people to inject bleach"; ideologues believe whatever the media sells them without any questioning; ideologues blame the person instead of the media for spreading disinformation; ideologues censor the person.
Partisan hackery isn't helping when the media is lying and ideologues choose to go along with it. Maybe it's time for ideologues to re-consider their own beliefs.
How exactly do you draw that insulting "similarity"? I'm not blaming a single one of those poisoning cases on the victims; I'm saying the message was obviously not clearly received by a lot of people. When I watched it live I couldn't even figure out if he was talking to his staff or the press pool, so I'm not surprised other people misinterpreted his words. I'm not even getting into whatever your crusade is against "the media."
> For example, senior government leadership was advising the consumption of bleach as a COVID-19 cure.
Right!
Except that never happened at all and you are spreading BS / fakenews / “misinformation”.
At no point did anyone suggest drinking bleach. During a press conference on possible treatments, he asked about developing an ingestible solution along with experimental internal UV lung treatments. The media quickly turned that into “OMG Trump said drink bleach!”, the nonsense you are happy to repeat.
EDIT: I’m anticipating a link I’ve never seen before. Can’t wait.
> was advising the consumption of bleach as a COVID-19 cure
I researched this extensively. That never happened and people still spreading this shows the damage media and politicians have done.
He said "disinfectant", not bleach.
If you watch the press briefing in context, Bill Bryan, Under Secretary for Science and Technology at DHS was saying right prior to Trump: “Our most striking observation to date is the powerful effect that solar light appears to have on killing the virus, both surfaces and in the air. We’ve seen a similar effect with both temperature and humidity as well, where increasing the temperature and humidity or both is generally less favorable to the virus. The virus dies the quickest in the presence of direct sunlight under these conditions. We’ve tested bleach, we’ve tested isopropyl alcohol on the virus, specifically in saliva or in respiratory fluids, and I can tell you that bleach will kill the virus in five minutes,” Bryan said. “Isopropyl alcohol will kill the virus in 30 seconds, and that’s with no manipulation, no rubbing. Just bring it on and leaving it go. You rub it and it goes away even faster.” Bryan talked about the half-life of the coronavirus on surfaces like door handles and stainless steel surfaces, saying that when they “inject” UV rays into the mix along with high temperatures and increased humidity that the virus dies quickly.
After this, Trump said: "So, I’m going to ask Bill a question that probably some of you are thinking of if you’re totally into that world, which I find to be very interesting. So, supposing when we hit the body with a tremendous, whether it’s ultraviolet or just very powerful light, and I think you said that hasn’t been checked, but you’re going to test it. And then I said supposing you brought the light inside the body, which you can do either through the skin or in some other way. And I think you said you’re going to test that too. Sounds interesting. And then I see the disinfectant, where it knocks it out in a minute, one minute. And is there a way we can do something like that by injection inside or almost a cleaning? Because you see it gets in the lungs and it does a tremendous number on the lungs, so it’d be interesting to check that, so that you’re going to have to use medical doctors with, but it sounds interesting to me. So, we’ll see, but the whole concept of the light, the way it kills it in one minute. That’s pretty powerful." Trump then clarified his remarks: “It wouldn’t be through injections, you’re talking about almost a cleaning and sterilization of an area. Maybe it works, maybe it doesn’t work, but it certainly has a big effect if it’s on a stationary object.” “If they’re outside, right, and their hands are exposed to the sun, will that kill it as though it were a piece of metal or something else?” Trump asked.
In that context, Trump was talking about this technology of using UV light inside the body, which was patented the same week he mentioned cleaning inside:
Disinfectant to clean lungs: “Injecting light” could be referring to Ultraviolet Blood Irradiation.
Methylene blue photochemical therapy and other light therapy studies came out that week:
> Led by Mark Pimentel, MD, the research team of the Medically Associated Science and Technology (MAST) Program at Cedars-Sinai has been developing the patent-pending Healight platform since 2016 and has produced a growing body of scientific evidence demonstrating pre-clinical safety and effectiveness of the technology as an antiviral and antibacterial treatment.
Possibility of Disinfection of SARS-CoV-2 (COVID-19) in Human Respiratory Tract by Controlled Ethanol Vapor Inhalation by Tsumoru Shintake:
A disinfectant does not need to be a harsh chemical or bleach. A disinfectant can be heat, radiation, UV light, etc. Many people consider a lung lavage a method of disinfecting the lung by “washing” it out.
They’ve released technology that is using UV light to sterilize N95 masks due to the shortage. It’s now officially FDA approved. They have been putting babies under ultraviolet light for jaundice, for as long as I can remember. It is not new. They’re also using UV light to sterilize ambulances between runs because of the virus.
No reasonable person would take this to mean Trump was telling people to inject bleach. Media and politician ideologues destroyed their credibility when they spread misinformation while claiming to be against it.
The example regex would fail on that URL. More generally, I was trying to suggest with an example that filtering credible information with a regex is error-prone.
Yeah, but that doesn't work either. There are gov sites that have incorrect information, out of date information, partisan ideas and rhetoric, or there may exist, now or in the future, a .gov website where people could upload and share content.
If you were going to check for this and didn’t immediately reach for location.host rather than location.href then it might be time to read some API docs.
And this is why Facebook has no business being part of fact checking.
While it may look different, this is no different than two folks on the phone getting interrupted by AT&T and being told they are incorrect in their assumptions about topic X.
Social media platforms like Facebook and Twitter are the carrier, and should be treated as such.
We need to differentiate the recommendation algorithm parts of these products from the communication parts. If Facebook is recommending something, that's on them, and if they want to not recommend my content to anyone, so be it; but I should be able to post something seen only by the people who opted in to following me without them getting in the way.
I would take this a step further and say that all social media recommendation algorithms should be publicly reviewable.
The claim is that these algorithms are neutral. But we know many contain ways for the owner to artificially boost preferred content. And algorithms tend to embed the biases of their creators.
Controlling what people see is an important power and one that should be regulated.
I don't see how this will help. FB et al. work by shoehorning complex nuances into intentionally crude metrics like "engagement". If they show the engagement-focused algorithm, that shows they're "just trying to engage users."
What would the algorithm show that isn't already apparent?
Most people that want to see the algorithm want that knowledge to use it for their own ends. They are forgetting about the second-order effects where literally everyone will also be doing that, and then they are gaming each other instead of "The Algorithm".
Yes. The entire point of government is to delegate important functions that cannot be handled correctly by a "free market" to elected officials, which would include appointing regulators. So that would be the system working as intended.
I don't really think there's enough reason for recommendation algorithms to be publicly reviewable. In many cases the algorithm itself is the most important asset of the product, and in a competitive business environment you can't really force them to disclose it.
What would be ideal is to think of an incentive for these companies to give users an option to disable their algorithm, aka the natural flow. That's unfortunately not the direction things are moving in, and even services that still have something like that employ dark patterns to throw the user back into their "controlled" timeline (Twitter, for example, reverts you back to "Home" after a few days, with a tiny message that is barely noticeable).
I'm even willing to pay for a feature that "turns off" the algorithm. Unfortunately, that would never happen, because it'd entail these companies admitting that they design their algos not for the benefit of the user but for stickiness and the dreaded "engagement". That shouldn't be a problem in itself, but it'd quickly become obvious when they're acting sanctimonious.
I agree that the algorithms should be publicly reviewable. But I'm not sure that solves the central issue which is that there's a tension between what's financially good for social networks (algorithms that increase engagement - which disproportionately favors echo chambers, controversial and shallow content, etc.) and what's good for the general public.
Social networks aren't in the business of showing you what's "good for you." They're in the business of showing you what's good for them (e.g. things that will increase your engagement).
There are two (or more) feeds of information on facebook:
- The main one, which only shows posts sent to the "recommend this to people" queue based on Facebook's recommendation algorithms.
- Messenger, which directly sends messages to people in FIFO fashion, and which I assume Facebook doesn't censor as heavily (though I can't say I've tested).
The fact that the recommendation-based stream is more popular is a reflection that Facebook's curation is actually useful, but it doesn't mean Messenger doesn't exist.
Nor do I see that Messenger must exist; Facebook isn't morally obligated to provide a "non-recommended feed" to you if that isn't the product they want to sell you. Perfectly good alternatives even exist, in the form of "blogs" (say, Substack).
I assert that content I post to my timeline and which you see on your feed as my friend shouldn't be subject to Facebook's commentary or opinions. I take strong issue with the claim that to obtain that I have to use Messenger--which is an entirely different content paradigm--and I think it is outright dishonest to try to claim that people posting to timelines means that "the recommendation based stream is more popular".
Facebook wants to recommend content to me from random people: that's their issue, and they should take responsibility for it--whether legally or simply morally--and thereby should have to be careful with what they show people (maybe to the point where they simply can't run it automatically at the scale they are at! I will not cry over their loss). Facebook also shows me content from my "friends"... I opted in to the content from my friends: if I don't like my friends, I can and should unfriend them, as they were my responsibility, not Facebook's (in the same sense that if I call someone on the phone I don't whine to AT&T about how much they suck). I should not get labels from Facebook telling me my friends' content--content I explicitly wanted to see--is scary or wrong, and they certainly should not be banned from showing it to me.
I think the most egregious category error you are making here is to equate messaging with communication... the entire mechanism of posting to followers on social media is primarily direct communication! We can see how ridiculous this has become by looking at how, even if you have a private account--one random people aren't even allowed to see!--Facebook, Twitter, and Google currently consider it their business what we post there... an account you post things to and which other people follow is identical to a chat channel that happens to maintain a buffer.
(FWIW, to steel man your argument as best I can, "yes": Facebook also applies recommendation algorithms to friend feeds. Twitter I believe still doesn't. Instagram didn't, but then started... AFAIK people don't like it that much, but it might be more profitable for Facebook? Either way: it is helping me reorder content I already curated, and so I don't feel a need to claim that this is something they published. There is simply still a fundamental difference from platforms surfacing "trending" content or suggesting new people or channels I might want to follow / posts I might want to see, and acting as an intermediary for content from people I was directly linked to from people I am communicating with. The former might as well be illegal as far as I am concerned, while the latter needs to be sacrosanct.)
Point of fact first: Twitter does filter tweets via a recommendation algorithm [1]. If you trawl through HN you should be able to find threads with people complaining about that too... I assume Instagram does too but I don't use it and haven't verified [edit: it does https://later.com/blog/how-instagram-algorithm-works/].
> Facebook also shows me content from my "friends"... I opted in to the content from my friends: if I don't like my friends, I can and should unfriend them
This is not the service Facebook offers, nor has it ever been. Facebook has forever been about friending everyone you are actually friends with (and then some), and you (or at least the vast, vast majority of people) do not want to swallow the entire firehose of content that your Facebook friends post. Facebook doesn't want people unfriending people to avoid seeing their content; that's not how the model works. Doing so would break all of Facebook's other services, like acting as a birthday reminder, event planner, and so on and so forth. Even the language is indicative of this: "unfriend" (I don't like you anymore), not "stop showing me these posts" (I don't like what you post).
> the entire mechanism of posting to followers on social media is primarily direct communication!
On the contrary, the entire mechanism of the majority of successful social media companies is curated communication. As it turns out 99% of everything (e.g. posts) is crap, and people go to the sites where less than 99% of what they see is crap.
Types of curation vary, but the existence of it is very consistent. To create some categories: algorithms (Facebook, TikTok, YouTube, Twitter), community upvotes + community moderation (Reddit, HN, Slashdot), and straight-up heavy moderation (many forums; forum.nasaspaceflight.com comes to mind as one that is still going strong). Also worth pointing out that most of these platforms actually mix it up, e.g. YouTube, Twitter, and Facebook all have forms of upvotes, and Facebook has forms of community moderation.
But if you want direct communication, that exists: the previously mentioned Facebook Messenger (which admittedly doesn't do followers) and Substack (a competitor of sorts that does).
Direct communication is not what Facebook offers as their main product (it is what they offer with Messenger), it never has been, and I don't see that you have any right to demand that it become what their main product does.
That said, SMS carriers filter and block TONS of messages and have been doing it for years. Our startup works in that space so we have to deal with this problem all the time and I can tell you last November if you tried to send a text message with the word "election" in it in an automated fashion, it was most likely getting blocked.
Yes, and now the U.S. mobile telcos are applying a completely subjective "reputation" "brand trust score", which sounds a lot like the Chinese social credit score. Oh, and they're charging a ton more for the privilege of sending a small number of texts, too.
I guess when you engage in price-fixing in the open and call it "setting a standard", then anti-trust rules don't apply.
I promise you there is tons of actual keyword-based content filtering going on. We had to build an AI just to detect which keywords different carriers block. Anything socially contentious, like "BLM", "election", or "covid", increases the chances your message will get a 30008 error, aka carrier-level content blocking. More disturbing is that plenty of carriers will also block messages but report them back to Twilio as delivered.
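For the curious, probing that kind of carrier filtering is conceptually simple. A hedged sketch using Twilio's Node library (the keyword list, numbers, and 30-second delay are hypothetical placeholders; real detection needs many samples per carrier):

    import twilio from "twilio";

    const client = twilio(process.env.TWILIO_SID!, process.env.TWILIO_TOKEN!);

    // "weather" acts as a control word carriers presumably don't block.
    const keywords = ["election", "covid", "BLM", "weather"];

    async function probeCarrier(to: string, from: string) {
      for (const word of keywords) {
        const sent = await client.messages.create({
          to,
          from,
          body: `Reminder about the ${word} meeting tomorrow.`,
        });
        // Delivery status settles asynchronously, so poll after a delay.
        await new Promise((resolve) => setTimeout(resolve, 30_000));
        const result = await client.messages(sent.sid).fetch();
        // errorCode 30008 is the carrier-level block mentioned above. Note that
        // a "delivered" status is not conclusive, since some carriers report
        // blocked messages as delivered.
        console.log(word, result.status, result.errorCode);
      }
    }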
This starts to get to the issue here. FB is not a carrier (like a telephone company). Their business model isn't built around carrying information or broadcasting it.
FB is a content platform that is about making ad revenue. They are crafted around that, and around how to make it as profitable as possible.
Another nuance is that due to Facebook post privacy settings, an individual post can be restricted to a limited friend list, and not viewable by the general public. However the "fact checking" appears to apply to both private and public posts.
The fact-checking aspect of it is a new twist, but Facebook's servers already check the contents of messages and will disappear messages if the content is deemed bad. It started off as protection against spreading viruses/malware, which are unequivocally bad, and has evolved from there to include URLs to sites with extreme political views and other controversial content.
If everyone involved in the conversation wants to be involved, and no laws are broken, and the platform interferes, then it's censorship.
If there's some kind of algorithm putting the content in front of people who didn't ask for it (e.g. not followers/friends/subscribers/whatever), then you have a point.
Technology changes - magnifies, accelerates, projects, distorts - many social effects. One of those is removing the previously invisible, small cost of spreading an idea.
If two people want to have a conversation or exchange a letter, or one person wants to speak to everyone within earshot or post a sign for people walking by to read, that's one thing. Maybe you say something dumb, and it gets shot down, or maybe it gets propagated, but it needs to have an R0 greater than 1 to endure. If you need some level of capital and agreement to have a publisher run a thousand pamphlets with your idea, that's another, you can leverage previous efforts to amplify weak ideas, but more people are involved and able to provide some sort of sanity filter. On the Internet, the cost to promote an idea is near zero, and algorithms can amplify something dumb but catchy to millions of people in a heartbeat.
It used to require a few seconds of talking per person you wanted to reach, now it costs a few seconds to post a message that might be seen by millions. The difference is minuscule in absolute value (perhaps a monetary value of a few cents) but huge in relative terms (how many percent less than a few cents is zero cents?).
Maintaining policy on censorship while ignoring this massive change in the landscape is shortsighted. Yes, a physical public venue ought not censor someone who wants to talk to other people there, but a digital platform with an audience of millions should think carefully about the effects of messages that their technology amplifies.
You only have two choices: let all information be available regardless of whatever downside there may be, given that at least you still have some control over how to deal with it, or live in a truly Orwellian society in which you have no control over what you know.
The world isn't a black and white place and there are a variety of more nuanced stances between those two extremes. Any amount of censorship doesn't necessarily and immediately devolve into 1984.
Free speech issues are never simple, but their solutions are never found in the extremes. We've already, as a society, compromised free speech to make exceptions for discriminatory speech and violent destabilizing speech. We're already compromising how available all information is, and it's necessary to have a functioning society.
Right now the US, specifically you guys - most of the rest of the world is handling this better - has a huge issue with false information around vaccine efficacy and dangers. This issue must be resolved if you want to be included in an open world once again - some level of censorship is going to be required.
> has a huge issue with false information around vaccine efficacy and dangers
Your viewpoint here is fascinating, as the false information identified in the US is true information elsewhere. The reason for this is precisely the censorship and information control.
This is a case of an algorithm doing stupid stuff and misclassifying things - it's also perfectly fair to criticize whether FB should actually control the classification algorithm for this particular type of statement. A misbehaving algorithm doesn't prove the lack of a system though - it just proves that the current one is broken.
Can you substantiate this? It seems pretty loaded, and I'm reminded of that fallacy where you state there are only two extreme outcomes possible. Are you doing that on purpose, or what?
How do you propose there is a middle ground? What everyone imagines is that if we only censor what is reasonable to censor, it will be fine.
The fallacy is that we always view this from our personal perspective, yet we will not be the ones who make those choices. We give up that role to someone else.
Whoever has this role over society has immense power; some would argue greater than that of governments themselves. It is only a matter of time before that vector is exploited. It is in principle the same idea as regulatory capture, yet the incentives for capturing speech are far greater than for a typical regulatory body.
Unless I put her account on snooze, I regularly see posts that my (literally) abolish-money/anarcho-communist ex shares, even those from groups which I have repeatedly marked as “block all content from this group”.
So just unfollow her. It sounds like you have some major issues with her personally anyway so how is this a failure by the platform and not just you subjecting yourself to a negative situation?
The stuff she writes directly (rather than liking memes from groups she’s in) is as interesting as any other friend’s posts. Reason I mention her politics is because they’re about as extreme as you can get.
The problem is that Facebook is convinced that I, a British citizen living in Berlin, want to see “Bernie Sanders Dank Memes” (or whatever it was, there are many) even though I’ve clicked on the button labeled “don’t show me ‘Bernie Sanders Dank Memes’“.
The “even though I’ve clicked on the button” is especially egregious, in this context.
Yes, I hate how Facebook (and especially Twitter!) have now chosen to not just show me my friends' posts, but posts they like and respond to. A like has basically become a retweet.
Facebook is a private company that allows you to sign up to use its wholly-owned platform, and it's allowed to censor for whatever clever/asinine reasons it comes up with. Your only recourse is to disengage.
The reason the private-company argument is really tired is that it's simply not something that we take to its logical end in society; the returns on company freedoms diminish and even become counter-productive when a company reaches ultimate freedom to do whatever it wants. A diverse society isn't sustainable if people from different backgrounds don't have the same opportunities to participate in society. This is exactly why, no, you can't only allow whites into your business and you can't ignore the needs of the disabled, among any number of things. It would work excellently under a feudalist system, however.
Likewise, if companies like Facebook get so large and influential that they (and a small number of other NGOs) provide the only meaningful channels of communication between groups, we are dooming freedom of expression if Facebook, Google, or whoever are free to silence you in order to pander to politics and advertisers. Especially when Facebook works for the federal government and gets tax breaks and subsidies. Your individual rights mean more than the right of a giant corporation to make lots of money and have undue amounts of power.
> In the United States the statement is often heard, that “corporations are private businesses, so they can do whatever they want.” This assertion is particularly false when referring to entities like Facebook, Amazon, Exxon, and Pfizer, because…
> - the phrase goes directly against a basic knowledge of the history of incorporation — corporations were originally designed and granted special legal privileges by government, only because they were expected to serve a public good.
> - the phrase goes directly against the dictionary definition, and investment industry terminology — a public company is defined as a company whose shares are traded freely on a stock exchange, hence the term IPO (initial public offering).
> - the phrase ignores real-world government involvement—many large corporations are state and federally sponsored (e.g. subsidized, bailed out, and given perks), by money which ultimately comes from the tax-paying public.
> In reality, all big corporations are some combination of state-chartered, publicly-traded, and government-sponsored. By definition many are public companies, while others have complicated hybrid characteristics.
Not being allowed to do things is called being regulated. So Facebook shouldn't be regulated in the same way that AT&T is because they currently aren't being regulated in the same way that AT&T is?
> this is no different than two folks on the phone getting interrupted by AT&T
Those entities aren't comparable. Internet and telecom operators, much like the postal service, are carriers in a legal sense. Not sure about your jurisdiction, but in many countries this is actual law: secrecy of correspondence.
Facebook however, is operating a publishing service on the web, much like Nature or NY Times. Nature isn't obliged to publish your article just because you upload it, and they certainly do a fair amount of fact checking.
To continue the analogy you started: while AT&T isn't allowed to snoop on the article you upload to the NY Times, the latter is allowed to refrain from publishing it on their site. Stretching the definition of carrier to encompass Facebook and Twitter would have wide-reaching consequences and risk making it much more difficult to operate a web page.
Carriers do not have any staff monitoring the content being posted, never mind the largest such staff in the world.
Carriers do not have algorithms seeking out which content on which phone call creates the most 'engagement' (which is probably inversely correlated with truth value) and then actively interrupting your other calls to push that content into your stream. Again, never mind that FB has the largest such feed-selecting algorithm on the planet.
Carriers do not select and push one news source over another based on the number of times it gets mentioned on calls.
These are all editorial functions, far more selective and influential than any newsroom editor.
The idea that they should be treated as carriers may have been originally true when the feeds were absolute literal timelines of items posted by 'friends' you selected, in strict chronological order.
But once they (FB, Twitter, etc.) started tracking "likes" and activity, and favoring one bit of content over another to surface and emphasize/de-emphasize in your feed, they became editors.
That point was over a decade ago, and it is time to stop treating them with that old trope. The fact that they fail at their fact checking is no reason to say that they shouldn't do it (and yes, if they went back to strict chronological feeds fully selected by us, with no algorithmic prioritizing, I'd agree that we should again treat them as carriers).
They should not be able to have it both ways -- being the largest editors in the world = all the power, but treated as innocent carriers = none of the responsibility.
I don't see how (although I might not object to some values of your quote).
It seems a straightforward choice:
1) Go forward as a Carrier and delete ALL features with a hint of editing, promotion, or recommendation, i.e., a simple straight chronological feed of other members' posts/feed specifically selected by each user, and maybe a search function.
2) Go forward as a Publisher and select, edit, recommend, promote, annotate etc. as much as they want.
With Option-1 they can avoid all responsibility for content, merely doing takedowns on items for which they get notices. With Option-2, they have the same responsibility as any publisher for their content (e.g., newspapers are still responsible for the 'Letters to the Editor' they choose to publish, and I'm sure they edit out profanities, etc.)
I expect that users would strongly prefer Option-1; although "engagement" numbers could decline, it might actually be a better ad platform.
It also seems that viral content would still exist, but it would be more 'natural', i.e., not enhanced by algorithms: Alan could see something from Bob and repost it, which would be seen by Chris, who is subscribed to Alan but not Bob, and Chris could repost it to be seen by Debbie, who subscribes to neither Alan nor Bob...
So, I'm curious: how do you see that this would kill/outlaw social media?
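For what it's worth, the Option-1 feed logic is almost trivially small. A minimal sketch (types and names illustrative, not any platform's real API):

    interface Post {
      author: string;
      body: string;
      postedAt: number; // Unix epoch millis
    }

    // The entire "algorithm": flatten posts from the accounts the user
    // explicitly follows and sort by timestamp. No engagement scores,
    // no boosts, no surfacing of strangers' content.
    function chronologicalFeed(followedPosts: Map<string, Post[]>): Post[] {
      return [...followedPosts.values()]
        .flat()
        .sort((a, b) => b.postedAt - a.postedAt);
    }

The editorial surface disappears entirely; the only ranking input left is the user's own list of follows.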
I think option 2 is a non-starter. If you're going have newspaper-like liability, I don't think anyone can afford to do that.
That leaves option 1. I suppose such a thing could exist, but it would be very different from social media as it exists. It sounds like a mash-up of twitter (accounts, follows, retweets) and 4chan (minimal moderation). Which would be interesting.
But you're still talking about basically making anything resembling current social media sites illegal.
You're also probably making any niche forum illegal too, unless it's niche enough that the operator can reasonably subject every post to prepublication review to try to avoid liability.
For sure, the same rapid-promotion algorithms would not be workable at scale if they were to run at current speeds.
But would it be so bad to have a system that is primarily unmolested by algos (i.e., mostly Option-1)? A slower algo that promotes more judiciously might make for a much less toxic SocMed environment.
By adding content to a user's newsfeed, they (Facebook's algorithm) have made an explicit decision to share that piece of content and an implicit decision that they deem the post safe for sharing. In that case, shouldn't a loudspeaker operator be responsible for the voices they amplify?
You only see things in your newsfeed from people you've explicitly added to your network (i.e. explicitly opted in to receiving content from). Is facebook wrong for showing you the things you said you want it to show you?
> You only see things in your newsfeed from people you've explicitly added to your network (i.e. explicitly opted in to receiving content from).
Eh, I'm pretty sure I see random posts from political Facebook groups that were commented on by a Facebook friend I added in like 2006, when that sort of "content proliferation" didn't exist yet (or at least wasn't anywhere near as prominent). I explicitly added that friend, yes, but I definitely didn't explicitly consent to receiving every single piece of Internet content that that friend ever interacts with.
I think this is an unhelpful framing of the situation, because as you imply Facebook is of course not wrong.
If you have one friend and your feed is entirely full of their posts, Facebook is off the hook. But, for most users, the situation is far more complex. People have many friends, and they may also be in groups. They probably will not be on Facebook long enough to see all of the posts and, if they are, their attention levels will differ over the entire corpus.
In this situation facebook begins to have agency - which is different from being wrong. They didn't make the content, they didn't create the link that brought the content to you, but of all the content they could show they did show you this and not that. It's a relatively new kind of agency, one that we didn't have a lot of practical experience with before very recently, so they can be forgiven somewhat for their difficulty grappling with it. However, they're an important part of the chain of information organization and it would be foolish to pretend they have no responsibility.
I don't follow enough users to have a flourishing newsfeed, so Facebook gives me suggestions to follow pages and shows posts from those pages. These suggestions are based purely on what Facebook thinks I may like.
For people with enough sources to feed their newsfeed, Facebook decides the order and cherry-picks which posts are shown.
> Is facebook wrong for showing you the things you said you want it to show you?
I do agree with this. Coming to a solution would be a hard problem. Users would have to give finer-grained information about which types of posts they like from each creator. I doubt there are any real-world incentives for a big company to build a system like this.
It’s moderation through omission. No different than traditional TV news choosing to only produce stories painting their political allies positively or their foes negatively. Any form of selection can be a means for bias.
The problem is that one of the main differentiating factors of these platforms is selecting which post to present to the user at a particular moment. It's a valuable service - there are lots of posts - but it means the platforms are always somewhat going to be in the business of choosing one post over another.
> While it may look different, this is no different than two folks on the phone getting interrupted by ATT and being told they are incorrect on their assumptions about topic X.
There are many ways in which this is different, most of which relate to the differences between audio and visual communication mediums.
This is not completely correct, because while a carrier transports anything indiscriminately, social networks manipulate and control information exchange, for instance by giving one piece of content more priority than another.
Except as soon as they started curating their content (algorithmic feeds; like/comment sharing) they started becoming publishers. If they solely made connections and forwarded messages, then they'd be carriers.
They're not just a carrier; they've created their own version of the Internet, which they now have to police.
The Internet was fine without Facebook; it was largely self-governing, and people could choose to browse others' self-hosted content. Their decision to create the platform and use algorithms to direct people's attention was totally self-made and clearly problematic, and now they have to work out how to cope with the fallout.
Same with Google. I searched a covid-related question, and Google's preferred answer at the top was "There is no evidence of this", while the next two links were medical journals that did show evidence that directly contradicted Google's statement.
If all my phone calls were on party lines, and the carrier made me listen to the most outrageous trolls, carefully selected to instigate fights, then ya your analogy holds.
Sure. But what would be your position if, as a neutral carrier and a profit-making business, FB set up a system of taking money and promoting content (whatever the content)?
Would you be ok with FB promoting glorification of white supremacy? Because, as a neutral carrier, it is not supposed to look at the content.
Would you support it promoting a content piece that is a fluff of falsehoods resting on a thin layer of truth ("They are making all the frogs gay!")?
FB could always argue that it is not responsible for content on its servers, and can always argue that the counterpoint is also allowed.
What if corporations, with their infinite marketing budgets start using FB to start promoting that climate change is actually good for the world?
Frankly, does anyone have business in fact checking at this point? Who checks the fact checkers? I can't think of any source that hasn't repeatedly bungled information over the past year. What we have is consensus checking, at best.
Well… I’d answer, but how many downvotes do you want me to get?
If fact checkers are operating with political bias, the people that call it out will be the ones who feel they’re targeted for their beliefs. They aren’t going to police themselves.
The problem everyone is dancing around by trying to say Google, Twitter, or Facebook is a common carrier is that they filter and weight what you see anyway.
It’s literally the service Google provides.
If they could decouple the information store from the display, and allow third parties to algorithmically filter and sort, then we could designate them as common carriers for the first half. Maybe.
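Concretely, that split might look something like this sketch (all names hypothetical): the platform exposes only neutral storage and retrieval, and ordering is a pluggable function chosen by the user or a third party.

    interface RawPost {
      id: string;
      author: string;
      body: string;
      postedAt: number;
    }

    // The "common carrier" half: dumb storage and retrieval only.
    interface NeutralStore {
      postsFrom(authors: string[], since: number): Promise<RawPost[]>;
    }

    // The display half: any third-party ranker the user opts into.
    type Ranker = (posts: RawPost[]) => RawPost[];

    async function renderFeed(store: NeutralStore, follows: string[], rank: Ranker) {
      const raw = await store.postsFrom(follows, Date.now() - 86_400_000); // last 24h
      return rank(raw); // the chosen ranker, not the platform, decides the order
    }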
This depends how you interact with the site though. "fake news" warnings will appear on links even if you navigate directly to the page of the person who posted it. Even worse, Twitter has blocked certain stories from being sent via direct message in the past. The algorithmic feeds may make up a much larger portion of the practical problem, but I think regardless of where you stand on that issue some of these sites have already overreached with how they moderate personal pages and private messages.
IDK about a warning on private messages. But I do know there are political links and websites you can not send over PM/DM on the big players. So, IDK, is that worse?
I'd rather they try than do nothing. If you aren't on Facebook regularly, you simply don't understand how much crap they do find and cut. Also, the sheer number of groups they have to remove to stem misinformation and terrorist groups like QAnon is insane.
The problem is the poorly-considered but extremely widespread epistemology that goes something like "whether a claim is true depends on how convinced certain people are of that claim."
> Once you create a fact checking entity, it will decide for itself what are object truths.
No, it will just make claims, which may be true or false. There's nothing mysterious or troubling about this. They're not "deciding" what's true.
It is exactly what they are doing. You are playing with semantics. The public as well as private companies use the claims as an argument for validation of truth.
The troubling aspect is it comes with some level of authority backing the claims. There are consequences for not aligning with the positions of the claims.
It very well may be an explanation of the truth, and there's nothing wrong with that. If that's what you mean by "validation" then that's great! But if you mean that whether a claim is "valid" depends on any one person or source's position on the claim, then you're back to that bad epistemology I described.
> But if you mean that whether a claim is "valid" depends on any one person or source's position on the claim, then you're back to that bad epistemology I described.
No, I'm not asserting that view. However, what you are stating I think is the viewpoint of most people. They will accept the claim as valid.
> However, what you are stating I think is the viewpoint of most people.
Do you think that most people think that whether a claim is true depends on Facebook's stance on that claim? That would be extremely shocking to me. My impression is the opposite: that there is mainstream repulsion to Facebook (and Twitter, etc.) being so bold as to take any stance on factual issues.
People do leverage Facebook's claims when they align with their own. You are probably right in that it doesn't directly change anyone's opinion that disagrees.
However, it does reinforce confirmation bias. Those who align with Facebook's viewpoint will be less likely to listen to other users' opposing viewpoints, as they feel emboldened that their viewpoint has some official support.
So I think it has an indirect effect in that it weakens user to user influence.
> Once you create a fact checking entity, it will decide for itself what are object truths
okay, so what? What do you think a court does? The guy who reads the meter at the water sanitation plant or someone who hands out parking tickets?
A fact checker is nothing but an institution with the authority to adjudicate on some questions which are relevant for the maintenance of the platform, no different from any other authority that manages public conduct. Where is the problem?
And of course fact-checkers are not unaccountable. Like in this case their behavior itself is topic of public discourse.
The view of "if we just implement it right" it will be fine is a fallacy.
These organizations represent immense power through influence. From their very existence arises the conflict between power and truth. If you can have influence over the fact checkers, then you have influence over truth. Who would you trust to form such an organization? Where is the oversight? Who monitors the monitors?
We are seeing what "reasonable censorship" actually looks like. It is an oxymoron.
>The view of "if we just implement it right" it will be fine is a fallacy.
Good thing I didn't say that then. The original comment that I was challenging said there was no such thing as fact checking. "Fact checking is hard" is not an argument in support of the idea that fact checking doesn't exist as a concept.
>We are seeing what "reasonable censorship" actually looks like. It is an oxymoron.
What is your definition of censorship? Because there is plenty of speech that I think is reasonable to censor. Obviously there is the illegal speech. Should Facebook be forced to host threats, defamation, copyright infringement, child porn, etc? What about the speech that is not illegal but is objectionable in some way? Should Facebook be forced to include hardcore porn in people's feeds if someone posts it? Once we establish that not all speech is appropriate in all contexts, censorship sure starts to seem reasonable. We just don't call it censorship because of the negative connotation. We call "reasonable censorship" moderation.
Yes, hosting what the law forbids is not an option. However, in most cases the law should handle it: the post remains unless some legal action is taken, since the host often can't know the legal status except in the more obvious cases like child porn.
It is censorship when it is out of the control of users. It is filtering when the user has a choice.
Yes, users want to see or not see some categories of content. That certainly is not a problem when it is presented to the user as a choice.
There are platforms that operate mostly in this way, like MeWe.
There is objective truth. The earth is objectively not flat. There are people who don't agree with that (or at least who say they don't), but the earth is objectively not flat, whether or not some people don't agree. Not only that, it's provably not flat. So those who don't agree, well... they prove the contrariness of human nature, I guess ¯\_(ツ)_/¯
For all intents and purposes there is always an objective truth; the hard question is when you consider it proven.
Even for the flat Earth theory it's hard to dismiss it offhand. I mean sure there are plenty of experiments you can do to show that it is pretty close to a sphere. But who do you trust to do those experiments?
Either the person fact-checking needs to do the experiments/research themselves (which doesn't scale well) or they need to trust other people to have done the experiments. However at that point you're simply placing the word of some people above the word of others, which is not objective at all.
And even assuming you've actually picked honest people acting in good faith you're still relying on people to not be confidently incorrect, which I can fairly confidently say is always going to go wrong at some point.
> Even for the flat Earth theory it's hard to dismiss it offhand.
> I mean sure there are plenty of experiments you can do to show that it is pretty close to a sphere. But who do you trust to do those experiments?
All the various people who have done them, some of which can be repeated by anyone motivated enough, and all the technology that relies on it being the case. This was known to the ancient Greeks; Eratosthenes even calculated the circumference to reasonable accuracy.
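As an aside, that ancient calculation is simple enough to redo on the back of an envelope. A minimal sketch: the figures below are the commonly cited historical values, and the stadion-to-meter conversion is itself uncertain, so treat the result as approximate:

    # Eratosthenes (~240 BC): at noon on the solstice the sun was directly
    # overhead in Syene but cast a ~7.2 degree shadow in Alexandria, due
    # north. The distance between the cities is then 7.2/360 of the whole
    # circumference.
    angle_deg = 7.2            # measured shadow angle in Alexandria
    distance_stadia = 5_000    # Alexandria to Syene, as surveyed then

    circumference_stadia = (360 / angle_deg) * distance_stadia
    print(circumference_stadia)                  # 250,000 stadia
    print(circumference_stadia * 157.5 / 1000)   # ~39,375 km if 1 stadion ~ 157.5 m
                                                 # vs. the modern ~40,075 km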
The Earth is objectively and unironically flat from the point of view of some, in fact many, narratives.
Certainly the architectural drawings of my house do not include a "bulge" in the basement due to heretical sphericalness.
All discussion of my basement floor being flat to within about 1/2 inch needs to be censored by big brother to save us all from free and independent thought.
The purpose of arguing about what should be censored is to distract from the argument about whether there should be censorship at all. Classic divide and conquer strategy. Nobody ever gets asked whether there should be concentration camps at all; they only argue about competing paint schemes and honorary mottos.
The problem is that almost any positive statement can be shown to be objectively wrong: "F = ma", "x^2 = -1 has no solutions", "light travels in straight lines", etc. There are always conditions under which those are true or are good approximations for a specific purpose.
Spelling out those applicability conditions explicitly is critical in research publications (and even there, only for the topic under investigation). In all other contexts such pedantry is impossible. My 2c.
No, that just means that some people ignore or discount objective facts for ideological reasons. The Earth is roughly a sphere, and that is empirically provable.
The arbiter should be well established science, history, math and logic. Of course there is still room for plenty of debate where the matter is not already settled by overwhelming evidence. That's where there shouldn't be an arbiter. But for factual matters like the Earth being (roughly) spherical, there's no actual debate. Just belief that goes contrary to the evidence.
Facts don't care about feelings. Feelings do care about facts.
Fact: the Earth is an oblate spheroid (roughly round) undergoing human-caused climate change.
Example contrary feeling: I live in the arctic circle, it's cold and all I can see is a flat ice-scape. How can the Earth be warming and round when this is all I experience?
There are at least two facts that are creating cognitive dissonance here:
1) I feel cold.
2) I can't see the whole Earth from the surface.
What we feel guides us to question the world (which is healthy), but we shouldn't cling to what we feel is right when we can prove our feelings wrong. It turns out that being cold at one point on the Earth has nothing to do with the global average temperature rising. Also, we are living on a giant rock so much larger than us that it locally feels flat, and we can only see so far before our vision is impeded by the atmosphere.
Flat-Earthers are wasting all of our time IMO because there is a lot of well established evidence to prove the Earth is round (ish) and much bigger than us. It's even easy to visually verify if you spend time/money to go high enough to see it for yourself.
The real issue here is that the facts are more complex than just repeating what "feels" true. Humans (much as all animals) are lazy and often don't pursue rigor.
It is really a fact. It's a provable fact. It's a fact that was proven thousands of years ago. Here's a whole-ass section on Wikipedia with evidence[0].
Not sure if that’s a good comparison: 1) the lab leak theory is only incidentally political since the two sides decided to make it into yet another hot button issue; 2) the media did cover the emails story extensively and it was never prematurely “fact checked” as false.
Well, the obvious response to that is that at one point in human history the "objective truth" was that the Earth was flat... you could literally be killed for denying it.
There is great danger in giving one group of people the power to choose what is "objective truth".
Everyone loves to mock the flat Earthers, but I believe the theory deserves more respect.
For the typical person, who roams around on some small patch of the Earth's surface, but who isn't launching rockets or traveling between continents, the flat earth theory is more useful than the spheroid earth theory. When you walk a short distance, your path's deviation from planar geometry due to the earth's global curvature is orders of magnitude smaller than the deviation due to local surface roughness, and hence the global curvature is irrelevant in practice. Let's say you walk 5km (about 3 miles); the difference between your path as predicted by the planar-earth theory versus the spherical earth theory is about one tenth of a millimeter. For 99% of what we do, invoking the spheroid Earth theory would be like using general relativity to model a baseball's trajectory in the presence of air resistance.
And if you happen to be far from the earth, on a length scale greater than about 10**9 km, then the point-earth theory is probably going to be the most useful model to you.
In short, it's all a matter of perspective. "Objective truth" isn't a thing; all that matters is how effective your model is at describing the properties that are relevant to you.
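For what it's worth, the 5 km figure above checks out. A minimal sketch, assuming a perfectly spherical Earth of radius 6,371 km: the along-surface arc s and the straight chord differ by roughly s^3 / (24 R^2).

    import math

    R = 6_371_000.0   # mean Earth radius, meters (spherical approximation)
    s = 5_000.0       # distance walked along the surface, meters

    theta = s / R                          # angle subtended at Earth's center
    chord = 2 * R * math.sin(theta / 2)    # straight-line "flat earth" distance

    print(s - chord)            # ~1.3e-4 m, i.e. about a tenth of a millimeter
    print(s**3 / (24 * R**2))   # the small-angle approximation agrees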
"When people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together." - Isaac Asimov
Well, it demonstrates the problems and downsides of the entire idea. It doesn't necessarily prove it's futile, and it doesn't even necessarily prove it's a bad idea, but those of us who pointed out the problems are proven right that they are in fact problems...
> Social media platforms like Facebook and Twitter are the carrier, and should be treated as such.
They are not "carriers" nor are they "publishers", because those concepts aren't legal terms outside a very narrow field of law and the imagination of the tech community.
Besides, "carriers" aren't as free from intervention as you make them out to be. AT&T can, does, and sometimes is required to block spam or fraudulent calls, for example.
-Facebook isn't legally required to fact check.
-The CDC position is sufficiently controversial that this is well within the margin of error we'd expect of a fact checker.
-AT&T isn't amplifying your phone call to millions by selective algorithms that enhance the most controversial and hateful messages
-Carriers are heavily regulated, in part because they enjoy natural monopolies (in AT&T's case, only so many companies should be digging trenches and connecting physical wirelines into homes and businesses)
This argument I see everywhere, "Facebook amplified your voice, a phone line does not", misses a key point: Facebook chooses to amplify voices. All any of us really wanted was for our friends to see what we say, if they choose to see it. So really, Facebook and similar entities are creating the problem deliberately and then claiming they're entitled to solve it by placing scarlet letters around their site.
The submitted postimg.cc screenshot had the following cdc.gov link [0], which appears to be flagged as false information. The following text was on that CDC page:
> After December 31, 2021, CDC will withdraw the request to the U.S. Food and Drug Administration (FDA) for Emergency Use Authorization (EUA) of the CDC 2019-Novel Coronavirus (2019-nCoV) Real-Time RT-PCR Diagnostic Panel, the assay first introduced in February 2020 for detection of SARS-CoV-2 only. CDC is providing this advance notice for clinical laboratories to have adequate time to select and implement one of the many FDA-authorized alternatives.
> Visit the FDA website for a list of authorized COVID-19 diagnostic methods. For a summary of the performance of FDA-authorized molecular methods with an FDA reference panel, visit this page.
> In preparation for this change, CDC recommends clinical laboratories and testing sites that have been using the CDC 2019-nCoV RT-PCR assay select and begin their transition to another FDA-authorized COVID-19 test. CDC encourages laboratories to consider adoption of a multiplexed method that can facilitate detection and differentiation of SARS-CoV-2 and influenza viruses. Such assays can facilitate continued testing for both influenza and SARS-CoV-2 and can save both time and resources as we head into influenza season. Laboratories and testing sites should validate and verify their selected assay within their facility before beginning clinical testing.
I'm personally not sure what to make of all this; the CDC link doesn't seem controversial, and perhaps flagging it as false information was done in error? But screenshots are lame, so I figured I'd repost it as a link along with the text for convenience.
I think the problem is that Facebook doesn't actually fact-check anything. They abdicate that responsibility to third parties that they designate as "fact checkers", and the designated fact checkers include some organizations who may not have an entirely firm grasp on what's true.
> This is the underlying problem we're dealing with.
What's the underlying problem? How are we dealing with it? As someone who does not have and has never had a Facebook account, I have very little context to understand what you are talking about.
> CDC encourages laboratories to consider adoption of a multiplexed method that can facilitate detection and differentiation of SARS-CoV-2 and influenza viruses
This was used to claim that many positive test results were not covid cases, but flu cases.
That _isn't_ the case: the CDC is now recommending tests that can check for multiple infections rather than just one.
But the wording does leave it open as a possibility; on first reading it may even seem a sensible assumption.
So the problem is complicated:
1) An official and authoritative source posted wording that explains what's changing but is ambiguous about what the status quo is.
2) Amid that ambiguity, people are posting information that is not true, using this as a source.
3) Facebook's response is the "Fake News" label, when the source, whatever its faults, is neither fake nor news; it's just having meaning attributed to it that isn't there.
> 3) Facebook's response is the "Fake News" label, when the source, whatever its faults, is neither fake nor news; it's just having meaning attributed to it that isn't there.
I would also be unsurprised to learn that this is something like an original conspiracy post being correctly flagged as fake news and subsequent posts with the same URLs either being auto-tagged or suggested as the same to a reviewer who is almost certainly overworked and underpaid.
All of the big tech companies like to rely on automation to avoid hiring more people, and it's really easy to imagine this being repurposed infrastructure originally built to quickly block things like spam or malware links, where the presence of a particular URL does in fact mean the post is highly similar.
I see a picture of a link with a "false information" notice. What are you seeing that I am not that causes you to interpret all of this ostensibly harmful missing context?
Sure, you can use this fact to make a misleading claim. But literally any piece of information can be used in a harmful manner. I guess we need someone to round off the points on all of our scissors so we can't hurt ourselves?
Yes, I know what you're talking about, but it has nothing to do with the post. Someone posted a link to a CDC page on Facebook, and Facebook flagged it as misinformation.
Maybe someone, somewhere used this link to argue something misleading, but as I said before, you can do that with any fact.
Oh, I'm not defending labelling the link as fake news, and my post wasn't an argument in favour of this being OK. Just adding an example for the poster and any other readers who haven't seen the "context problem" first hand.
> I see a picture of a link with a "false information" notice. What are you seeing that I am not that causes you to interpret all of this ostensibly harmful missing context? Sure, you can use this fact to make a misleading claim. But literally any piece of information can be used in a harmful manner.
That is the gist of the problem.
It's not quite entirely true: taking a quote from an astronaut about how the ground on the moon didn't feel like anything on Earth isn't going to convince anyone that the moon is made of green cheese unless they already thought so. But that is the general problem.
If I understand correctly it means that the old tests tested only for covid and the new ones test for covid and influenza at the same time. This makes them more efficient because you won't have to test twice to detect each virus separately.
That's not why they are decommissioning this test, though. There's no reason this test and a covid/influenza test cannot coexist.
CDC is explicitly decommissioning this test, and it's not clear why in their announcement.
I'm just trying to gain more context here.
It's pretty easy to see how this could imply that the current PCR test could have issues.
The consequences of this are not neutral, because false positives were treated for Covid. No one knows what the consequences of that misapplied treatment were.
Which wording? The one quoted above? That means "if you are switching tests anyways, consider using one that does multiple things so you don't waste (somewhat) scarce and expensive testing capacity on double-testing"
Anyone flagging this because they think it's an exaggeration, well, I envy you not having direct counterexamples in your immediate family. Paul is almost pitch-perfect explaining posts which real people make in all seriousness.
I don't think it's an exaggeration, and I wouldn't flag it, but I worry about the attitude. Officials are telling the truth about the danger of the coronavirus, but sometimes they do lie! We can't afford to cultivate an attitude that only raving lunatics accuse government agencies of lying.
No, but I think it’s important to do that by relying on reason and evidence. Anyone known to repeat lies is unlikely to be right about deep secrets, just as the Snowden leak didn’t validate the people convinced that the government was reading your emails on behalf of the Trilateral Commission and aliens in Area 51.
I mean... didn't it? One of the things Snowden revealed was an international conspiracy called Five Eyes, where intelligence agencies across the Anglosphere coordinate to covertly guide world events in their preferred direction, leaning on each other to handle things such as domestic espionage which would be illegal in their home countries. I'm not sure we can mark this as a loss for the conspiracy theorists, even if they were wrong to think the Trilateral Commission was involved.
I don’t think it validated anyone: they didn’t contribute any new facts about the groups, reasons, or methods. The fact that there was a government spy agency spying doesn’t mean any of those details were less wrong or that the conspiracy theorists are trustworthy sources of information about anything in the future.
Part of what makes conspiracy theories spread well is mixing in real details to make the invented parts sound reasonable. Five Eyes and ECHELON weren't secret, so they were great for making it sound like you knew what you were talking about; things like PRISM, MUSCULAR, and XKeyscore were secret, but none of the conspiracists had detailed anything like them.
This is the same dynamic we’re seeing now with antivax and QAnon types digging through things like CDC bulletins. They can use details to sound like they’re more informed and find odd phrasing or terms of art which can be confusing to normal people in a way which supports their narrative. Imagine how it’d sound if someone started writing stories about a dark cannibal-murder conspiracy among Linux admins forking and killing children, based on someone’s mailing list posts about troubleshooting some C code. If they correctly identified fork(2) and you knew that Reiser had actually killed someone, would that make you trust them about anything because a few details could be stretched to cover something real?
More often than not, they aren't "lying". It's presented that way, but there's a real challenge in explaining policy to a hostile audience. I mean, that's the job, so not much pity for them, but most of the time it's an oversimplification (a classic form of propaganda). Their incentives are such that it is way easier to tell the truth.
Huge chunks of the American public are refusing to take a vaccine that could end the pandemic and save tens-to-hundreds of thousands of lives because people are lying to them on forums like Facebook (also right here on HN) and telling them it's dangerous.
I think you misunderstand. It's not about whether one can easily imagine it being used to create a false narrative, but whether those false narratives are being broadly deployed. In other words, it's an empirical question, not a theoretical one.
Is it? I don't think "people sometimes make wrong arguments" is a problem Facebook should attempt to solve, or could solve if they did attempt it. The problem of fake news is fabricated nonsense, not arguments which end up being flawed upon further thought.
I assume it has been claimed on FB that this is proof that the flu didn't disappear but just got misdiagnosed as Covid, and/or that the Covid test results couldn't be trusted at all, hence they have been withdrawn.
> Isn’t implying that the test can’t tell the difference between flu and covid?
They are strongly convinced that this is the case, quoting papers or research showing that in some rare cases, with some methods, tests could confuse Covid with other coronaviruses (not flu), completely ignoring that this might not apply to the tests they are speaking about, or that the probabilities are negligible.
I had to realize that the majority of people (who haven't studied something where they had to learn it) do not understand stochastics at all, but believe they have some basic understanding of probabilities (which they don't have).
Furthermore, it's not uncommon for people to reject truths which do not fit into their world view.
Not too long ago I realized that a relative of mine does not understand the difference between false positives and false negatives, and that the two are not equally likely. They knew the (fast, non-PCR) test can be wrong, and so believed it can be equally wrong in all directions, including false positives. So because it was confirmed that the tests have many false negatives, they believed the tests must also have many false positives, and as such the increase in cases must be because of these false positives...
I tried to reach out and explain to them that false positives and false negatives are not the same, and that just because something is likely to fail in one way, it doesn't mean it is also likely to fail in another way. But I completely failed to reach them; they insisted on their opinion being correct and went very quickly into a defensive mode where it's impossible to reach them no matter how reasonable and convincing your arguments are.
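To make the asymmetry concrete, here's a toy calculation. The sensitivity, specificity, and prevalence numbers are invented purely for illustration (roughly in the range reported for rapid antigen tests), not figures for any real test:

    # Sensitivity and specificity are independent properties of a test:
    # lots of false negatives says nothing about the false positive rate.
    sensitivity = 0.70    # P(test positive | infected)      -- assumed
    specificity = 0.995   # P(test negative | not infected)  -- assumed

    population = 100_000
    infected = 5_000      # assumed true prevalence of 5%

    false_negatives = infected * (1 - sensitivity)
    false_positives = (population - infected) * (1 - specificity)

    print(false_negatives)  # 1500 infected people the test misses
    print(false_positives)  # 475 healthy people wrongly flagged

Same test, very different error counts in each direction.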
Yes, it does state that. However, there is the word "and" between detection and differentiation.
The original could not detect flu, so differentiation is a moot point.
Also, the test for flu was a separate test, so it would have had no effect on the lower flu incidence, assuming we didn't test for flu at a lesser rate.
>assuming we didn't test for flu at a lesser rate.
Do we have any numbers on this? I would imagine a lot of people who went to the doctor or whoever complaining of flu like symptoms would have been tested for covid.
They've archived past weekly reports going back to 1999-2000 (not sure if that's when the weekly report started or just when the archive starts), although the formatting and specific information has changed over the years. https://www.cdc.gov/flu/weekly/pastreports.htm
It does not imply that. It would be effectively impossible for that to happen. PCR testing is a crude form of genetic test. Influenza viruses and coronaviruses are not closely related. It's more likely that a genetic test would confuse influenza for ebola (another negative sense single strand RNA virus) than for a coronavirus infection.
> If that’s true, then, how can we be sure that flu hasn’t been misdiagnosed?
Because I haven't seen anything on Facebook, or self-censoring HN for that matter, validating those claims. So they must be not only false but really out-there wacky conspiracy theories.
People were spreading misinformation about the link. Just to be clear, there is no recall. The test is not defective. The test can be used through the end of the year. The CDC recommends using tests that can detect flu as well as COVID.
The current PCR test from the CDC does not detect flu at all, nor other coronaviruses. The test is very specific. See pages 40 onward in the Instructions for Use[1] if you are interested. The test sequences were compared (by computer) to hundreds of thousands of genomes of flu, the human genome, and other organisms, as well as tested on physical specimens.
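To give a flavor of what that in-silico comparison amounts to, here's a toy sketch of checking whether a primer-like sequence occurs in a genome. This is a naive illustration only; real assay design uses alignment tools like BLAST against full genome databases, not brute force:

    # Naive specificity check: scan a genome for near-matches to a primer.
    def primer_matches(genome: str, primer: str, max_mismatches: int = 2) -> bool:
        k = len(primer)
        for i in range(len(genome) - k + 1):
            mismatches = sum(a != b for a, b in zip(genome[i:i + k], primer))
            if mismatches <= max_mismatches:
                return True
        return False

    # A primer specific to one target should not match unrelated genomes.
    print(primer_matches("ACGTTTAGGCCATTACGA", "TTAGGCCA"))  # True: target present
    print(primer_matches("GGGGCCCCAAAATTTT", "TTAGGCCA"))    # False: no near-match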
Here is a sample of messages that were shared on Reddit:
> The FDA announced today that the CDC PCR test for COVID-19 has failed its full review. Emergency Use Authorization has been REVOKED. The FRAUDULENT PCR Test has finally been ruled a Class I Recall. This is the most serious type of recall. All measurements based on PCR Testing should come to an end ASAP:
> CDC retracts PCR test as it can't differentiate between COVID and the flu. It was all a Lie.
> CDC pulls PCR tests because they can’t distinguish Covid from the flu. We finally caught the SOBs who have lied to us about the coronavirus numbers the last year.
> The Pandemic narrative is unravelling... The FDA announced today that the CDC PCR test for COVID-19 has failed its full review. Emergency Use Authorization has been REVOKED. It is a Class I recall. The most serious type of recall. Too many false POSITIVES. This is the test that started the pandemic!
> CDC to Withdraw Emergency Use Authorization for PCR Test Because It Cannot Distinguish Between SARS-CoV-2 and the Flu.
> Watch the panicked minority bombard the comments with establishment talking points to try and refute this tweet.. PCR is being rescinded as a test, vaccinated people are filling hospitals, and our government officials are still urging us to get tested using PCR and get jabbed with the Covid vaccines. Alrighty then.
There are people distributing false information who use extremely clever, highly manipulative methods which more often than not show a clear understanding of the matter in question. The degree of understanding of both the topic and of how to subtly manipulate people suggests they must themselves be clearly aware that they are spreading false information.
One method often used is to take some "official" article, paper, etc. which targets not a general-public audience but an audience with specialized knowledge, twist what it supposedly says, and spread it to people who do not have that specialized knowledge, knowing that, given the combination of pre-fed misinformation and non-understanding of the content and its complex language/terminology, their targets will see it as proof of the false information, even if it might prove the exact opposite.
Examples include:
- this link
- a case where two very different statistics from the CDC (from different sources) used the same label for a graph, with similar-looking but very different data
- cases of studies showing that e.g. some vaccine is not perfect in some very specific situations being taken as proof that it's not working at all
- same for tests etc.
- court cases won on formalities being taken as proof that vaccines don't work (e.g. an anti-vaxxer creating a bounty for proof that they were wrong, but intentionally formulating the submission requirements so subtly that no submission could ever be valid; they then won the court case about not paying out the money, even though a submission had proven them wrong)
A common fallacy of the mind is jumping from "this solution is not perfect" to "this solution MUST be replaced" to "a solution which is perceived to be the opposite is therefore correct".
Like: "this test has wrong results in rare cases" => "this test cannot be trusted at all" => "all tests of this kind are untrustworthy" => "there is no need to test because all tests are fake anyway" => "covid doesn't exist".
As a side note, I have seen that kind of thinking too often in IT too, like: "this library turned out imperfect once we knew more about it" => "we must replace this library" => "this other library, which used the first one's imperfection in its advertising but about which we know nothing, is correct and must be used".
I believe it's of utmost importance to improve science-to-general-public communication and to augment all kinds of reports, articles, and papers with a section targeting the general public. As well as rehabilitating state institutions focused on providing easily understandable content and explanations, bound by law to objectivity and transparency, with heavy penalties for politicians, other state bodies, or any institution that tries to influence them.
It isn't just the social media giants. It applies to most if not all the corporate giants. Google's search and YouTube, Apple news, Netflix, Disney, etc...
I think I would restrict it to "X narrative, where X is chosen from the spectrum of acceptable opinion dictated by the appointed board of directors of USA, Inc.".
I mean, once they are in the business of fact checking, I don't see why any specific source would be off limits.
On the other hand, I think this shows the absurdity, especially of labeling things as inaccurate without saying what specifically you are calling inaccurate, because how would you know if the fact check is valid or not?
Right, if they have to fact check anything, it seems like they'd have to fact check everything. And if Facebook is capable of and required to fact check every single thing on its platform and arbitrate the truth, they'd have to be some kind of omniscient truth knower.
If they aren't in some privileged position of truth-holding, all you've got is Facebook applying its narrative layer on top of everyone else's narrative layer. You're exactly where you started.
I guess a better way to say this is that I think the idea that Facebook is capable of arbitrating the truth is at its core, philosophically, a nonsense proposition.
My wife tried to hide her age but, not being terribly Facebook savvy, changed her DOB to the current date.
She was immediately blocked from Facebook. She appealed and gave her proof of age. Still, Facebook wouldn’t budge.
Facebook's automated mechanisms suck. If they can't tell that someone who has been on the platform for over a decade, with photos showing she is a middle-aged woman, married with two kids, is not, in fact, several days old, then they have no business fact checking posts from the CDC!
The simple fact is, I have massively decreased my use of Facebook in the last 6 months. I really feel that for the health and safety of everyone it might be a good idea for others to do so also.
An irony here is that Facebook is now censoring the CDC, when it previously claimed that it was censoring using the guidelines of the CDC.
Another irony is that the White House said that it was flagging comments for Facebook fact-checkers just last week:
“We’ve increased disinformation research and tracking within the Surgeon General’s Office. We are flagging problematic posts for Facebook that spread disinformation.”
Apparently this now includes posts from another part of the executive branch.
The same press secretary claims vaccines have gone through FDA approval when none of them have:
> "This is what they’re — what the information — some of this misinformation is doing — and misleads the public by falsely alleging that mRNA vaccines are untested and, thus, risky, even though many of them are approved and have gone through the gold standard of the FDA approval process."
None of the vaccines have gone through "the gold standard" of the full vaccine approval process. They have only received FDA approval for emergency use and have not received full approval.
Biden is planning to push through full approval this fall.
OK, random thought experiment. As someone without a Facebook account it's hard for me to evaluate, but how much damage would really be caused if Facebook just disappeared one day?
Facebook going down means losing contact with my friends overseas. I myself am studying as an international student, and many of my former classmates are studying in other countries as well. Messenger (and consequently Facebook) is pretty much the place where people in my home country hang out.
Maybe? Messenger is just an instant messaging platform like iMessage or Whatsapp. Technically speaking you can stumble upon group chats that disseminate misinformation, but Messenger doesn't show you group chat recommendations like Facebook.
Facebook has taken up much of the local classifieds business that Craigslist took from local newspapers 15-20 years ago.
For me, the most effective directory of contractors is through Facebook. I hate it, because Facebook does a bad job of building and presenting that directory, but it's the largest and most relevant one.
Would they? If I'm looking for a restaurant, and Facebook suddenly disappeared, I'd still be looking for a restaurant. I just would have to use Google Maps or something to find one.
Absolutely zero. People would find another way to communicate, socialize, or whatever. The loss of Facebook would be a loss of jobs and income for people, but that’s it. Facebook serves no positive purpose in society, and it seems the best thing a person can do, is to delete their Facebook account.
Zero damage? No way. Millions of people would lose contact with friends and family that they have no other connections with. Companies that depend on Facebook advertising would disappear. People would be locked out of OAuth accounts on thousands of websites.
> Millions of people would lose contact with friends and family that they have no other connections with.
Not at all.
> Companies that depend on Facebook advertising would disappear.
Good.
> People would be locked out of OAuth accounts on thousands of websites.
OK? If you were federating your auth via Facebook it was only a matter of time until you were locked out for one arbitrary reason or another. This just seems like an acceleration of a good thing.
> It would be catastrophic.
From your perspective, I can see that. But there are other perspectives out there that are not completely beholden to Facebook.
Some people rely on their relationships with other people. They build support systems, form groups, and take comfort in the company of others. Being cut off suddenly and with no means to reconnect would isolate many people. This would be extremely damaging from a mental health perspective alone.
In addition, so many people have reconnected with old friends, family, and teachers. One friend of mine used Facebook to find her birth parents. That avenue would disappear completely.
I don't know why you would say it's "good" for these companies to disappear. These aren't megacorps. It's small, local companies that rely on FB advertising. Bed and breakfasts, wineries, anything that depends on tourism. They're always hit the hardest. Walmart isn't going to care if they can't advertise on FB anymore.
Your comment shows a lack of understanding of, and compassion for, your fellow human.
Actual research shows the opposite effect on mental health from what you are saying.
When you don't have Facebook, your friends and family just text or email. I mean, Facebook hasn't even been around that long. To pretend it is something like water or electricity is just preposterous.
While I agree that FB disappearing wouldn't be that big a deal, the idea that email is an "obvious" alternative is quite problematic for the younger folk.
Data point: In 2009, the admin assistant in my department at university told me she'd been getting quite a few upset incoming freshmen because of the requirement to have an email address for university classes - many of them had never used email before, and had communicated only via text and IM apps. This was over 10 years ago.
I agree email isn't the best for social communication, but I find it silly young people would get upset about needing an email address. It seems pretty obvious to me that an email is required for a number of basic adult functions, especially if one doesn't want to make a lot of phone calls. I was in high school in 2009 and my entire peer group was already using email for things like summer job applications and booking appointments.
Also, I'm surprised the university wasn't assigning them .edu emails anyway. My younger brother is in college now and he loves the school email for getting student discounts.
They were assigning them .edu addresses. Their complaint was more about the school requiring them to use email (e.g. to get updates from classes, registration, etc).
I'm not saying all those who complained did not have email - likely many/most did. They didn't like being required to use it. However, there were quite a few who claimed they had never used email. I suspect that they likely had, but only for a few formal things (university applications, etc) - something they used perhaps once every few months. They probably didn't like the idea they'd have to check their emails so frequently.
Many do not text or email, they use Facebook's messaging system. They would not even have the information required to connect elsewhere.
There is also a huge difference between people making the decision to stop using social networking, and being suddenly and unexpectedly cut off from a support network. The former can have positive effects on mental health, the latter will certainly not.
I say this as somebody who deleted their Facebook account over a decade ago.
You've managed to ignore all possible context and depth in your sarcastic answer.
The question is not "Could Facebook be replaced?". The question is "What happens were it to disappear one day". They're different questions with very different answers.
I do not think FB is a "government agent" from everything I've read. They operate within government rules and regulations, and they're certainly going to listen to the government, but listening is normal for any company. They can still do what they want.
It's getting kind of embarrassing to be a computer programmer when supposedly the best programmers in the world at Google, Facebook, and Amazon can't fix simple problems and perpetrate junk on the world. A simple thing like: if the URL is cdc.gov, don't show the warning. Oh, and we don't know how to fix the mermaid problem in Norway. Sorry. https://news.ycombinator.com/item?id=27991322
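For what it's worth, the naive check the comment describes is only a few lines. The domain list and function name here are hypothetical, and of course a real system can't be this simple, since a trustworthy link can be shared with a misleading caption:

    from urllib.parse import urlparse

    # Hypothetical allowlist; illustrative only, not Facebook's actual logic.
    TRUSTED_DOMAINS = {"cdc.gov", "who.int", "fda.gov"}

    def should_suppress_warning(url: str) -> bool:
        host = (urlparse(url).hostname or "").lower()
        # Match the domain itself or any subdomain of it.
        return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    print(should_suppress_warning("https://www.cdc.gov/some-advisory"))  # True
    print(should_suppress_warning("https://cdc.gov.example.net/fake"))   # False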
I might be wrong on this, but I think their fact checking partners just go around Facebook and fact check specific links and claims. I don't think a link is flagged automatically and paired with a fact check; at least the few I looked at directly referenced the source for the content that was shared.
Here's a nice article answering your question and explaining the confusion here. It definitely seems like people who want to believe that covid is a hoax are taking this CDC post (which is technical and directed at scientists in testing labs) out of context and misinterpreting it to back up their narrative.
In explaining the CDC’s decision to end the use of its own PCR test at the end of 2021, Kristen Nordlund, an agency spokeswoman, in an email to us cited “the availability of commercial options for clinical diagnosis of SARS-CoV-2 infection, including multiplexed (discussed here) and high-throughput options” — referring to technologies that use an automated process to administer hundreds of tests per day.
“Although the CDC 2019 Novel Coronavirus (2019 nCoV) Real-Time RT-PCR Diagnostic Panel met an important unmet need when it was developed and deployed and has not demonstrated any performance issues, the demand for this test has declined with the emergence of other higher-throughput and multiplexed assays,” Nordlund said.
She continued: “CDC is encouraging public health laboratories (PHL) to adopt the CDC Influenza SARS-CoV-2 (Flu SC2) Multiplex Assay to enable continued surveillance for both influenza and SARS-CoV-2, which will save both time and resources for PHL.”
If I'm reading the post correctly (and I am no expert in the field), they're saying "this test will be obsolete by end of year, as better tests become available". It was an EUA, which is by intention "we wouldn't approve this so quickly except this is an emergency", so it's not surprising that as better tests were developed they would withdraw it.
Given that they were withdrawing it with several months' advance notice, I doubt there was anything massively wrong with it.
The autocurated feeds are the main engine of the business, which is competing against the other apps for the user's attention (and perhaps also against all other aspects of the user's life).
These feeds are optimized for "engagement" first. That's business.
So, now, trying to fix the problem of false information, the arsonists have joined the fire brigade.
Facebook and other social media groups should not be allowed to automate moderation of their platforms. All content should be inspected by a human, just like the underwear I’m wearing. All users that sign up should be verified to be adults who they claim to be.
2020 was the year where tech companies went into censorship overdrive. They are not afraid of the consequences and I believe this is because they have somehow captured the government in a very significant way.
Most misinformation is at least in part founded on truth; it's just that some of it is left out, some is changed, and a lot of BS is added.
The CDC links forwarded on Facebook contain the misinformation in the accompanying description; very few people actually click on the link and read it.
Facebook simply used their system to add a clarification.
The misinformation being spread says that the CDC banned RT-PCR because it confused flu with covid, when in reality the FDA approved new tests that check for both, to save time and resources as flu season approaches.
Fact checking is just so Orwellian; how is it even possible that intelligent-seeming people have fallen for this trick? And what happens when the media lies, such as when they told us a year and a half ago that "masks actually spread COVID"?
This is all so unbelievably ridiculous. I no longer trust the authorities and fact checking is just making it worse.
Hypothesis: The accompanying text in people's posts sharing the link got flagged, but Facebook's system doesn't differentiate between that and flagging the link itself. It just sees the link being repeatedly flagged and failing whatever fact-checking procedure they have.
Realizing that most arguments made here are perfectly valid in either case, it strikes me as ironic that nobody seems to question the authenticity of the image that spawned this discussion in the first place.
You know for all the crap Google gets at least they will show up on a lot of Google posts here and say "Yeah that sucks" when their organization messes up. Facebook? Nada.
The first problem is that people would get their news from a place like Facebook... That's like taking stock market advice from the guy that mows your lawn.
Report it; it's an AI goof. That's the only recourse you really have. They can't hire enough people to possibly go through all the crap that the antivaxx contingents pump out, so they have to depend on trainable AI. I'm on Facebook daily and I see that they aren't filtering enough of the junk that is nothing less than made-up lies stuck into meme form. They don't even try to make it look real now that they know the uninformed and ignorant of the world will sop up everything they get fed as if it were scientific fact.
There was a point in which the possibility of lab origin was “misinformation” or where accelerating the vaccine timeline to one year was “misinformation” or where face masks being helpful was “misinformation”. And at each stage people look at the current “all smart people agree on these truths” and feel so goddamn certain about banning other beliefs.
I think there's a pretty clear explanation when you look at how this link and the information within it are being used in the anti-vaxx/covid-denial groups:
1. People post the CDC link out of context, claiming that all the infection numbers from the previous year are inaccurate, that testing is useless, that covid doesn't exist, etc.
2. Facebook fact-checks this and marks the content as "False information".
3. The link gets marked as false information in the system, and the popup shows up regardless of context (as sketched below), since they cannot check every single comment written around the link, and even if they could, it would substantially delay marking it as false information.
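Here's a minimal sketch of how that could happen, assuming the fact-check verdict is keyed by URL alone. All names and the URL are hypothetical; this is a guess at the mechanism, not Facebook's actual code:

    from typing import Dict, Optional

    # If the verdict is stored per URL, every later share inherits the label,
    # regardless of what the person sharing it actually wrote.
    flagged_urls: Dict[str, str] = {}

    def record_fact_check(url: str, verdict: str) -> None:
        # A reviewer rates one misleading post; the flag sticks to the URL.
        flagged_urls[url] = verdict

    def label_for_post(url: str, caption: str) -> Optional[str]:
        # The caption is never consulted: an accurate share of a CDC page
        # gets the same label as the conspiratorial share that was reviewed.
        return flagged_urls.get(url)

    record_fact_check("https://www.cdc.gov/some-lab-advisory", "False information")
    print(label_for_post("https://www.cdc.gov/some-lab-advisory",
                         "Useful lab advisory, worth a read"))  # "False information"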
That doesn't matter though, because most people "live" in this new space via Facebook, in the same way that a plurality of people "live" in the physical world via the nation of China (the PRC being the most populous nation). Simply because of that, we cannot discount Facebook's outsized influence.
Even more poignant, Facebook's dominance in this regard is far more disproportionate than China's share of the world population (Facebook has a little under half the world's population as users).
Even though I don't live on planet Facebook anymore, the truth is Facebook heavily affects my life. I don't see how anyone can reasonably argue against this.
Here is a partial historical list of scientific-consensus "deniers" who were proven right, the sort of people this censorship will silence, either directly by big tech or through self-censorship:
1. Ignaz Semmelweis, who suggested that doctors should wash their hands, and who eliminated puerperal fever as a result, was fired, harassed, forced to move, had his career destroyed, and died in a mental institution at age 47. All this because he went against consensus science.
2. Alfred Wegener, the geophysicist who first proposed continental drift, the basis of plate tectonics, was berated for over 40 years by mainstream geologists who organized to oppose him in favour of a trans-oceanic land bridge. All this because he went against consensus science.
3. Aristarchus of Samos, Copernicus, Kepler, Galileo, brilliant minds and leaders in their field all supported the heliocentric model. They were at some point either ignored, derided, vilified, or jailed for their beliefs.
All this because they went against consensus science.
4. J Harlen Bretz, the geologist who documented the catastrophic Missoula floods, was ridiculed and humiliated by uniformitarian "elders" for 30 years before his ideas were accepted. He first proposed that a giant flood raked Eastern Washington in prehistoric times, and he suffered ridicule and skepticism until decades of further research proved his thesis. All this because he went against consensus science. He was eventually awarded the Penrose Medal.
5. Carl Friedrich Gauss, a discoverer of non-Euclidean geometry, self-censored his own work for 30 years for fear of ridicule, reprisal, and relegation; it did not become known until after his death, while similar published work was ridiculed. His personal diaries indicate that he had made several important mathematical discoveries years or decades before his contemporaries published them. The Scottish-American mathematician and writer Eric Temple Bell said that if Gauss had published all of his discoveries in a timely manner, he would have advanced mathematics by fifty years.
All this because he went against consensus science.
6. Hannes Alfvén, a Nobel-winning plasma physicist, showed that electric currents operate at large scales in the cosmos. His work was considered unorthodox and is still rejected despite providing answers to many of cosmology's problems.
All this because he went against consensus science.
7. Georg Cantor, creator of set theory in mathematics, was so fiercely attacked that he suffered long bouts of depression. He was called a charlatan and a corrupter of youth and his work was referred to as utter nonsense.
All this because he went against consensus science.
8. Kristian Birkeland, the man who explained the polar aurorae, had his views disputed and ridiculed as a fringe theory by mainstream scientists until fifty years after his death. He is thought by some to have committed suicide.
All this because he went against consensus science.
9. Gregor Mendel, founder of genetics, whose seminal paper was criticized by the scientific community, was ignored for over 35 years. Most of the leading scientists simply failed to understand his obscure and innovative work.
All this because he went against consensus science.
10. Michael Servetus discovered pulmonary circulation. As his work was deemed to be heretical, the inquisitors confiscated his property, arrested, imprisoned, tortured, and burned him at the stake atop a pyre of his own books.
All this because he went against consensus science.
11. Amedeo Avogadro's atomic-molecular theory was ignored by the scientific community, as was future similar work. It was confirmed four years after his death, yet it took fully one hundred years for his theory to be accepted.
All this because he went against consensus science.
Either way, they play arbiter with no transparency as to how they do it. They mostly outsource the "truth finding", but then they pay the outsourced fact checkers, so it's all a huge mess and simply not what people want them to do.
I don't use FB, but I would not want HN to decide what's true and what's not. If wrong stuff is posted on here, we (the users) figure it out just fine.
Then when you log in, you see a (0) after your user name. Not much else. You can still post.
If you've been posting abusive comments or otherwise breaking the site rules, you may get banned, but that's not triggered by your karma (so far as I know).
And the network effect works in reverse, too: once enough of your social group leaves, the value collapses for everyone who remains.