Hacker News

So we've tried:

- Real identities (Facebook comments)

- Voting and self moderation (Reddit, HN, etc.)

- Strong moderation (Reddit, HN)

They all result in toxic comments, trolling, an echo chamber, or worse, a complete lack of participation. There's no real solution to this problem. However, if you create consequences or a cost to commenting, you'd eliminate toxic comments and trolling at the cost of reduced participation and an echo chamber (though you could argue that not all participation is equal and you're mainly removing poor participants).

There's no perfect way to do this because even if you made a subscription necessary, for instance, you may just create an echo chamber. As part of the solution you'd need to prevent the creation of new accounts to circumvent any punishment received.

I'd say the most straightforward solution is that you have a forum and you get an account. Physical mail is sent to your house in order to get a single account. Then, regular moderation practices would be taken seriously as there's no way to create another. The community would be left with those who care enough to not be banned. The problem is that the moderators themselves may be corrupt or wrong.
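The mechanics are simple enough to sketch. Here's a minimal, hypothetical Python sketch of the one-account-per-mailing-address idea (all names and the in-memory store are invented for illustration; a real system would use a database and deliver the token on paper rather than returning it):

```python
import hashlib
import secrets
from typing import Optional

# Hypothetical sketch: a normalized mailing address maps to at most one
# account, and activation requires a token delivered by physical mail.

_accounts = {}  # address_hash -> {"token": ..., "active": bool}

def normalize(address: str) -> str:
    """Crude normalization so trivial variants of an address collide."""
    return " ".join(address.lower().split())

def request_account(address: str) -> Optional[str]:
    """Issue a mailed activation token, or refuse if the address is taken."""
    key = hashlib.sha256(normalize(address).encode()).hexdigest()
    if key in _accounts:
        return None  # one account per address, ever
    token = secrets.token_urlsafe(16)
    _accounts[key] = {"token": token, "active": False}
    return token  # in reality, printed on a letter, not returned online

def activate(address: str, token: str) -> bool:
    """Activate the account if the mailed token matches."""
    key = hashlib.sha256(normalize(address).encode()).hexdigest()
    entry = _accounts.get(key)
    if entry is not None and entry["token"] == token:
        entry["active"] = True
        return True
    return False
```

One nice side effect of hashing the normalized address: the service never has to store addresses in the clear, only proof that a given address was already used.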

Thoughts?




We really just need better public education, as well as clearer separation between information and entertainment.

Facebook/Reddit/Twitter/etc promote a regression towards the mean understanding... viral content reflects and amplifies the "average user's" sentiment and values. Acceptable for entertainment, but inherently prone to misinformation, propaganda, and demagoguery. Education requires valuing subject matter experts. Opinions which may not be widely held, or even popular, but supported by people who are vetted as being knowledgeable on the subject.

Traditional media could be regulated because they were largely centralized, but centralization also creates an establishment that regulated counter-culture ideas. In contrast, the internet is anarchic. Online anonymity impairs delegation of trust... any idea can be published so every individual must rationally evaluate what they consume. Attempting to regulate away undesirable behavior on the anarchic internet is just cat-herding. At best, you create a walled garden for a select few.

As I see it, the paths forward are either:

* public education, emphasizing civics/rationality, to support distributed self-regulation

* centralizing with state regulations

I want the former but the latter seems most likely, considering how the underlying networks are consolidating, and increasing awareness of how amplified public ignorance creates political/economic instability that hurts those with power.


> We really just need better public education

That's great but it's a slow cultural change. Well-educated countries can still fall into extremism, which is driven by emotional and atavistic factors as well as economic and political ones, and can't simply be dispelled with doses of Rationality (tm). Arguably, the failure of rational utilitarianism to engage with this aspect of humanity and to simply dismiss everything that can't be quantified as irrationality exacerbates the growth of toxicity.

On a more practical level, the US is a country where part of the population rejects the theory of evolution on religious grounds, historical narratives are intensely contested, and political life is objectively and increasingly polarized. Educational change happens over generational timescales, and if it were as simple as making it available all our social ills would have been dispelled long ago.

Of course education and critical thinking skills are essential for a healthy social body, but when I see people saying 'we just need better education' I feel like I'm on a bus that's headed towards a cliff edge and well-meaning people are suggesting that the solution to this is better driving lessons.


My intent was to describe the system (social media is a hyper-efficient anarchic consensus-based information exchange), and a root-cause for undesirable output...

There may not be a solution that preserves the open internet, if this system is fundamentally incompatible with social realities.


"We really just need better public education"

I agree 100% as long as you put me, or people of my worldview, in charge of the curriculum and personnel.


Yes. In today's age of near-universal literacy, "uneducated" is just a euphemism for views disliked by the side in control of the education apparatus.

While thought-policing media, schools, churches, and any other possible venue of "indoctrination" may "work" to a superficial extent, it mostly just completely destroys the credibility of your authority and leads to stunning implosion and destabilization. See: the Soviet Union.

Like it or not, you can't just beat "undesirable" views out of people, either with schools or social media moderation.


Education is not indoctrination.

Education teaches critical thinking, science, history, numerical literacy, and the general skill and toolset to differentiate fact from falsehood, rhetoric, and manipulation-- regardless of where it is coming from.

Education is an immune system for the mind. Generally it is the manipulators who don't like an educated populace, because it decreases their power. They tend to be the ones labeling education as "indoctrination".


>Education is not indoctrination.

Right, in principle, it's agreed that "education" is "good knowledge" and "indoctrination" is "bad knowledge" and/or "fake news".

As long as you think that the inoculations being administered in the school system are valid, you'll call it education. Once you stop thinking that, you'll call it indoctrination.

So you're not really arguing anything. Every side calls training that biases you toward their preferred narrative "education" and training that biases in the opposite way "indoctrination". Is your point that "sometimes people disagree"?


Do you understand the difference between knowledge and critical thinking?


> Education is not indoctrination.

This tends to last until someone realizes that an educational system is a wonderful indoctrination tool to advance their goals. Often enough this is followed by enacting that.

This isn't new. "Give me the child for the first seven years and I will give you the man."


Being able to distinguish fact from fiction is not a skill that is near universal, and it should be. Seems like a straw man to attack "thought policing" and "indoctrination." We're talking about critical thinking and logical reasoning.


This argument is completely disingenuous. Your average person is capable of critical thinking and logical reasoning -- those who aren't are generally wards under the care of another person. Normal people just think logically and critically in reference to local optima, and that's not something that we can or should try to program out of them.

That quality is also known as adaptability and it's crucial to successful survival and prosperity, for exactly the same reason that it's useful in mathematics: global optima are generally difficult to deduce, if they can be conclusively and authoritatively determined at all.

Saying Side X is "not being logical" or "can't think critically" is virtually always just a cop-out. It says you either a) don't understand or b) don't want to admit the validity of some of their concerns.

Most of the time when the other side's argument is understood, the disagreements are a matter of priority and/or credibility, not nonsensical thinking. And those priorities are usually determined intrinsically; values as such can't really be programmed or taught. They're the result of the years of experience each individual has endured in the real world.

A good example of this is that many engineers are known for a just-the-facts, no-frills approach. This is because engineers tend to prioritize facts and correctness over aesthetic and emotional value. Other people who don't do this aren't objectively wrong -- they just put different weights on the considerations, leading them to different conclusions.

Another example is outlet credibility. Your average Fox News viewer may believe that MSNBC is propaganda secretly dictated by the shadowy figures in the background, and vice versa. If you believe this, the logical conclusion is to dismiss or at least discount the perspective of the propagandist.

You cannot "prove" that one side is propaganda and the other side isn't, because it is impossible to definitively deduce the intentions and motives of other people. Reports claiming that MSNBC's stories were more frequently errant are of no value, because you can just say "Oh yeah, says who? The same shadowy figures?" to that.

It is important to understand that humans hold a variety of totally non-falsifiable beliefs -- things that cannot be definitively proven one way or the other, even if you try, like the state of mind of the speakers we're around. These have to be approached at that subterranean level to be understood, let alone addressed.

All we can do is understand that our own perspective is not the default or de-facto correct one, and that other people are entitled to their own assumptions and unfalsifiable opinions just as we are. They're entitled to their own credibility heuristics and decisions about who is worth trusting. People are free to make their own decisions and conclusions, whether we agree with them or not.

Understanding that is critical to learning that it's OK to disagree with people, without having to pretend that they're insane just to preserve your own ego and self-worth.


> All we can do is understand that our own perspective is not the default or de-facto correct one, and that other people are entitled to their own assumptions and unfalsifiable opinions just as we are. They're entitled to their own credibility heuristics and decisions about who is worth trusting. People are free to make their own decisions and conclusions, whether we agree with them or not.

For opinions, perhaps. There are also people who reject facts. I don’t consider rejection of evolution or young-earth views as legitimate. Thus, those who cling to these views are empirically wrong.


>There are also people who reject facts.

Most people don't reject facts, they reject certain interpretations of facts.

For example, some people believed epilepsy came from evil spirits. They didn't deny that the person was shaking on the ground. They just had a different explanation for it than we do now.


Empirically? One suspects a different adverb would have been more correct in that sentence.


Do you contend that human beings have not empirically measured the spherical, or roughly spherical shape of the Earth?


When you build a small house, you don't account for the curvature of the earth. The same goes for walking down the street.

When you build a runway for a plane or a long bridge, you do.

A model is not necessarily useful in all contexts. People still use the flat earth model in useful ways because it's simpler to assume the earth is flat in some situations. Of course, once you go beyond the capabilities of the flat earth model your numbers will wildly diverge into the realm of useless while the round or spherical models provide useful numbers for longer.


If parent had been talking about chemistry or physiology or any subject that can be explored via controlled experimentation, I wouldn't have complained. Instead the topics were geological and evolutionary history, which seem very much not "empirical". Not that I suspect that those sciences are wrong in any sense, but words have meanings.


I misread gp as saying "flat Earth", and not "young Earth". My apologies. I would agree that even if we can point to things like nylonase or the speed of light coupled with known distances to stars, those are deduced facts. Whereas astronauts have empirically observed the spherical nature of earth.


> A good example of this is that many engineers are known for a just-the-facts, no-frills approach. This is because engineers tend to prioritize facts and correctness over aesthetic and emotional value.

And yet engineers are over-represented (compared to people with other degrees) amongst Creationists and conspiracy theorists and, I would guess, terrorists. I think engineers value simplicity and direct causation more than facts or correctness.


No, it's not disingenuous at all. It is not a cop-out to say that people who believe in conspiracy theories, people who don't understand facts, people who are highly opinionated about things they don't understand, etc. are not behaving logically. They can have valid concerns and still be behaving irrationally. You clearly think these are mutually exclusive but they aren't.

Appreciate the multiple snide attacks, though.


> people who believe in conspiracy theories

Is there not such a thing as conspiracy fact? Aren't some conspiracies, in fact, real? It seems both sides of the political aisle have pet conspiracy theories these days, so it's really hard for this to hold water anymore.

> people who don't understand facts

As another commenter said, people will usually agree on the clear and present facts, e.g., Donald Trump won the presidency. Where you'll find more disagreement is on rationale: either he won because he gave a voice to the discontented American working class, or he won because he worked in cahoots with Vladimir Putin to subvert American democracy.

People don't refuse to acknowledge the obvious state of affairs. They have different interpretations, based on different values and credibility heuristics, of the likely impetus for that state of affairs.

>people who are highly opinionated about things they don't understand, etc.

aka virtually everyone. How many of us know enough to hold our own with the experts in something that we're "highly opinionated" on? If we can in anything, it's very narrow. Are all of our other opinions invalid now? Humans use credibility heuristics to try to determine who is right about something, and then they follow based on that.

> are not behaving logically

I dunno, it sounds logical to me, at least in the practical sense. If we pretend we live in a world of infinite resources and time, you might be right, but considering the constraints of reality, the logical approach seems to be to have and express opinions in the moment according to one's best judgment, since everyone else is going to be doing that too. Just gotta try not to be too haughty about it.

> They can have valid concerns and still be behaving irrationally. You clearly think these are mutually exclusive but they aren't.

I agree someone can have a valid concern and also behave irrationally. I don't agree this is what you started out saying, though.

>Appreciate the multiple snide attacks, though.

No offense intended. The edit deadline has passed, but I didn't think I had put any such things in. My apologies if you felt I was being condescending or passive-aggressive.


I'd like your opinion about this particular subreddit:

https://www.reddit.com/r/CBTS_Stream/

These 20,000-odd people unequivocally lack the type of critical thinking skills GP is referring to. I find it hard to believe that they are all under professional care. These people are straight out of The Da Vinci Code, or National Treasure. They truly believe that they have uncovered a massive conspiracy to overthrow the current American government, and they are organizing to stop it.

Many subreddits choose a sort of mascot that defines their subredditors. For instance, people who subscribe to the tongue-in-cheek /r/evilbuildings are "6509 villains plotting", where they post pictures of buildings that have a nefarious appearance, no conspiracy in the comments. /r/CBTS_Stream has "21,333 Operators". As in mercenaries/militiamen.

These people are rabid Trump supporters, seem to have a strong fundamentalist Christian bent, and appear to be extremely gullible and susceptible to any sort of theory that involves revenge upon the previous administration. They even have their own prophet, "Q". Everything from occult references, to nazis, to big pharma killing off holistic doctors, to arranging Trump's tweets into an 11x11 grid and then playing word search to reveal a secret message. These people swear that Donald Trump's televised rallies are chock full of encoded messages and symbolism, both in what Trump is saying and in the clothes/posters of supporters in the background. These people buy toothpaste from Alex Jones because it doesn't contain fluoride. These people believe that all mainstream American history since the American Civil War is a lie created by the perpetrators of this current hoax they have uncovered. They also believe that Trump has already secretly met with Kim Jong Un, and will soon unveil a world-saving peace treaty that will "make the libs' heads explode".

The truly sad part of this is that a lot of these people are also members of other subreddits dedicated to people who have escaped Mormonism, or Jehovah's Witnesses, or similar groups. So these people have already thrown off the shackles of psychological warfare once. But they believe now that they are "woke", and seem completely beyond talking down.

Good luck explaining to these people that they are being radicalized by Russians, or whoever. Good luck getting any of these people to not believe that any censorship is obvious proof that the sleuths are hot on the case, and that the global elite are silencing them.


> This argument is completely disingenuous. Your average person is capable of critical thinking and logical reasoning -- those who aren't are either wards under the care of another person.

Oh please, you think the average person has sufficient critical thinking skills to read the newspaper and pick out the parts that are "stretching the truth", use specious reasoning or various other logical fallacies, etc? You must roll with a different crew than I.

> Your average Fox News viewer may believe that MSNBC is propaganda secretly dictated by the shadowy figures in the background, and vice versa.

If they had critical thinking skills, wouldn't they be able to get a pretty decent handle on the degree to which they are propagandists?

It sounds to me like what you're saying is, most things within this realm are not knowable, except for the parts that are. The world is complex and confusing, but I don't think it's that confusing.


I understand and agree with some points of your criticism, but I disagree with the part that we can't beat undesirable views out of people. Well, we can't do it completely, but it's not a binary thing, and I believe we really can do a lot to educate people. And not a political education, but teaching them about their own biases. Teaching them to be critical, to not just ignore evidence when it goes against their views, to be fair to others, etc.

I don't know, I don't think we have actively tried yet.


> Like it or not, you can't just beat "undesirable" views out of people, either with schools or social media moderation.

You absolutely can. An example: https://www.theguardian.com/commentisfree/belief/2012/sep/22...


That is indeed an inspirational example.


I don't mean it to be inspirational, only to indicate that persistent propaganda and organized information warfare can indeed drive ideas fully out of the population.


Broken clocks are right twice a day.

Religion sucks, but Soviet Communism's anti-religious nature doesn't excuse its foibles.


We've already got people with the correct worldview in charge of curriculum and personnel. The problem is that there are still dumb-dumbs that sometimes think there are valid alternatives to our worldview. That's why we need better education.


We should incentivize them to have the correct worldview. E.g. if you’re a CEO you should make sure that your workforce holds the correct opinions by reminding them that they can be fired on the grounds of being a bad “cultural fit”.


I can't tell if this post is ironic or not.


"correct worldview".

People like you scare me.


Sorry--it is satire, but the comment represents literally how it comes across to me when I see claims that "better education" will effectively bring about less toxic discussions. The implication is clear: If only people were rational and educated, like me, they wouldn't think the way they do, and then we would all agree.


How about this: first teach advanced critical thinking skills, so people have the skills (if not the will, that's another problem) necessary to see through propaganda from both sides.

I have a feeling a lot of people would have issues with this approach though.


How do you teach critical thinking?


By giving people things to think critically about, and ensuring that they respond in an appropriately thoughtful manner.

Of course this doesn't work when politics is taboo.


I think emotional maturity is more important than critical thinking. People in our culture have this life-or-death anxiety over being right, especially in social groups. You see it all the time on social media. Person 1 makes a throwaway Facebook post which contains some kind of factual error. Person 2 points this out. Person 1 feels personally attacked and becomes emotionally invested in "winning." The more pushback person 1 gets, the more they stand their ground and will scorch the earth to save face. Where is all this intellectual insecurity coming from?


It comes from the fact that when you say anything incorrect online, there's an infinite number of people who will call you out on it. Your intellect is always on trial. You have to convince a jury of the entire planet that your opinion is valid.

Take the same comment or opinion and air it among three friends in person (or a very tight social network). You only need to convince two or three people who likely trust and respect you already, and who are not inclined to want to spend an infinite number of hours debating such trivia across all time zones.


Why not just engage in conversations on the principle of charity and good faith? There's also the concept of steel-manning other people's arguments to help extend good faith.

Not every conversation has to become a burned bridges and salt the earth affair. If the other person is just trying to "win" then disengage from the argument. If the other person is arguing with you in good faith then maybe you're wrong or have something to learn from a new perspective.


But... an infinite number of people aren't reading every page on the web, all the time. Even on Reddit, you're only really interacting with the limited subset of users who choose to comment, out of the limited subset who read a thread - which is still possibly bigger than a circle of friends, but smaller than any significant fraction of the human population.

There is the perception that "the entire world" is watching you on the web, criticizing your every move, but that's not a fact.


I guess I'm just not sure what the curriculum would look like. Is there something you could point to as an example of a course doing this well?


I would start with someone who is skilled in both critical thinking and education, or am I misunderstanding the question?

Logical fallacies would be one place to start; you can see examples of them all day long on Reddit, for example.


Well, my understanding is that "critical thinking" is already very commonly considered to be part of various course curricula. If it's not being taught, then we'd need to do something differently.

I've been hearing claims of the need to teach "critical thinking" since I was in high school. To me it always came across as one of those things that can't easily be taught, particularly in a traditional academic setting. Everyone agrees it should be taught, but if there were a clear way of doing it, we would.


There's plenty of material out there that isn't remotely touched upon in a traditional education.

https://en.wikipedia.org/wiki/Classical_logic

https://plato.stanford.edu/entries/logic-classical/

https://distancelearning.ubc.ca/courses-and-programs/distanc...

> If it's not being taught, then we'd need to do something differently.

When reading the news, forums, or overhearing conversations, do you not regularly encounter people who obviously have no significant skills in critical thinking?


> There's plenty of material out there that isn't remotely touched upon in a traditional education.

Right. I took a logic class for my undergraduate degree. It's actually the source of the "modus" in my username. I guess to me that's a far cry from what people refer to as "critical thinking." Being able to identify textbook logical fallacies isn't the same thing as rationally and objectively forming a judgment about something.

It's certainly a helpful part, but I doubt most would remember it any better than geometry or 1800s history.

> When reading the news, forums, or overhearing conversations, do you not regularly encounter people who obviously have no significant skills in critical thinking?

I do, but it's rarely a clear-cut example of misunderstanding a logical fallacy. More often than not, it's the blind acceptance of supporting evidence while rejecting opposing evidence. Or assigning way too much value to a poorly-sourced news story. Or approaching the issue with a different worldview / values. Or any number of other biases that affect decision-making.

To be clear, though: I agree it's clearly not being taught. I'm just not convinced you can take a bunch of high schoolers, put them in a room, and after X weeks of doing something, they'll be critical thinkers. I agree you could probably teach them logical fallacies well enough to pass a test on them, but that's not the same thing.


Do you think we've reached the absolute apex of having a well-informed citizenry?

If not, if critical thinking doesn't work, what could we do to improve this situation?


> Do you think we've reached the absolute apex of having a well-informed citizenry?

Of course not.

> If not, if critical thinking doesn't work, what could we do to improve this situation?

I'm not sure "well-informed" and "critical thinking" are even relevant to each other, but putting that aside, I genuinely don't know. That's why I asked how you teach critical thinking.

It's possible people are bound to retreat to their biases and it's a futile effort. I'm just not convinced attempting to teach people "critical thinking" will work, because it hasn't.


> because it hasn't.

Implying it's been tried, and failed.

Where has widespread teaching of critical thinking been tried?


I've seen it in numerous course syllabi and mandates. I'm not sure how to cite that, though. Here are a few examples where it is assumed the existing education system / teachers claim to be teaching critical thinking.

> Public school teachers and administrators will tell you that one of the mandates of public education is to develop critical thinking skills in students. They believe that curricula are designed, at least in part, with this goal in mind. [1]

> Common Core, the federal curriculum guidelines adopted by the vast majority of states, describes itself as “developing the critical-thinking, problem-solving, and analytical skills students will need to be successful.” [2]

> Many teachers say they strive to teach their students to be critical thinkers. They even pride themselves on it; after all, who wants children to just take in knowledge passively? [3]

Are you willing to acknowledge educators / curricula commonly claim to teach critical thinking? To me it's always come across as something claimed to be taught pretty much everywhere. Yet we both seem to agree it's not working.

We could try teaching critical thinking differently and potentially meet some success, but that doesn't change how it's been claimed to have been taught for some time with poor results.

[1] http://argumentninja.com/public-schools-were-never-designed-...

[2] http://www.newsweek.com/youre-100-percent-wrong-about-critic...

[3] http://theconversation.com/lets-stop-trying-to-teach-student...


> Here are a few examples where it is assumed the existing education system / teachers claim to be teaching critical thinking.

> Are you willing to acknowledge educators / curricula commonly claim to teach critical thinking?

I'm not in denial of some sort ffs, I'm frustrated at watching our society coming apart at the seams because the vast majority of the population seems to be incapable of intelligently reading a newspaper article, and will fall for seemingly any trick in the book.

Of the examples of "critical thinking education" listed above, do any remotely approach the critical thinking specific education I'm talking about here?: https://news.ycombinator.com/item?id=16572861

People are absolutely inundated with propaganda nowadays, like at no other time in history, with social media being the most powerful weapon by far. We are graduating our children and sending them intellectually defenseless into this new world. I don't know if the average human mind can be brought to a level sufficient to cope with the propaganda created by world-class experts in persuasion working for a variety of deep-pocketed entities, but at least we could try.


> Of the examples of "critical thinking education" listed above, do any remotely approach the critical thinking specific education I'm talking about here?

Well no, but my claim wasn't that your suggestion has been tried. It's that other people have been claiming they've been teaching critical thinking for some time, and it's not working.

I agree it's a problem--I just don't think a class in logic will do it. I'm not sure it's teachable at all, and even if it is, I'm not sure those same skills won't be ignored the moment the argument questions one's identity or becomes emotional.

Is it worth trying? It's easy for me to say "sure," but it's not on me to implement, and I'm certainly not sure how to assess whether it'd be successful.


Judging solely on the number of HN commentators who are absolutely incapable of detecting irony or satire, and indeed who may feel those are entirely out of place on HN, the average citizen isn't capable of considering two mutually contradictory propositions at the same time, let alone becoming "well-informed". The various exhortations in this thread to "just teach them!" bespeak a similar innocence. We have a rather large number of trained professionals engaged in the teaching already, so such pleas should at the very least be accompanied by considerations of why those efforts have not yet sufficed.


Dialectics are hard, man.

But to be fair, the longer we get into the current era of politics, the harder it is to distinguish between earnestness and satire. Young people who watch the movie Network today don't see Howard Beale as satirical, because there are too many people like him today who are deadly serious.


Critical thinking, specifically, is taught on a widespread basis?

What country are you writing from, and could you give some specific examples?


Most of our high schools despair of teaching mathematics to the level of algebra, to most of their students. Many haven't yet despaired of conveying literacy to those same students, but the outcome is by no means certain. I would consider both of those prerequisites to "critical thinking", no matter what particular idiosyncratic definition of that phrase you might prefer. Therefore I suggest that we aim lower, for a sort of animal suspicion that comes naturally to all humans. The result, from the perspective of political harmony, will be the same: hundreds of millions of critical thinkers would not magically all arrive at the same conclusions on any set of topics. In a perfect world of critical education, not only would you still disagree with most people's conclusions, but you would also still disagree with how they arrived at those conclusions.


Philosophy


You’d think that, but Wittgenstein minted his career on calling philosophers out for not thinking critically enough in debates.


Even if you hadn't misinterpreted the GP, this crosses into personal attack, which is not allowed here. Please read https://news.ycombinator.com/newsguidelines.html and stick to those rules when commenting on HN.


I think moduspol was being sarcastic.


> .../Reddit/... promote a regression towards the mean understanding... viral content reflects and amplifies the "average user's" sentiment and values

Comparing the experience in a niche subreddit vs. a default subreddit, it's clear that the real problem is allowing casuals in. If people have to go out of their way to participate, you wind up with only the ones who care to do so. And they have, in their reputation, something they don't want to lose.

But if people are allowed to participate by default, you get the enormous masses of people who, collectively, by virtue of their shifting roster, are immune to moderation. And you get people who set out that morning to share as many opinions as possible, instead of the people who set out to participate in that community exclusively.


I’ve witnessed this in well-meaning subreddits. Lots of arcane rules for posting. No-warning bans and removals mean I don’t try again.

On the subreddits I manage, I allow broad participation with no barriers. But they aren’t popular enough to have enough trolling to break me down and add hoops.


The clear separation between information and entertainment is BRILLIANTLY said. It seems straightforward but it's really important. Why is Twitter, a cesspool of memes, rap, sports, and Kardashians mostly used by kids under 25, the primary means of breaking stories? Why is so much news on there? Anytime profitability, entertainment, short attention spans, and news get entwined, the resulting outcome is clear.

Facebook, Reddit, Twitter, and Snapchat all started as entertainment and switched to clickbait reaction/outrage-culture news, and normal news has turned into a mess as well.

I say regulation wiping the trending news feed off Facebook, Twitter, and Snapchat is a start. Perhaps any group that meets set-in-stone criteria, regardless of political affiliation, gets some federal funding to make up for the cost of real journalism. That journalism must be fact-checked, and we must hold them accountable.

People are desensitized to everything now. When shocking news breaks every 15 minutes and the world is so connected, we become numb to so much, and that is incredibly dangerous.


I wish this were true, but the amount of fake news spread by trusted news networks, such as newspapers amplifying Russian propaganda bots, implies that education isn't sufficient. If a professional cannot discriminate between truth and lies on Twitter, how can an average person?


The economic incentives are misaligned. Fake news helps with page views and other KPIs. If there were an actual enforceable cost associated with misrepresenting the truth, we'd see a slowdown in fake news.

The restoration of the Fairness Doctrine would also help stymie some of the biggest promulgators like Fox News *

* you can search for "fox news viewers misinformed" and encounter studies and results like http://publicmind.fdu.edu/2011/knowless/


The analogy of food nicely shows that some sort of race-to-the-bottom does not necessarily occur: there’s still organic or other high quality fresh food even though McDonald’s has been around for a while.

Similarly, there are still excellent news sources. The Economist is often cited in these discussions, and the New York Times is also vigilant in their reporting and the correction of errors when they occur[0].

What we’ve seen is a breakdown in trust of institutions, largely disconnected from actual mistakes on their part. People will quickly demand proof and invoke conspiracy theories when, for example, the three-letter agencies accuse Russia of interfering in elections. They have learned to invoke the “appeal to authority fallacy” too well, without offering an alternative. Because you cannot evaluate a news story without in some way deferring to the reputation of the publisher.


The breakdown in trust of news institutions has many sources, so correcting factual faults is just addressing one part. Omission and selective use of facts, misleading context, and misleading language seem to carry a higher penalty for trust in today's environment, where it is very easy to provide the original source whenever a slightly biased news article is published. A factual error is binary, true or false, while omission and selective use of facts give room for much more outrage and distrust of otherwise well-established news institutions.

The Economist and the New York Times may have good practices in regard to errors, but there is a clear difference between their reporting and independent fact-checking sites. To make matters worse, even those examples of "excellent" newspapers tend to have a clear and open political alignment. With increased political polarization, this results in a rather natural distrust of news institutions, even those that are vigilant in correcting errors after they occur.


I disagree with the fast food analogy because that has obvious and direct personal costs, while infotainment negatives are subtle and externalized.

Re: trust and appeal to authority; your example made me realize people are drawn to grand conspiracies because unverifiable theories are infallible... Luring in people unfamiliar with probabilistic reasoning and consilience.


Organic food is not higher quality. Organic just means they cannot use some arbitrary list of farming practices (some good, some bad).

Sure, McDonald's is not good, but there is also plenty of organic food that is equally bad (or worse).


> Organic food is not higher quality.

That depends. Sometimes European organic veg is preferable to Chinese industrially farmed veg when your local supermarket offers only those two choices. This is definitely true of garlic: Chinese garlic tends to be notoriously bitter and lack juice, but Spanish organic garlic is very sweet, pungent, and juicy. Now, the fact that the European organic choice was made according to the limitations of organic farming may well be irrelevant to its goodness, but there is a strong enough correlation with quality to guide consumers, and it was likely chosen by your supermarket as an alternative to the Chinese imported product precisely because they wanted to cover the organic segment.


That is not a property of organic, though. Non-organic farmers are able to produce at least as high a quality as organic (nothing an organic farmer does is prohibited for the non-organic farmer, while there are a number of things the conventional farmer can do to increase quality that are prohibited to organic farmers). Of course, just because they can doesn't mean they do.


At the same time, I feel like we are seeing a breakdown of trust in institutions because of actual mistakes that, in years past, would have gone unnoticed.

Although this distrust does have negative impacts to our society, I view this distrust as an overall good thing.


>We really just need better public education, as well as clearer separation between information and entertainment.

nope, this won't work.

People will always need a place to voice their vile comments in a cowardly manner


[flagged]


Taxes don't pay for private education, let the free market decide how to run those schools.


Reddit is not actually strong moderation.

Forum communities with actual strong moderation, eg. Something Awful, ResetEra have near zero issues with hate speech, Nazis and other things that reddit has let fester.


I'm a regular on Something Awful.

One of the major factors that keeps it relatively clean is that user registration costs $10. That's a strong financial disincentive against trolling, bots, etc.

I really believe that successful online communities of the future will have paid signups.


Also, SA's userbase consists largely of older, tech-savvy people. It's been around for nearly 20 years now and I bet their registration peak was ~2004 (:files:). So it's pretty likely that the median age of a poster there is ~35-40.

I'd totally pay $10 for a less shitty reddit clone.


The Something Awful forums provided my first real exposure to 'internet culture'. I find myself reading their forums more and more often lately because discussions there seem less likely to devolve into an echo chamber. An account there is well worth the registration cost imo.


I think part of it is the lack of voting and the presence of easily-identifiable avatars: participants have an incentive to post things that generate maximum engagement and discussion with other specific users as opposed to maximum instantaneous agreement. In this regard it mirrors real-world social interaction much more closely than reddit/fb/twitter.


The other factor is not being shy about banning people. Heavy moderation.


I believe you are correct. If I pay for something that suggests I am the customer and not the product.


Haha if you pay for SA it only suggests that you are a sucker who paid 10 bucks to post on SA for a couple of days before getting banned. They are very ban happy there.


I have had an SA account for years and never been banned. I don't appreciate being called a sucker and I think the price is fair for what I get.

When I use Facebook I am the product, not the customer. This means the platform is optimized to put my eyeballs on advertisements or provide data about me to marketers. It is not optimized to provide high quality conversation.


Good for you but it's hard to take you remotely seriously when you try to relate SA and high quality conversation.


Are you trying to be ironic on purpose?


SA is basically 4chan except you have to pay 10 bucks to join. I don't really see the joke here.


The "joke" is that you complain about the lack of high quality conversation on SA, and yet your posting style here is extremely shitty.

No wonder you got banned so quickly there...


What, am I supposed to list my reasons with citations (of course) to justify my opinion about some historical relic of a site? And I didn't get banned from SA; I just know a lot of people who did.


If you got banned after a couple of days that means you broke the rules in a big way.


Well, that's because reddit is simply a platform, right? reddit undeniably has tools to allow for strong moderation. see /r/AskHistorians for a perfect example of this.


Strong moderation would mean that moderation is not optional, always present and always consistent.

Having a toxic community that internally declines to moderate is not strong moderation.


"Tools to allow for" is a hell of a phrase. Yes, moderation is possible on reddit but the idea that the tooling for such is in any way "good" is in error. It's enough to make moderation possible with sufficient application of effort. The fact that it's so rare is a strong indication of how useful those tools are.


Reddit tries to be just a platform, but it fails in two ways:

1) Namespace of subreddits. The subreddit which snags the most obvious name for a topic has a much better chance of becoming canonical for that topic than competitors with worse names.

2) Cross-subreddit identity and supporting tooling. For example, I can easily search for all recent posts made by a particular user, but I can't easily search for all posts made by a particular user within one specific subreddit. This sort of thing promotes "cultural leakage" across subreddits and makes people think of all of Reddit as one community with one culture.

Related: see https://news.ycombinator.com/item?id=16573842 which summarizes a study that finds cultural leakage across subreddits.


We haven't really tried "real identities". We've come close.

Real identities would require verification, which sites like FB only do after the fact.

I fear that a huge repercussion of the election issues is that we will get there. A real ID may be required for you to post comments on all websites. And I'm not sure how I feel about that. Reality is that it would drastically reduce comment trolling if your real identity were viewable and searchable by all you know. But at what cost?

I really do wonder what the root of trolling is. What makes a "normal" person take an alter-ego/view/counter-view, solely based on lack of face-to-face interaction...


Most of the problems of the Internet are mimicked in real life. That's why I don't think "real identities" would solve much, to be honest. Before trolling, for instance, there was the art of the prank (some harmless, and some downright mean -- just like trolls!), and the term "rabble rouser" seems to date back at least a couple hundred years. Some people in real life interact with others in rather toxic manners, in one form or another.

I mean, you get toxic behavior even on something like Nextdoor, where you pretty much know it's the neighbors across the street. Technology has just made things more convenient -- social media removes any curation, and technology also has made some means of harassment much easier to execute.

Myself, personally, I avoid social media that encourages toxic behavior (which usually means, smaller, special interest type sites; social circles that you know; etc.). This involves some degree of moderation or self-selection.

I don't see a good way around limited moderation for Reddit either, which is unfortunate in that it is hard to moderate something that size well (it's usually inconsistent and often arbitrary-ish).


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know.

That is not at all clear, especially as a percentage of comments. There are plenty of sociopaths who are perfectly willing to troll under their real name. Meanwhile, more reasonable people may quite rationally be worried that expressing any opinion on a controversial issue will lead to online mobs trying to get them fired from their jobs, kicked out of school, or otherwise ostracized. Not to mention scenarios like being a gay teenager in a very socially conservative environment.


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know.

I don't think it would. Lots of people post horribly objectionable material under their own names on a regular basis, depending on their level of financial security, peer group, and social milieu.

> What makes a "normal" person take an alter-ego/view/counter view, solely based on lack of face to face interaction...

Some people are horrible, and are just as unpleasant in real life as they are online.


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know

Is this true? I've seen various studies saying that the opposite is actually true, but I can't currently find any of those studies. Does anyone have any sources?

EDIT: some sources, though I don't know the strength of their validity:

  - https://techcrunch.com/2012/07/29/surprisingly-good-evidence-that-real-name-policies-fail-to-improve-comments/

  - http://www.slate.com/blogs/future_tense/2014/07/17/google_plus_finally_ditches_its_ineffective_dangerous_real_name_policy.html


> A real ID may be required for you to post comments on all websites. [I]t would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know. But at what cost?

The cost is that it would prevent people from anonymously reporting abuses, which means that fear of retaliation will have a chilling effect. We've already seen this where people get death threats, houses burned down, etc, when they do things like report sexual assault.


People should be allowed to be anonymous but only explicitly so. The problem now is we have people lying about who they are and using multiple accounts and other forum moderation abuse to push viewpoints, often paid to do so.


>at what cost?

I can see at the cost of making stolen identities worth even more.

As it is, websites completely suck at keeping our 'anonymous' identities secure. Our emails and passwords are hacked so commonly that there are websites dedicated to tracking it. Now you're just adding 'real identities' to the brokered data. Any real trolls will be able to use this data from the dark net while pretending to be you. Even worse, since real names are required, employers will look for you on the internet; they will see your "I'm an anti-gay right wing pro-russian" profiles online and say "you are not a cultural fit for our company". You will have to take the time and effort to clean up what is said about you. Since it's a real identity, it's not going to change, and they already have all the information they need on you.

Good luck in that terrible future.


A lot of this is not "trolling" but genuine hate speech. People will post hate speech under their own bylines on news sites, all day long.


There are two kinds of people who use real names on FB: those who never post offensive content anyway (the vast majority), and those who simply don’t care if offensive content is associated with their real identity (a small minority).

Those who post offensive content and do care, find it completely trivial to get a fake name. FB’s enforcement of their policy is basically nonexistent. This is the second-biggest group of users.


> A real ID may be required for you to post comments in all websites.

I honestly don't think this will ever be the case. Because there is a lot of profit from an unmoderated comment store. Also because it'd be monumentally hard to actually make all sites compliant.


> So we've tried

4chan also exists and it works and is marvelously non-toxic once you realize that any insults hurled at you are impersonal because they can only attack what you have immediately posted previously. Your attack surface is tiny, assuming basic information hygiene.


One of my biggest issues with reddit is post history. Oops, this person posted something we don't like 3 weeks ago, dismiss his post and attack him. Oops, this person posted somewhere we don't like, ban them from the 46 subreddits I moderate. Oops, this person posted today for the first time in a default sub they have been subscribed to for 3 years, ban him for brigading.


I imagine there are a few trade-offs when flicking the "post-history" switch either way.

Post history On: You get to follow a user's comment history. If you read an insightful comment by them and want to read more, then having a history is nice. Users are people

Post history partially On: e.g. comments could decay to anon after some period. (Cue the sites that collect all post data and match to users.) Slightly increases the cost of doing a deep dive on a user's history. Users are people, fading to ideas

Post history Off: Lowered attack surface for people who are actively trying to find an argument with you. Less pressure to have a persona consistent with the typical one of any particular community. Users are ideas
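The "partially On" middle option is easy to sketch. Here's a minimal version in Python; the 90-day window and the function name are just hypothetical choices, not anything reddit actually implements:

```python
from datetime import datetime, timedelta

# Hypothetical decay window: comments older than this display as anonymous.
DECAY_WINDOW = timedelta(days=90)

def display_author(author: str, posted_at: datetime, now: datetime) -> str:
    """Fade attribution to 'anonymous' once a comment is old enough."""
    if now - posted_at > DECAY_WINDOW:
        return "anonymous"
    return author
```

Note the stored data is unchanged; only the display decays, so a user could still "claim" an old comment later if the site chose to allow it.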


There are more options.

For example you can make multiple identities easier. I make use of that here via firefox containers, it is quite nice.

You can make anonymous the default and track the user under the hood, allowing them to claim a post at a later point if they feel confident enough to attach their name to it.

You could limit who can see the user identity. Or only display something more vague than a specific identity, a profile-lite behind a one-time ID.


The only real consistency I've seen between toxic communities and healthy ones is size. When you try to call 100k people a "community", it's a bad community where the toxic people have a louder voice than the good ones. If the community is a core group of <1000 participants and maybe 10000 spectators (following the 90-9-1 rule of online communities), it can be good. When it grows larger than that, it needs to be split up or shut down.

The only way I see to save reddit is to set a maximum size for a subreddit, and shut down or otherwise isolate every subreddit that grows bigger than the maximum threshold.
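A rough sketch of what such a cap could look like, using the 90-9-1 numbers above (the exact thresholds and the function name are my assumption, purely for illustration):

```python
# Hypothetical caps following the 90-9-1 heuristic: ~1,000 core
# participants, plus ~10,000 spectators on top of that.
MAX_PARTICIPANTS = 1_000
MAX_MEMBERS = 11_000

def needs_split(participants: int, members: int) -> bool:
    """Flag a community that has outgrown the size where norms still hold."""
    return participants > MAX_PARTICIPANTS or members > MAX_MEMBERS
```

Checking active participants separately from total members matters: a subreddit can have a huge subscriber count but still behave like a small community if only a few hundred people actually post.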


I believe the somethingawful forums involve a cost to participate, something like a one time $10 fee that was designed to remove low content or negative content participators. Might be an interesting case study of your hypothesis.


Unfortunately, while this works, it does not align with the goals of the companies running these forums: to get as big as possible, to get the biggest valuation and/or slice of advertising dollars they can. Even small monetary barriers to entry decrease participation substantially, and that's just not acceptable.

This is why I'm quite pessimistic about the current situation: the current in-vogue business model of surveillance/advertising capitalism demands massive size beyond what can be moderated, and thus makes this problem inevitable. And it only gets worse when the most toxic users are the most profitable, viz. Twitter's refusal to ban Donald, even though by any reasonable interpretation of their TOS, he breaks it every other day.


Worked, but not 100%. Many people were still willing to pay $10 over and over and over again, for whatever reason, to reregister and keep posting (badly) just to get banned again.


The number of people committed enough to being trolls that they will keep paying you is small enough that they don't dominate the discussion.


At least you get money out of the trolls, though. Just donate some of it to an anti-bullying campaign.


Metafilter does this as well. A one-time payment to post along with heavy moderation will remove/keep away most of the toxicity.


Ten bucks as ante for the entertainment?

Cheap, if you ask me.


See other comments about metafilter.


I think your suggestion would lead to an echo chamber. Take a look at metafilter and how samey they've gotten. One thing to remember is that even small barriers implicitly give mods much more power.


It just occurred to me that forum moderation is essentially politics, and the failure modes of web forums follow the failure modes of the political systems their moderation emulates.

The most common form is anarchic moderation (i.e., no moderation). When a forum's small, unwritten social rules keep things under control. However, as the forum grows, that breaks down and things become more chaotic.

Metafilter essentially has an authoritarian moderation culture: the rules of discussion are both made and enforced (selectively or not) by one group upon another. There's a wall to keep outsiders out (the paywall). It avoids chaos, but its failure mode is ossification, devolution into an echo chamber, and eventually desertion, as public forum behavior comes to more-or-less rigidly reflect the opinions and preferences of the moderators.

Reddit's somewhere in the middle of the above two forms. There are anarchic hordes in the less moderated reaches, and little authoritarian kingdoms without the walls to keep the hordes out.

I don't think anyone's tried real democracy in a forum (with elections, politics, checks and balances, and the time investment that all entails). It'd be interesting to see how such a forum would fare, and if it could avoid chaos without becoming an echo chamber. Democracy isn't the public-opinion-style voting we see in forums today, but instead actual accountability of the moderators to the users.

Not claiming this is a novel insight, but it's new to me.


LambdaMOO tried switching to democracy around 1993, with "LambdaMOO Takes A New Direction". Basically, the mods ("wizards") instituted a petition system for technical changes and ceded all social decisions to a separate arbitration board. The resulting three and a half years of chaos is summarized in the last post on http://meatballwiki.org/wiki/LambdaMOO

In the end, the wizards published "LambdaMOO Takes Another Direction" and took back control, concluding:

> Over the course of the past three and a half years, it has become obvious that this [refraining from making social decisions] was an impossible ideal: The line between 'technical' and 'social' is not a clear one, and never can be.

A great deal has been written about this experiment (and a Web search will find much analysis, along with full text of LTAND and LTAND2), and there are a wide variety of perspectives on why LTAND failed, but one conclusion that nearly everybody seems to reach is that attempting to give a community democracy with no higher guidance is almost guaranteed to be a recipe for disaster.

Reflecting on all of this, I have no idea how the US founders managed to get something that worked at all, much less as well as it does. (And notice that it took them several tries to get it right.)


Freedom fighters rebelling against authoritarian regimes also start out with the goals of empowering the populace ("power to the people!"). They cast off their shackles, stage a coup and seize control...only to realize after some amount of chaos that people are incapable of governing themselves, and the former champion of freedom becomes the new dictator.


I completely agree. My premise is that you can only remove two of the four: echo chamber, trolling, toxicity, community. If you accept this premise, the only logical conclusion is to remove trolling and toxicity. Without community the point is moot, after all. So really you're removing two out of echo chamber, trolling, and toxicity.


Hmm, I think I would choose to remove toxicity and echo chamber. I don't mind a certain degree of bad-faith behavior if removing it comes at the cost of having discussions with people different from me.

I think maybe a good solution would be to (1) pay mods and (2) make everything they do transparent. This would give you better mods to start out with, but also give users the power to notice mod overreach before it spirals out of control.


What's the difference between trolling and toxicity? The words have seemed interchangeable in most contexts where I've heard them used.


Within a narrow community, a little bit of "echo chamber" is worth it for the quality of commenting online. I'd rather get downvoted a little and have to debate my minority opinion than have it drowned out with spam and bots here.


Well, your comment presupposes you can only trade a little echo chamber against a lot of spam. Sure, but my comment is more about moving beyond pure spam to stuff like trolling; call it low-level bad-faith behavior. I'm okay with some of that in exchange for a community which is a bit more diverse.


An echo chamber is the only real option. Computers are unaware of concepts of good or evil, so the best a moderation algorithm can do is enforce a certain viewpoint. The question is, what viewpoint? The viewpoint of the consensus of users, or the viewpoint of the community leadership?


I don't think moderation should be done by algorithm, as soon as you give the task back to humans you're much more capable of shades of gray and thoughtful, real moderation. Humans have been moderating public spaces for thousands of years, we're more than up to the task if a little bit of care is put into the implementation.


But humans are VERY very slow at this. Even our world-class moderation systems (the legal system in many countries) are excruciatingly slow, often taking months or years for a single decision.


MetaFilter is more of a non-forum than an echo chamber. There's almost 0 discussion; people go there for the links, not the conversation.


Have you ever... looked at the comments on MeFi? Some posts get lengthy, complex discussions, on subjects related to the link at hand; some do not. There is also Ask MeFi, where you can ask questions and get answers from other users (ads shown against this section to visitors without an account used to be a major portion of MeFi’s revenue until Google did some stuff that lowered MeFi’s search ranking). And there’s MetaTalk, which is for talking about the site and has its fair share of “hey let’s hang out and talk” posts.

I mean, yeah, it’s structured mostly around links, and you can certainly use it as a source of Interesting Links. But there’s conversation and community there if you look around a little.


I was a member there for years. There's a community, which is highly normative, but they also resist implementing threading or commenting by reference precisely to keep the focus on submissions.


I'm still a member! For the record, we resist implementing threading or commenting by reference because it makes for an unreadable discursive shitshow when trying to follow busy, active discussions.

Staying relatively on topic is an unrelated aspiration and one that we mostly let flex a lot depending on the specific thread and context.


This is remarkable for the terse confidence with which it misapprehends the actual structure and content of the site.


There is a fundamental approach that has not been tried as yet

- Incentives

The real-world has that figured out long ago. If you find something that is truly useful & timely, you would be willing to pay real money for it.

Google Answers (answers.google.com) had tried an approach wherein a price can be put on a question, and any legit reply which answers that question can claim it. 'Reputation' definitely still plays a role in this, but the system is flexible enough to allow a newcomer to attempt answering a question and stake a claim to the funds.

The real world has many of these aspects sorted out, like calling a plumber or a carpenter from your neighborhood to get your work done. The problem is we have embarked on creating a 'global' network (aka FB) without first having adequately understood how to create strong family & community networks.


Stack Overflow utilizes this really well. You get more privileges the more trusted you are.
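That trust model boils down to a table of reputation thresholds. A minimal sketch, where the specific numbers and privilege names are loosely modeled on Stack Overflow's scheme but should be treated as illustrative assumptions:

```python
# Hypothetical reputation thresholds, loosely modeled on the
# Stack Overflow idea of unlocking privileges as trust grows.
PRIVILEGES = [
    (15, "upvote"),
    (50, "comment everywhere"),
    (125, "downvote"),
    (2000, "edit freely"),
]

def privileges_for(reputation: int) -> list[str]:
    """List the privileges a user with this reputation has unlocked."""
    return [name for threshold, name in PRIVILEGES if reputation >= threshold]
```

The appeal of this design is that moderation power is earned gradually and automatically, rather than granted all at once by an admin.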


Skeptics StackExchange is a good example. Lots of people complain that too many comments are deleted, but it's an absolute lion's den of controversial issues. The mods do a very tough job and handle hate speech better than anywhere else I've seen.


Reputation will mean different things to different people.


I think you're underestimating the cost to society of your parenthetical. To the degree that "the personal is political", sharing the details of one's circumstances — especially as a marginalized or disenfranchised individual — can reveal ostensibly unique struggles to be widespread societal problems. Twitter does this, and has been good for highlighting shared experiences. We'll lose a platform for that very important, seemingly trivial disclosure if we improperly disincentivize contributions. We need to keep "poor participants" in the common conversation.

Secondly, I notice no mention of deliberate, paid propagandizing, i.e. professionally divisive sock-puppets employed by sock-puppet firms.

Any serious discussion of threats to a healthy public discourse must address deliberate attempts to undermine the legitimacy of the common voice.


HN generally works very well. The echo chamber problem is due to allowing downvotes IMO. In my experience that simply leads to minority viewpoints being downvoted. Instead, downvotes should be removed, and people should be allowed to flag abusive comments.


I think Hacker News's "echo-chamber-ness" is extremely exaggerated. It only feels like that if you're in the minority viewpoint in a thread (which happens to me, too). However, it's not echo-chambery if dissident viewpoints live side by side with dominant viewpoints, even if the latter are 80% of the thread replies and upvotes.

An echo chamber arguably arises more when we actively suppress dissident viewpoints. Reddit is infamous for moderators doing that simply by deleting comments under some pretext of 'spirit of the subreddit' or such. With Facebook there's a first-and-last-name-and-picture-visible shaming that can be scary and damaging, repelling the opposite viewpoints. At a further extreme, you can help foster an echo chamber by organizing a large group of people to scream and picket and threaten a speaker who has the wrong views, reminding all the others of what happens.

HN to me is an oasis. Even if I get downvoted when I have a minority view, I still feel as if intelligent arguments are considered.

With Reddit, how many intelligent comments are there? The English grammar alone is awful, full of shortcuts, cliches and millennial-speak. But worse: the responses are short. One-liners. And worse still: argumentation is ad hominem and emotive.

In summary, I think Reddit is about emotional expression, and HN is about (an attempt of) rigor and rationality.


I've seen a lot of very interesting, usually quite short comments on politics-related threads in the past few weeks, that were posted less than ten minutes prior and already grayed out and marked "[dead]". In each instance, the user was not using inflammatory language at all, yet HN was implicitly saying "yeah we're not going to allow discussion on this topic." I'm sure there's Very Good Reasons(TM) for this but it always feels like wasted opportunity for interesting, out-of-the-box discussion.


> In each instance, the user was not using inflammatory language at all,

Downvotes are not only for inflammatory language; a comment can be a negative contribution to the signal-to-noise ratio, and even violate the commenting guidelines, without using inflammatory language.


There might be a lot of reasons for that and we'd need to see specific links to say why, or make a good guess.


Ah. HN is special: they punish political discussions. It's unfortunate, even tragic in my opinion. They allow it sometimes if there's specifically a tech or science-related topic very very closely attached.

I avoid poking the moderator lions (I used to post political articles maybe a year or more ago), but I do wish HN would have another view of that particular topic. It's rather unavoidable that adults (and we are adults), highly-educated ones at that, would sometimes slip into politics when science or tech news (or legal news about tech or science) is discussed.

But yes, you're generally right about that.

I think emotive political discussion is useless, but rational policy discussions aren't useless.


On top of downvotes, you can say very toxic / abusive / condescending things and get away with it if you share the "correct" viewpoint, but unpopular views have to be exceptionally polite to avoid biased moderation. You can't bluntly refute or critique a questionable (but popular) argument without being accused of lacking civility...


> if you share the "correct" viewpoint, but unpopular views have to be exceptionally polite to avoid biased moderation

Where HN has been falling short (lately, in my observation) is where discussions about the ethics of certain business models get lost via the "buried" option or killed off completely.

You cannot come to HN to discuss the potentially-negative ecological or economical impact of a YC company. The voting rings will literally send your comment or post to the void: buried or killed off completely. HN does still post lots of interesting links, but for truly interesting discussion that isn't (for lack of a better word) tainted by bias, I prefer Reddit these days.


Other areas where I see this happening on HN:

- discussing the risks of psychoactive drugs.

- pointing out flaws in overhyped press releases about the next wonder drug/treatment

I guess you're right that you can avoid getting downvoted by being exceptionally polite and spending about 15 minutes crafting a response saying "crap science, uncontrolled trial, possible placebo effect", but sometimes I just don't have the time and energy for that. I'd prefer it if people here didn't automatically assume I'm full of shit when I point out a flaw in an argument without writing my response absolutely perfectly the first time.


> "You cannot come to HN to discuss the potentially-negative ecological or economical impact of a YC company."

People say negative things on HN about YC companies all the time. We moderate HN less, not more, when YC or a YC-funded startup is at issue. That doesn't mean we don't moderate it at all—that would leave too much of a loophole—but we do moderate it less. This is literally the first principle that we tell everyone who moderates Hacker News. You can find many posts I've written about this over the years via https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme....


Thanks for the reply.

What I meant is that one cannot start a discussion of such things without being willing to lose lots of points and karma: observations that Airbnb might be doing more harm than good to cities with "housing crisis" issues, or that Uber and Lyft are actually harming public transportation rider numbers and putting more automobiles on the roads (creating congestion).

Two issues I've seen brought up here that get downvoted into oblivion. Why risk that? It's far easier for people to jump on the "attack the poster" bandwagon... as they have done to me in this thread.

Granted, I've been reading HN for over 11 years now, and the site is not the same as it used to be. A lot of interesting posters have left. Probably I need to lower my expectations for what to see when I come here.


It's hard to say why specific comments have been downvoted. Often it's because they break the site guidelines in ways the author didn't notice. Sometimes it's simply not fair, and other users need to (and often do) fix that by giving a corrective upvote.

Plenty of comments arguing that Airbnb/Uber might be doing more harm than good routinely get heavily upvoted, so I'd question your overall generalization.


The real issue is the binary choice. I might disagree with a comment but acknowledge it's a valid, well-thought-out argument. On the flip side, I might agree but acknowledge it's a poorly formed argument.

Something might be totally off topic or funny, but if it made me laugh, do I downvote it?

Slashdot's model of tagging posts was a pretty good idea I think and allowed one to filter out the 'funny' or 'offtopic' comments.


> I might disagree with a comment, but acknowledge it's a valid well thought out argument.

So, upvote and respond.

If it's a net positive contribution, you shouldn't be downvoting.

> Slashdot's model of tagging posts was a pretty good idea

It's a good model for a customizable user experience, and a bad model for a community. Those two goals are often opposed.


I disagree. Upvoting is like a High 5, downvoting, especially on Reddit, is used to bury something people don't agree with. Downvotes, IMO, should require some sort of intellectual effort as to why you are actively burying a comment or post and thus require a reply.

Dragon, you commented and downvoted on something that you are doing right now, which is commenting on a comment system. No? At least you had the decency to reply, which most Redditors don't, and which makes Reddit toxic.


I never downvote a comment, on Reddit or here, simply because I disagree. I find such behaviour (subjectively) wrong since it does not encourage discussion in an open, civilized manner (on Reddit, all that happens is that you get 30 comments deep and you just downvote each other's comments to 0 while being increasingly aggressive).

I reserve downvotes for when a comment is needlessly toxic, doesn't contribute to the discussion, or is otherwise not helpful for an open discussion.

I think the best cure for "downvote to disagree" is to firmly hold to the principle that the opposing side of the argument has the best intentions to the extent of their knowledge, and that at the end of the discussion, all participants should have learned something. You should also always be willing to change your mind on what you argue about.

Always.


HN works very poorly, and worse by the year. I've been using HN since 2009, I've been on a wide variety of discussion platforms going back to usenet, HN has had its moment in the sun and that has largely passed.

Voting on HN barely has an effect, and I suspect that the average votes per comment on HN has gone way down year over year. People just don't vote on posts as often as you'd think, not anymore. A related problem is that commentary doesn't go on for very long. In the usenet days you could have a good thread that would last for months and months that would continue to spawn good and interesting commentary, a flash in the pan thread might only last a few days. On HN the window of commentary for a post is rarely more than a day and typically only a matter of hours. It's just people strafing comments into the void and then disengaging. Long comments typically don't get read, and don't get upvoted, don't get commented on, etc, for example.


> ...I suspect that the average votes per comment on HN has gone way down year over year

OK, I'll bite! A cursory look at the data shows a clear increase in average votes on comments from 2007 until 2012, which is the only year with a dip, followed by steady growth until the present all-time high.


Huh. Is this total votes or votes per comment? I'm curious what the median number of total votes on comments that have at least one vote is.


I wonder if (or how) one should take population into account, too. We might figure that a majority of the people read only threads that are on the frontpage, so, if the number of people on HN has doubled, then each thread will get viewed by 2x as many people, and, if the new crowd has the same likelihood of voting on each comment as the old crowd, then you'd expect 2x as many votes, assuming no change in comment quality. Instead you might want "votes per comment, divided by number of users".

Of course, there are lots of "all else being equal" implicit assumptions there. First, if the population doubles but stories move off the frontpage in 0.7x the time, then you'd only get 1.4x as many votes—and this is one of InclinedPlane's points. Second, the newer crowd could be significantly more, or significantly less, active. To control for these two things, the measure you might use instead is "votes per comment per pageview", or "votes per comment per second a user spends on the page". Third, there might be more comments posted—well, duh, it would be weird if the new users never posted any comments.

Fourth—and I think this another thing InclinedPlane wants to focus on—comment quality could have changed. Comment sorting is relevant too, because I'm sure lots of users don't read everything. If we suppose that, due to an increase in population, we get 2x as many comments but they have the same quality distribution, and if we suppose the best comments always go to the top, then the average quality of the top n comments should increase; you can see something like this in extremely popular Reddit threads, where the top several highly upvoted comments are clearly optimized for something (often clever jokes). If we suppose a decent population of users only read the top n comments, and always use the same function that maps "quality of a comment" to "probability of upvoting", then, when the set of comments doubles and (by assumption) the best rise to the top, we'd expect these users to generate more upvotes overall, and hence "average votes per comment viewed" should go up. (It's also possible that people's standards would rise. But I think people's changing standards would lag behind the changes in what they're viewing.) That said, for the comments that aren't in the top n, the fraction of people that view them (and consequently might comment on them) would go down.

The question of how long threads sit on the frontpage is relevant, both for comment exposure and for InclinedPlane's point about conversation longevity. (There are also pages like "new" and the no-longer-linked-at-the-top "best".) I wonder how best to quantify that... perhaps "the frontpage tenure of the thread with the longest tenure of all threads on that day".
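To make the normalization being proposed concrete, here is a tiny sketch of "votes per comment per pageview". All figures are invented for illustration; HN doesn't publish pageview data, so this is hypothetical:

```python
def engagement(total_votes, total_comments, total_pageviews):
    """Votes per comment, normalized by audience size."""
    return total_votes / total_comments / total_pageviews

# Hypothetical year-over-year figures: raw votes-per-comment doubles
# (2 -> 4), but traffic also doubles, so normalized engagement is flat.
year1 = engagement(10_000, 5_000, 1_000_000)
year2 = engagement(40_000, 10_000, 2_000_000)
print(year1 == year2)  # True
```

This illustrates the point above: a rising "average votes per comment" could reflect nothing more than population growth.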


> HN generally works very well

After reading a few discussions over the last few days, I was thinking to myself that HN was better than ever, and very good (with one serious shortcoming). Even the echo chamber is much better than I remember.

EDIT: The shortcoming, IMHO, is the abandonment of politics. HN is the ideal place to solve that problem, with a sophisticated audience open to and interested in experimentation and in problem solving. The goal of propagandists is not to persuade you, but to paralyze you; to shut down real discussion and debate. HN is, unwittingly, capitulating and cooperating with them. HN is another success for them.


> The shortcoming, IMHO, is the abandonment of politics. HN is the ideal place to solve that problem

No, it's really not, as HN demonstrates most of the time it interacts with politics.


> HN is the ideal place to solve that problem

That's an illusion, for reasons I attempted to describe here:

https://news.ycombinator.com/item?id=16443431


Thanks for responding.

> That's an illusion, for reasons I attempted to describe here

That implies that it's an unsolvable problem, if I understand correctly. There's no reason to think this problem is any more difficult than all the other 'unsolvable' ones, and this one is particularly, I would even say extremely, valuable to work on.

I don't believe we could simply introduce political topics and have it work due to some HN magic. It would take serious work and experimentation to find a solution, but I think HN is better suited than other places to do that work. And a solution could change discourse in the country and the world, at a time when discourse on the Internet platforms SV has invented has become a very dangerous weapon for some, and is tearing society apart.

I realize that "we" means you and sctb more than anyone, and so it's a request and encouragement. I still think it's the most valuable thing HN could do, potentially world-changing. Previous generations had books and leaders that changed the course of history; this time it might be software or a software-based technique that turns the tide. I hope that at least you will keep it in mind.


I don't know that it's impossible. But if I'm certain about anything re HN, it's that it would be unwise to try to make it be that, for the same reason we don't do radical medical research on living humans.

Our first responsibility is to take care of what we have. The way to take care of a complex system is to be sensitive to feedback and adapt. We can apply that principle here. Look at what happens when the political taps get opened beyond a notch or two. Discussion becomes nasty, brutish, long, and predictable. That's what we want less of, so opening the taps all the way is not an option. For similar reasons, closing them all the way isn't an option either.

I don't disagree completely. I think there's a chance HN can slowly develop greater capacity in this area. But it would need to be very slow and not something we try directly to control. Anything as complex and fragile as HN needs a light touch.


Well... downvotes are definitely abused. They're not supposed to be used to express disagreement, and they are. All. The. Time. And it stinks to be on the receiving end of that.

But I'm not sure that they should be eliminated. The alternative is to leave moderation as the only way to deal with bad (abusive, off-topic, trolling, unintelligible) posts. I'm not sure that having people flag every bad post they think they see, and letting the moderators sort it out, is really the optimal way to do things.


> They're not supposed to be used to express disagreement

That's a common misconception. Downvoting for disagreement has always been ok on HN: https://news.ycombinator.com/item?id=16131314

I think people have the wrong idea about HN downvotes because they think Reddit rules apply to HN. It's a bit like how in Canada we think we have Miranda rights because we've seen it on American TV.


I would argue that while it's totally okay to do it, I'd find it better if people used it less for simply disagreeing. In my experience, the resulting discussion ends up being of poorer quality because of it (and getting less exposure, due to being pushed down and hidden once it hits a certain threshold).


Such evidence as I'm aware of points in the opposite direction: HN without downvotes would be like a body without white blood cells. Disease would quickly kill it.

The problem with your argument is that it doesn't reckon with just how lousy bad internet comments are, or how many of them there are, or how completely they take over if allowed to. To a first approximation, bad internet comments are the entire problem of this site.

It's easy to imagine an HN that would be just like the current HN, only with some negative X (e.g. bad downvotes) removed. Most of these ideas are fantasies, because the thing you'd have to do to remove X would have massive side effects. You can't hold the rest of the site constant and do that.

https://news.ycombinator.com/item?id=16131314


>bad internet comments

Bad != disagree. I think we all agree that it should be ok to downvote and hide "bad" comments. However the problem is that many good comments are downvoted simply because people disagree with them.

I think it might be better to remove the downvote and replace it with "flag", so people can flag bad comments (spam, abusive, pointless, etc). At least that way people would need to think a little before the comment gets flagged, which would hopefully result in fewer minority viewpoint comments getting hidden.


I'm not arguing that we should never downvote; if someone is writing garbage, I will happily downvote them. But maybe people are a bit too quick to downvote when they disagree...


Sure, but people have been saying that on HN for many years.

I think you'd have more success if you asked everyone else to upvote unfairly downvoted comments.


I try to do that, yes, though I always (wrongfully) hope that people change...


I stand corrected. I agree with zaarn, though - overuse of downvotes for disagreement is not helpful for having a real discussion.


This is why 4chan always had it right. Anonymous + minimal moderation. Instead of echo chambers you get extreme contrarianism. The upside of this is that non-conformist viewpoints aren't buried and low-effort trolling just gets ignored.


Reddit does not have strong moderation.

Some subreddits do have strong moderation; some subreddits think they have strong moderation but they have fucking idiot mods who call down trolls; and there are some subreddits that have permissive moderation and those subreddits leak.

> However if you create consequences or a cost to commenting you'd eliminate toxic comments and trolling.

You genuinely don't. There are forums where you have to pay real money to be able to read and post, and where acting like a jerk will get you banned. They haven't eliminated the jerks. About the only advantage is paid mods, which ensures some consistency.


I apologize -- I mean the platform itself offers strong moderation. Unlike Facebook, for example. I agree with your point, though.


It closes subreddits at the owners' whim, which seems strong. It doesn't need to be always exercising that power to have demonstrated that it has it and uses it.


My friend introduced me to a project that he'd been working on that touches on some of these issues.

https://doxa.network/

While I can't come to a conclusion on it (whether or not it's a good/bad idea) - I wonder what others in the HN community think about it. I apologize in advance if this is a bad place to comment, I've been mostly a lurker thus far!


I think this has been posted before, actually. The underlying concept, I get. But I'm not sure how it'll ever become a 'product'.

In addition to all that, I still fail to see how a blockchain would solve any of these fundamental issues.


Reddit and HN are both missing two key ingredients on the moderation side: Transparency and accountability.

There's nobody watching the watchmen, essentially. That leads to a lot of frustration, anger, mistrust, and abuse.


In particular, there is no guarantee that moderators are any good. Whoever registered a subreddit first "owns" it and moderates it themselves/chooses additional moderators. That's it.

This means there was a "gold rush" in the early days and if some shitter is sitting on prime real estate (like brand names of products) there's nothing you can do about it. If they decide to close the subreddit by making it private, nothing you can do about it. If they go inactive and are still squatting on prime digital real-estate, nothing you can do about it... if they later get hacked but are still inactive, doubly nothing you can do about it.


> Physical mail is sent to your house in order to get a single account.

That's how NextDoor works and it seems to work reasonably well.

There are some big communities that seem relatively healthy to me. For example, Instagram is easily my favorite social network. I see photos that my friends take and that's about it. I pull it up and am done with it in a minute or two.


Your list consists entirely of push media. Some examples of pull media that "play well with others", or at least are not actively anti-social, are podcasts and email listservs. Of course it's easier for advertisers to monetize push media.

In the long run push media is always going to be troll-ier and offend more people than pull media.


I believe that heavy moderation is necessary to weed out toxic stuff.

HN seems to be doing a great job in that regard (in my opinion), and it is a model from which other websites can learn.


I think one way to help prevent echo chambers is to have term limits on moderators. So many moderators become sour towards their own communities but feel an obligation to stay involved. Term limits might help with that and encourage new users to become moderators themselves.


Relate 'karma' to the ability to post at all?

More karma, more posting; less karma, less posting. Everyone starts every month/week/day with only so much, with no roll-over between timeframes. Modify it so that 'popular' threads cost more to post in. You can give karma to others via an upvote and take it away with a downvote, but still no roll-over. Troll/shill accounts would still get upvoted en masse, but less so, and it would be easier for mods to tell. (I'm sure you can model this without too much effort vis-à-vis the prisoner's dilemma.) You'd have to pick and choose which threads to comment in. Posting content would work similarly, with a slight modification to the cost of posting.


The NRA has its own social network where users must perform certain tasks before they can post or comment. You have to tweet at your legislator or share things to your personal Facebook to show fealty to the community and earn enough karma to talk to others in it.

If you're curious: https://www.bloomberg.com/news/articles/2018-03-01/the-nra-h...


Jesus, that is effective.


Are you saying this would create less of an echo chamber?


I'd think so. More popular threads would 'cost' more to post in and there would then be fewer posters in them as a result. 'Brigading' would be difficult.


On the contrary: popular opinions would gather more karma, allowing their holders to share their ideas more often, while unpopular opinions would quickly have their posting ability removed via downvotes. Soon enough only the prevailing popular opinion would be found.

I had the same idea at first but I don't think it would work in practice


Hmmm, you're right. Perhaps a sliding scale then? The cost to up/downvote increases exponentially?
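The sliding scale could look something like this; the base cost and doubling factor are assumptions chosen just to show the shape of the curve:

```python
def vote_cost(votes_already_cast, base=1, factor=2):
    """Cost of the next up/downvote within a period.

    Grows exponentially with use: the first vote costs `base`,
    and each further vote multiplies the price by `factor`.
    """
    return base * factor ** votes_already_cast

print([vote_cost(n) for n in range(5)])  # [1, 2, 4, 8, 16]
```

A handful of votes stays cheap, but brigading a thread with dozens of votes quickly becomes prohibitively expensive.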


> However, if you create consequences or a cost to commenting you'd eliminate toxic comments and trolling at the cost of less participation and an echo chamber (though, you could argue that not all participation is equal and you're mainly removing poor participants).

Twitter actually does this, but only for some types of accounts. Ever see the "see more" button under replies? That's where accounts rated as low quality go. Twitter has a sort of rating for figuring out whether someone is low quality, and such accounts usually get hidden. Even their likes and retweets get hidden.

It hasn't seemed to help overall on Twitter, though I have noticed I get fewer assholes replying to me.


I'd say hierarchical credibility. More authority. Give the leaders power to bless others with disproportionate power, who in turn have the ability to bless others with disproportionate power.

Remember that Reddit ultimately only imparts one vote per person regardless of whether you're an admin or moderator or whatever. End that system. We've already established that these systems are not libertarian - the leadership has an opinion (in /r/Science the opinion is that you must post good science) and we want to empower them to enforce it.

Not only that, but provide negative feedback. If you endorse a terrible person, make it impact your credibility.
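A minimal sketch of that endorsement chain, assuming made-up weights and penalty fractions (the specific numbers are illustrative, not part of the proposal):

```python
class Member:
    def __init__(self, name, weight=1.0, endorser=None):
        self.name = name
        self.weight = weight        # disproportionate voting power
        self.endorser = endorser    # who vouched for this member

    def endorse(self, name, share=0.5):
        # Delegation: an endorsee receives a fraction of the
        # endorser's power, who in turn can bless others.
        return Member(name, weight=self.weight * share, endorser=self)

    def penalize(self, amount=0.5, upstream=0.25):
        # Negative feedback: sanctions propagate up the endorsement
        # chain, so vouching for a bad actor costs the voucher too.
        self.weight *= (1 - amount)
        if self.endorser is not None:
            self.endorser.penalize(amount=upstream, upstream=upstream / 2)

founder = Member("founder", weight=8.0)
mod = founder.endorse("mod")      # mod starts with weight 4.0
troll = mod.endorse("troll")      # troll starts with weight 2.0
troll.penalize()                  # troll -> 1.0, mod -> 3.0, founder -> 7.0
```

The dampening on the way up matters: the endorser is hit, but less hard than the offender, so one bad call stings without wiping out a whole branch of the hierarchy.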


It is wildly inaccurate to call HN or Reddit “strong moderation” — the tooling alone is abysmally bad, plus Reddit can’t put the “post anything you want as long as it is not literally illegal” genie back in the bottle so easily after a decade.


I don't see the connection between the quality of the tooling and whether or not there's strong moderation. HN is pretty strongly moderated and reddit inherently allows for very strong moderation. Do you think HN is not strictly moderated or that reddit does not allow for strict moderation?

I'll agree that reddit doesn't seem to expose easy to use moderating tools, though.


I feel like HN is such an echo chamber it doesn't require strong moderation.


Real identities don't work when FB threads aren't indexed by Google.

If googling myself found comments I'd make on "platform X," you can be sure I'd carefully consider my comments on that platform.

Having anonymous commenting is important, but if you want a less toxic platform for mainstream comments, real identities + indexing is a good start.

The real WTF moment for online discussion is yet to come. When ML chatbots are able to comment indistinguishably from humans, then purely as a matter of time and scale, comment threads will eventually become chatbots arguing with chatbots, drowning out actual discussion by humans.


This nightmare scenario has natural stable states, I think. At some point, it will cease to be profitable to propagandize at machines, no? We should work to construct a society that negatively incentivizes bot proliferation, but it can't be a purely financial incentive. Free speech must not have a price tag, or else "stop, thief" may become impossible to shout.


IMO the problem is size/scale. If you're too small, then you just have people going through the motions of talking to each other in an attempt to make the platform seem real so more people will join it. If you're too big, you won't be able to curate effectively. There's a sweet spot in the middle: it starts at maybe only 20 or 30 users (the point at which it becomes impractical for everyone to know everyone), and when it ends depends on the culture, but end it will.


I'd like to see what impact posting limits would have. For instance, let every user make 2 submissions and 10 comments in a month. One of the issues on every forum is that it takes a lot less time to make a ton of low effort comments than it does to make one thoughtful post. Another is that a small minority end up making the vast majority of the posts (even when that minority is well intentioned).
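The bookkeeping for such a limit is trivial, which is part of its appeal. A sketch using the 2-submissions/10-comments-per-month figures mentioned above (those numbers are the commenter's example, not a tested recommendation):

```python
from collections import defaultdict

LIMITS = {"submission": 2, "comment": 10}

class QuotaTracker:
    def __init__(self):
        # counts[(user, month)][kind] -> posts of that kind so far
        self.counts = defaultdict(lambda: defaultdict(int))

    def allow(self, user, month, kind):
        """Return True and record the post if the user is under quota."""
        if self.counts[(user, month)][kind] >= LIMITS[kind]:
            return False
        self.counts[(user, month)][kind] += 1
        return True

q = QuotaTracker()
print([q.allow("alice", "2024-01", "submission") for _ in range(3)])
# [True, True, False] -- the third submission that month is refused
```

Keying on the month means the budget resets naturally, with no background job needed to clear counters.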


Another key factor is the community reaction to low-effort posts; Many users of social media are not looking for or willing to put in the time and effort to digest more in-depth discussion, and will happily skip over more thoughtful posts in favour of the next meme or bite-sized thought. In a platform such as Reddit, where votes per unit time are paramount, this results in the burying of most longer discussion.


Set up a forum and try it, then you too can be the mayor of an internet ghost town.


You still have Sybil attacks.


We’ve tried real identities plus serving as many ads against user conversations as possible (Facebook), and “algorithms” designed to keep users on as long as possible regardless of the cost to their mood, or the world around them. Perhaps “social media” should stop trying to be a way to make lots of money.

Naaah. Gotta monetize every inch of everyone’s life. It’s the American way.


I think Nextdoor uses this model and it's still notorious for trolling/internet fights.


I’m amazed at how much trolling there is on Nextdoor. It’s not hate speech level, and most posts seem pretty cordial but considering the community venue aspect I’m just floored at how reflexively shitty people can be the minute there’s a divergence of opinions.


Yeah, Nextdoor is astonishingly vile given the non-zero chance of actually running into one of your fellow members at the grocery store.

My hypothesis is that Nextdoor primarily appeals to those who already have that kind of territorial "they're all out to get us" mindset since territoriality is literally what the app is about. It's the angry "get off my lawn" old man of social networks and it attracts exactly those kinds of people. Mix in some Internet depersonalization effects and you get something pretty nasty.


It's a virtual home owners association in my experience.


My Nextdoor community gets a lot of hate speech. They have filters that catch the words you know, but they don't have a way to stop the ideas.

My neighborhood is 50%+ non-English-speaking Chinese. There are posts almost every day that say things like:

"Some people in this neighborhood need to OPEN THEIR EYES and stop hitting the gate with their cars. I'll post this in the appropriate languages so everyone and the community can appreciate the significance of my message. Ching-Chang Chow... Bang Bong, Bing bong bing."


That must depend on the neighborhood - I don't see any of that.


There was a podcast (I think freakonomics?) saying that sports forums had the least toxic political discussions on the internet. This was probably due to their common interest in a sports team over their political views.


>Thoughts?

Imagine a blank graph. Randomly place 1 million circles on it. No two with identical boundaries. This is our hypothetical Venn diagram of people's preferences for how communication happens online. Drop another million circles to represent how those people will actually act online.

It doesn't matter what points on the graph are labeled "toxicity," "echo chamber," "uncomfortably friendly," or "sparse comments, experts only." It only matters that the circles are not all identical.


Metafilter set the gold standard: accounts require a one-time payment to discourage sock puppets and they have full-time moderators who’ll jump in before behavior gets out of control and tell people to tone it down.

I don’t know an easy way to scale that model.


It seems like if moderators of subreddits were given the ability to A) limit posting rights to paid users and B) ban at will, most subreddits would see a much improved commenting culture.

It would also help reddit monetize.


> Real identities

Real real identities (i.e. government-issued digital IDs) have never been done. I am sure they will come eventually. The political process is just very slow compared to the pace of technology.


The purpose of using real identities (government ID) is not to facilitate debate and sharing of opinions, but to punish and neuter debate and limit sharing of opinions.

Compare the outcomes of totally anonymous reputation-based forums such as HN, reddit or 4chan with near-real identity forums such as Facebook or Linkedin.

There is a very open flow of ideas and debate on 4chan and other reputation-based forums. There is at least as much hate speech and trolling going on on Facebook as on HN, yet Facebook has near-real identities. LinkedIn has at least as much spam and criminal phishing going on as HN, yet LinkedIn has near-real identities.

HN would not be a better forum if everyone had to register with their government ID. The main benefit would be to make it easier to ban one person from accessing the forum, and silence that individual.

My vote is for reputation-based forums.


South Korea tried this for a while, but I believe eventually gave up on it. China effectively manages it transparently.


No one will use it. Anonymous commenting is much more fun.


> There's no real solution to this problem.

I disagree. There IS a solution, but no one likes it.

Segway: Football aka Soccer had a problem back in the day. Games became (more?) boring because teams would go up a goal, and just play "kick the ball to the goalie". Goalie'd pick up the ball, bounce it, pass to a player who kicked it back to the goalie, rinse and repeat. Then they instituted the back pass rule - https://en.wikipedia.org/wiki/Back-pass_rule - where basically, pass back to the keeper and he can't use his hands, only his feet. A generation of goalies had to learn to play with the ball at their feet, and the bit people enjoyed happened more often.

Rugby Union has had a similar evolution, although more intense. Rugby's goal is to have a game that can be played by people of all shapes and sizes, and they mostly succeed. George Gregan was a great player at 5'9", and most second rowers are over 200cm/6'7". But rugby always gets boring, because the coaches start playing boring rugby (lots of kicks, lots of positional play, less running). So rugby, every few years, overhauls the rules. For a year or two, the game is exciting again, IMHO the best sport in the world, then it is boring all over again as coaches play safe. We then get another overhaul.

I think the Soccer back-pass rule and rugby's ever-changing rules are models of how changes in rules can dramatically affect the quality of an activity, but I don't think they are doable for online sites at scale. Instead, the only solution I can see working is to constantly change to new platforms.

I started in SEO circa 2001, and there were heaps of forums run on phpBB et al. They had all recently started, so they were figuring themselves out. The forums all interacted, but they all had their own feel and their own rules. Over a few years, they developed "personalities", and everyone could find a place that suited them. This "personality" then morphed into a kind of group think, and the forums grew into echo chambers where the rules were strictly enforced to keep out the others. Each forum became hostile to all the evil others, and they devolved from fun places to hang out into the same old same old.

SM then came along, with sites like FB, Digg and reddit being born, and this process started again. A new place, no real rules yet, and it was the same exciting process of discovery. Over time, the bad parts set in, and these places became stale echo chambers filled with all the bad bits everyone talks about.

That's why I think the only real solution is to tear it all down and start afresh, because this process has repeated several times now. I think that partly explains why SnapChat and the other platforms exist, and why, IMHO, SnapChat et al will grow to a point then fail to grow anymore, as the "freshness" fades and all the nastiness intrudes. Unless sites can figure out a way to break this cycle, which after almost two decades of watching it is either unlikely (pessimistic) or too soon to tell (optimistic).

TL;DR when a platform gets entrenched, it starts to exhibit more nasty traits, and a new platform started afresh is the only solution.


just fyi I enjoyed your comment and anecdote but the word you're looking for is "segue" not "segway."


Not quite correct.

We've tried:

- Real identities (Facebook comments)

- Voting and self moderation (Reddit, HN, etc.)

- Strong moderation (Reddit, HN)

All within the context of rampant financialisation, land policies returning us to feudalism and continuous lying about the banking bailout including removing the one candidate who was going to take on the banks (Bernie).

Maybe it's not "the internet" that's the problem. Maybe the dissemination of information as we slide into this rentier hellpit is causing people to be pretty pissed off?


How can I read up on these land policies?


Here's what I know or believe I know about feudalism. It's almost entirely from the book https://www.amazon.com/Pre-Industrial-Societies-Anatomy-Pre-... , which while excellent does not deal with feudalism in any depth.

- European feudalism was an unusual system in that the government had no taxing power. The lord who owned land held the taxing power ("feudal dues") over that land, and the king funded himself by collecting feudal dues from land he owned personally, rather than e.g. by taxing the dukes. This might contrast with a more advanced state of civilization in which the caliph / emperor / whatever collected taxes directly from everywhere by virtue of being the supreme ruler, and paid a salary to his lower administrative functionaries. Or it might contrast with a system where the use of land wasn't much of a source of taxes. Or both of those latter things might be true simultaneously.

The US system of property taxes has a lot in common with the system I've described above, and some obvious differences. Similarities:

- The federal government ("king") can't assess property taxes. Only the states ("local lords") can do that.

- People other than the government cannot own land outright, but must pay the property tax ("feudal dues") for its use every year.

Of course, rather than the federal government receiving tax income based on federally owned land, it instead double-taxes the citizens of the states. (But on mercantile revenue rather than on land.) This is arguably worse than the feudal system.


How are devolved local states equivalent to private landlords? States capture land value via land tax and socialise this. Private landlords capture it and keep it.

Night and day.


Are you referring to a feudal lord as a "private landlord" or a "devolved local state"? Both would be more or less fully appropriate.

"Devolved local state" is slightly more accurate than "private landlord", because unlike a landlord in a more commercialized society, a feudal lord was not legally able to sell the land he owned. He was legally able to govern it.


My interest is that the feudal lord was living off the backs of others. The state taxing land to build hospitals is not the same thing.


You might enjoy Stop, Thief!: The Commons, Enclosures, and Resistance https://www.goodreads.com/book/show/17802312-stop-thief


Lords often got their start in lording by building a bridge.


I would read the heck out of a book that purported this.


I don't like The Economist; however, a while back they did a piece on land value tax. Maybe HN will find it more palatable coming from this journal:

https://www.economist.com/blogs/freeexchange/2015/04/land-va...


The real solution is making blatant lies and misinformation illegal.


Without perfect information and transparency there's no way to make lies illegal.


And who determines what's true and what's not?


At the risk of excessive red tape, the judicial system has been doing a reasonable job of determining facts in most modern countries.


It actually doesn't. Example: Did OJ kill anybody?

The judicial system determines whether someone is guilty based on evidence that it itself filters.

What about more complex things? Like what if I say "P = NP"? Is that a lie?

And then, do you want a trial for every comment? That will not work even for a fraction of comments.


>Like what if I say "P = NP"? Is that a lie?

That can't be determined to be a lie unless someone solves the problem. It's only a conjecture or assertion until then.

>And then, do you want a trial for every comment? That will not work even for a fraction of comments.

It only really has to apply to political statements made by the most powerful office holders, and only when contested, and only when there is an imminent intent to deceive and impact policy.

It's not really an all-or-nothing situation. It's just a matter of how much can be achieved within a reasonable cost.

