Hacker News

Google has largely solved the spam problem for end users. What I can't figure out is why platforms like Facebook and Twitter couldn't mark these sorts of posts as, essentially, spam, using the same sort of rules and heuristics as email spam filters. Why can't they look at the metadata of verified troll-farm-generated clickbait/junk/spam/fake news, build their own filters, and demote that content?
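For concreteness, the email-style filtering proposed here could be sketched as a tiny naive Bayes classifier over word counts. Everything below is a toy illustration with invented training data and Laplace smoothing, not anything Gmail or Facebook actually runs:

```python
# Toy naive Bayes spam scorer: compare the log-probability of a post's
# tokens under a "spam" word distribution vs a "ham" one.
import math
from collections import Counter

spam_docs = ["click free prize now", "free money click here"]
ham_docs = ["meeting moved to tuesday", "here are the quarterly numbers"]

def counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = counts(spam_docs), counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(tokens, class_counts):
    total = sum(class_counts.values())
    # Laplace smoothing so unseen words don't zero out the probability
    return sum(math.log((class_counts[t] + 1) / (total + len(vocab)))
               for t in tokens)

def is_spam(text: str) -> bool:
    tokens = text.split()
    return log_prob(tokens, spam_counts) > log_prob(tokens, ham_counts)

print(is_spam("free prize click"))        # True
print(is_spam("tuesday meeting numbers")) # False
```

The hard part, as the replies point out, is getting labelled training data and features that separate troll content from ordinary heated posting.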

Simple. Because they're making money on the garbage. It's a misalignment of incentives. Until that is fixed, through reorganization or regulation, the problem will persist. They will only do enough to fight this -- both in terms of technology and public image -- so that it doesn't impact their bottom line(s).




I think you’re trivializing the difficulty in categorizing the low quality, and extremely brief, comments of social media.

Go look at a political subreddit, or some controversial tweet. There’s little conversation or context. Most users could be bots.

Google often has the advantage of having hard (url) or soft (product/key word mention) links that point to something extremely rare and “unimportant”. Everyone is bitching about politicians and policy in a way that’s not far from a Markov chain.


The larger problem is factual accuracy. With all the propaganda going around, it is hard to find the truth. Any attempt to classify such opinions will inevitably further someone's propaganda.

As someone said, we live in a strange timeline where comedians are more trustworthy than politicians.


> With all the propaganda going around, it is hard to find the truth

If bot/organized posts are properly classified as spam, then surely the factual accuracy of someone's postings doesn't really matter anymore, as anything "inaccurate" falls into the realm of "opinion" or simple human error. Presumably it's OK to be wrong.

So I don't see any problem with a person believing that e.g. Epstein was killed, and posting online about this belief. I don't see a problem with grassroots communities arising around this belief. It only seems to me to be an issue when such postings are sponsored/encouraged because of e.g. some political aim, and proper spam detection would mitigate this.


FB does work with 3rd party fact checkers. We should be able to trust people from places like Reuters to remain fairly neutral.


Authority doesn't really work on the net. People will discard the information anyway if they want to.


When they use Breitbart as a 3rd party fact checker how can we trust them?


3rd party fact checkers themselves are highly biased. If you think Reuters is neutral, then you really don't understand the "news" business. The "news" business isn't in the business of news; it's in the influence business. The founder of Reuters started off peddling radical revolutionary propaganda.

Paul Julius Reuter worked at a book-publishing firm in Berlin and was involved in distributing radical pamphlets at the beginning of the Revolutions of 1848.

https://en.wikipedia.org/wiki/Reuters#History

Besides, the problematic part of "news" doesn't involve "non-factual" news but rather political "news": whether globalism is good or bad; what, if anything, to do about climate change; nonsense like veganism; etc. These, by their nature, can't be fact-checked, because they are value/contextual judgments rather than factual ones. Capitalism vs communism, nationalism vs globalism, immigration vs nativism, alt-right vs alt-left, traditional media vs social media, etc.


Reuters having started off spreading radical pamphlets is about as relevant as Hitler being involved in the development of Volkswagen.

Nobody in charge needs to (or even can) decide if something like globalism or communism is good or bad, just that no obvious lies are spread while discussing it.


> I think you’re trivializing the difficulty in categorizing the low quality, and extremely brief, comments of social media. Go look at a political subreddit, or some controversial tweet. There’s little conversation or context. Most users could be bots.

Never mind Google, most people are unable to recognize when they are behaving in a bot-like manner themselves... and if you point it out to them, in my experience 90%+ of the time they will become hostile and double down on their clearly incorrect statements.

EDIT: Which is often reflected in willingness to entertain or debate ideas, as well as voting patterns. But alas, it seems I am done on HN for the day, due to "posting too fast".


How could they not know?


This would be a great pet project to work on, to get a good understanding of why it's not easy. Plus, if you succeed, you'll be incredibly wealthy.


Ah. So it's my job to watchdog Facebook? Proving what we know must be true? Because reasons. And me performing this due diligence is going to make me fabulously wealthy? Because magic.


This is 100% correct. FB and TWTR are publicly traded companies; their executives and their board have convinced themselves in an extremely cowardly fashion that their only duty is to the shareholders. They absolutely know that the second they take this stuff seriously, they have to report both how bad it was/is, and how much traffic/revenue to their product was fraudulent at best.

It’s so gross.


It's ridiculous how blatant it all is, to the point that even modern media (e.g. the TV show Succession) pokes at all the cancer this stuff produces, and it's one of those things where most people are like "yep, it's happening" but nothing is being done to stop any of it. I stopped using Facebook/Twitter ages ago. That stuff is the downfall of modern society.


I think you're absolutely right. It's frightening that there are so many wrongs happening in society today yet nobody seems to be able to stop them from happening. I'm not sure what has gone wrong here but I suspect it all stems from deep-rooted corruption in our leaders and institutions.


One of the unfortunate side effects of significantly reducing the power of democratically elected governments is that while the government has less power to do damage, it also has less power to correct wrongs.


On the other hand, if you reduced the Federal government's power enough, it wouldn't matter who was President.

The staggering amount of power we grant to the executive branch is one of several elephants in this particular room that no one really wants to talk about.


I think it's a pretty far stretch to say that the power of the Federal government has been "significantly reduced" in the past hundred years or so.

Happy to hear a good counter-argument, though...?


It's not corruption per se, it's perversion. Honor and integrity have been usurped by greed, money and power. As long as money = speech and corporations are considered constituents, the issue will persist. The real solution is to outlaw campaign contributions and lobbying... but that's not going to happen as long as the foxes are guarding the hen house.


The executives and board members are not delusional. They are following the legal precedent set by Dodge v. Ford Motor Co., a 1919 decision which held that "A business corporation is organized and carried on primarily for the profit of the stockholders. The powers of the directors are to be employed for that end. The discretion of directors is to be exercised in the choice of means to attain that end, and does not extend to a change in the end itself, to the reduction of profits, or to the non-distribution of profits among stockholders in order to devote them to other purposes..." from https://en.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.

BTW, I agree with you that it is gross, but thems the rules. So if we extend this logic, we need to convince the corporations that it is not in their financial interest to continue with the status quo.


Go talk to an average person about a subject outside their career expertise.

No matter how "smart" they seem like they should be you'll get some of the worst, lowest quality opinions imaginable.

Most people do not know how to say "I'm not really sure about that". The absolute worst thing you can do to the average person is ask them to think about a new concept, and the internet is a medium that puts a box in front of you and says "have your say".


If you manually type up an email with false political claims and send it to a gmail/yahoo mail account, it'll likely go through.

The spam blocking they do is designed to eliminate unwanted bulk mailings from automated systems, not some one off bit of content from an individual.


Yes and no.

If it is reported by a recipient using the report button most email providers have, then your email host gets a black mark in various databases. With repeat reports over time, more and more of their legitimate customers' mail gets filtered.

That means your email host is incentivised to find and shut down spam accounts, ideally before they start sending mail. Register a bunch of accounts from one computer or from a proxy? Closed.

You could get around this with your own SMTP server, buuut... if you send a lot of reported emails, your home ISP will chuck you; they don't want their IP block blacklisted. Your cloud SMTP spam server will be pulled by Amazon, and they won't let someone who seems to be you create a new account.

So yes, you can send a few manually written emails and send them to a couple people at a time, but anything with wider reach will be shut down rapidly at various levels.
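The report-and-reputation loop described above can be sketched in a few lines. The weights, threshold, and function names here are invented for illustration, not any real blocklist provider's scoring:

```python
# Toy sender-reputation tracker: spam reports push a sending host's score
# up; unreported deliveries slowly repair trust; over a threshold, mail
# from that host gets filtered.
from collections import defaultdict

REPORT_WEIGHT = 1.0
LEGIT_WEIGHT = -0.1
BLOCK_THRESHOLD = 5.0

reputation = defaultdict(float)  # sending host -> accumulated spam score

def record_delivery(host: str, reported_as_spam: bool) -> None:
    reputation[host] += REPORT_WEIGHT if reported_as_spam else LEGIT_WEIGHT
    reputation[host] = max(reputation[host], 0.0)  # don't go below zero

def should_filter(host: str) -> bool:
    return reputation[host] >= BLOCK_THRESHOLD

# Repeat reports over time push the host over the threshold.
for _ in range(6):
    record_delivery("mail.example.com", reported_as_spam=True)
print(should_filter("mail.example.com"))  # True
```

The point of the thread stands either way: every layer (provider, ISP, cloud host) has an incentive to act on these scores because its own deliverability is on the line.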

Meanwhile, you can script create thousands of twitter bot accounts with obvious bot names and... If they are looking for them, they aren't looking harder than they are being made to.

I guess this is a strong argument for federated Internet infrastructure. Everyone is incentivised to play nice because their partners can cut them off.


People report stuff they just don't like. You would need manual content control again.


In addition to the making-money thing, I think it's often a class-interest thing. As a recipient of the harms of paid disinformation, I'm perfectly happy to ban it all, and with stiff legal penalties. But if you're a billionaire (or work for one), then banning paid disinformation means you're giving up a tool of power, and taking power away from the people you regularly hobnob with.


As a billionaire your interests are a bigger target for well funded or well organised opponents as well. And you probably have a lot of other ways at your disposal to exert influence. So I don't see a net benefit of disinformation specifically for billionaires.


Billionaires have plenty of ways to fight with other billionaires. Disinformation gives them an advantage over the very large number of non-billionaires.


Billionaires also have plenty of ways to influence large numbers of non-billionaires. They can own media empires and think tanks, fund/lobby politicians and political movements with armies of volunteers, or become media personalities themselves.

I would be interested in an analysis of who funds those troll campaigns. I'd wager billionaires rank very far below governments, political parties and corporations.


Again, that they have other tools doesn't mean that they want to give up this one. And they're not more likely to just because those groups use it. Indeed, governments and political parties are generally quite solicitous of the interests of the very rich. [1] As, of course, are corporations.

[1] https://www.vox.com/2014/4/18/5624310/martin-gilens-testing-...


>Again, that they have other tools doesn't mean that they want to give up this one.

It means that disinformation is relatively less important to billionaires compared to others.

Not sure what the linked article has to do with it.

Edit: The reason why I'm even debating this not very important difference is that I feel that structural issues with our political and economic system are too often obscured and trivialised by blaming it all on powerful evil individuals.

For instance, billionaires want the companies they own shares in to make as much money as possible. CEOs who measure their success in increasing share prices are therefore incentivised to occasionally break the law.

Does that mean billionaires wanted or even told the CEO to break the law? Not necessarily. Does it mean CEOs wouldn't break the law if the shares were owned by a pension fund and not by billionaires? No.

So there are more complex structural issues that shouldn't get buried in too much finger pointing at rich individuals. Of course that doesn't mean there are no evil billionaires or that some of them wouldn't like to own a personal troll army. Who knows.


The problem is this: Trolling engages users. It makes users angry and it keeps users on your site because they are now “engaged” and being “retained”. If we got rid of all the noise (read: All of the heated arguments not backed by objective facts), many people would spend a lot less time online.


Detecting Gmail spam is a wildly different problem than detecting paid shills on a forum.


Google has not solved the spam issue. Search results have become worse, and alternatives such as Bing or DuckDuckGo often give better results. The same can be said about Gmail, which quite often misses spam or marks non-spam as spam.


Apparently FB now prevents certain posts from spreading, making them only visible to the author. An example is one containing "Alex Jones". It's like a shadow ban, except FB will notify the user.

Naval Ravikant made a very good point on Joe Rogan where he said that as soon as social media platforms decided to censor content, they started a war they can't win, because they end up having to choose sides. He said their position should be (recalling from memory here) "we're a publishing platform not responsible for the content that's being published. If you have a warrant, we'll take it down; otherwise it's hands off". Censoring, demoting or stopping the spread: it's basically the same thing.

They absolutely are making money on the garbage as well, as you said.


> "we're a publishing platform not responsible for the content that's being published"

Can't help but notice that this kind of argument never really worked for torrent platforms.


> Google has largely solved the spam problem for end users.

Google hasn't solved the spam problem for end users at all. Google has solved its liability to spam.


I think OP means spam email via Gmail, which it pretty much has done save for the odd spate of 3-4 every few months.


It’s easy to see when a new email account tries to send thousands of emails to people they don’t know.

But how do you tell the difference between a political activist who just joined Twitter and a troll account? You seem to think this is a trivial problem so I am curious to see your answer.


The number of political activists who are just joining Twitter now and want to start by posting many times a day about one issue must be small compared to the number of troll accounts. It should even be relatively easy to come up with a model to predict whether someone is an experienced user based on the actions they take on the platform.
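A toy version of the kind of model imagined here, scoring accounts on behavioral features. The features and thresholds are made up for the example, and (as the reply notes) real bot detection is much harder than this:

```python
# Hypothetical rule-based troll heuristic: flag brand-new, high-volume,
# single-issue accounts. Thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    posts_per_day: float
    distinct_topics: int

def looks_like_troll(a: Account) -> bool:
    new_account = a.age_days < 30
    high_volume = a.posts_per_day > 20
    single_issue = a.distinct_topics <= 2
    return new_account and high_volume and single_issue

print(looks_like_troll(Account(age_days=3, posts_per_day=50, distinct_topics=1)))    # True
print(looks_like_troll(Account(age_days=900, posts_per_day=4, distinct_topics=12)))  # False
```

Of course, any fixed rule like this is trivially gamed (age the account, vary the topics), which is roughly why the problem is not actually easy.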


Once you come up with this “relatively easy” model I am sure any company would love to pay you good money for it. It’s a pity they never thought of that before.


I think a big part of Google's solution has to do with the number of recipients of a message with a particular hash.

Whereas social media doesn't really have a To: list. By design it's broadcast. That puts a major limitation on using a similar solution.
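A minimal sketch of the hash-plus-recipient-count idea; the threshold and data structure are assumptions for illustration, since Gmail's actual pipeline isn't public:

```python
# Toy bulk-mail detector: hash each message body and flag a hash once too
# many distinct recipients have received it.
import hashlib
from collections import defaultdict

BULK_THRESHOLD = 100

recipients_by_hash = defaultdict(set)  # body hash -> distinct recipients

def is_bulk(body: str, recipient: str) -> bool:
    digest = hashlib.sha256(body.encode()).hexdigest()
    recipients_by_hash[digest].add(recipient)
    return len(recipients_by_hash[digest]) >= BULK_THRESHOLD

# One hand-typed mail to a single recipient is not flagged...
print(is_bulk("hi, lunch tomorrow?", "alice@example.com"))  # False
# ...but the same body blasted to many addresses is.
for i in range(200):
    flagged = is_bulk("BUY NOW!!!", f"user{i}@example.com")
print(flagged)  # True
```

This also shows why the trick doesn't transfer cleanly: a tweet has one copy and no recipient list, so there's no per-recipient count to threshold on.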


Google hasn't solved "spam". Solving email spam is pretty easy.

Solving comments isn't.

Also, Google has the same incentives as Facebook and Twitter. I don't see why you are separating Google from Facebook and Twitter.



