Hacker News
Twitter Is Crawling with Bots and Lacks Incentive to Expel Them (bloomberg.com)
236 points by rayuela on Oct 13, 2017 | 183 comments



On the contrary, it would appear that Twitter is incentivized to encourage bots and trolls to flourish.

The definition of a bot could be problematic though. If a bot is merely automated posting, then virtually every company, news org, website, and anything else that uses automated posts is a "bot", and are those bad? I don't think so, those are useful. The problematic bots are basically automated trolls.

Troll bots increase "engagement" metrics by quite a bit; this is obvious to anyone who looks at the automated bot replies to virtually any post from a news agency or anyone with a large Twitter following. They also jump onto every hashtag or breaking-news event, spreading garbage. You'll find dozens of bots making absurd statements, followed by bots responding to those bots, and clueless people arguing with the bots. Combine that with the endless seas of human trolls that also jump into everything, and those interactions probably make up a huge share of Twitter usage.

Twitter remains useful for scanning headlines and breaking news, but outside of that the service is a complete mess.


Fun story: I once wrote a primitive Twitter bot as an experiment, and forgot about it. It's been running on a Raspberry Pi unattended for a few years now, tweeting random stuff and periodically "engaging" with a predefined set of topics.

It has a Klout score. It has more followers than me.

[edit: spelling]


Do me a favor and check up on it once in a while.

If it becomes self-aware, we may have a problem.

I've seen this movie before somewhere...


I need to make a kill switch of some kind.


I wonder how many are other bots? Also, Klout still exists?


Millions.

Klout still exists, apparently. I don't see many people referencing it anymore though.


Bots can be useful. Good bots (ones that don't pretend to be humans or fake human engagement) should be allowed and regulated.

The bad ones though ruin the experience for everyone (and defraud advertisers).

I've previously attempted to classify bots somewhat:

https://medium.com/@iTrendTV/dumb-bots-have-ruined-twitter-s...


I propose marking bots as such. Keep following useful bots, but users will at least know that a troll is automated and can act accordingly without wasting time and energy.


Posts and interactions like likes or retweets made via the API should be marked as automated, but identifying users as bots is problematic and not very useful, as there are good bots and bad bots.

Many users use some automation, and that's a good thing, e.g. posting blog posts when published, bots that repost top #shithnsays for those who want to follow it, that amusing horse books bot, etc.

More problematic to me than bots are the many, many ways the Twitter platform simply rewards the wrong thing, from verified Nazis to crowds of trolls spamming hashtags.


What about the many third party Twitter apps some users prefer to use as opposed to the official client?


They could just identify those as 'posted from x', as twitter should know.


Why is it problematic to mark a good bot as a bot?


It's problematic to identify whole accounts as bots. Many accounts post partly from a bot and partly from a human, so what are they? Better to mark individual posts: the user would see right there that something was posted via the API, it would be harder for political spambots, for example, to flood threads, and you could have an option to hide API posts in a thread. This would drive some bots to use Selenium etc. (perhaps some do already), but I think it would be more effective than just marking accounts as 'bot' with fallible detection methods.

Also, most of the problems on Twitter are caused by humans, not bots. Humans can be cruel: they harass, bombard victims with messages, and threaten them with death or rape (this is regularly happening right now to women on Twitter), and those are the more problematic interactions. Yes, spambots liking posts or replying are sometimes annoying, and the ones that DM you are pretty bad, but they're not as annoying as humans at their worst.
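The per-post (rather than per-account) marking idea above could be sketched roughly like this. All names and the data model here are hypothetical illustrations, not Twitter's actual API or schema:

```python
from dataclasses import dataclass

# Hypothetical sketch: flag automation at the post level, as the comment
# suggests. Field names are invented for illustration.
@dataclass
class Post:
    author: str
    text: str
    via_api: bool  # set by the platform when the post arrives through the API

def visible_posts(thread, hide_api_posts=False):
    """Return the thread, optionally hiding API-originated posts."""
    return [p for p in thread if not (hide_api_posts and p.via_api)]

thread = [
    Post("alice", "hand-typed reply", via_api=False),
    Post("blogbot", "New post published!", via_api=True),
]
print(len(visible_posts(thread, hide_api_posts=True)))  # 1
```

The point of the design is that mixed human/bot accounts need no classification at all; only individual posts carry the flag, and readers can filter per thread.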


"Replicants are like any other machine: they're either a benefit or a hazard."

Yes. Make bots legitimate, with verified bot status for ones that are truthful and helpful to the end users.


Couldn't Twitter immediately improve "reputation" by having people display badges based on verified links to other accounts (social, whatever)? Maybe even take it as far as Airbnb does with a copy of your license.

I use Twitter daily, but mostly as a news feed, or with people I already know in real life. There is no point in engaging with a random person on Twitter, because it's quite often a bot or someone that wants a fight.


> The definition of a bot could be problematic though. If a bot is merely automated posting, then virtually every company, news org, website, and anything else that uses automated posts is a "bot", and are those bad? I don't think so, those are useful. The problematic bots are basically automated trolls.

Except if you see that for example the BBC's World News account is a bot, that makes sense because the BBC isn't a person, it's an organization.

The problem is bots masquerading as individual people.


Marketing bots and plugins on Twitter are 3 things.

- A reason not to pay to advertise on twitter

- A way to make the platform look more active than it is

- A distraction to users looking for conversations


I think part of the problem here is that Twitter can't decide what it is.

Is it a place for people to chat with friends?

Is it a platform for free speech or are there limits to that speech?

Is it a platform for brands/people to broadcast to a broad set of followers?

Is it a source of news and a place to talk about politics or what is going on?

It can't be all of those things.


Twitter's UI is an example of this.

It seems it was not designed with any purpose or philosophy in mind.

It is not good for reading text. Not good for reading conversations. Not good for looking at images.

I do admit that they got the scroll until your finger is sore thing down.


It's still useful for providing feedback.


How about the SEC opens an investigation into whether Twitter is defrauding investors/advertisers by including bot accounts in user metrics? Or is that too hard to prove?


Facebook has been caught dead to rights a few times, like this one:

"According to a recently published report, Facebook says they reach 1.5 million Swedes between the ages of 15 and 24. The problem here is that Sweden only has 1.2 million of ’em"

"Facebook’s Ads Manager says that the website is capable of reaching 41 million Americans between the ages of 18 and 24. The problem is there are only 31 million Americans of that age." [1]

Nothing ever happens, other than maybe some grumbling from paying advertisers.

[1]https://mumbrella.com.au/will-facebook-ever-stop-bullshittin...


>Why should a person's mere nonexistence prevent you from serving an ad to her? You have built a scalable technology platform; it hardly seems fair that your growth should be limited by the human species's comparative lack of scalability.

(Via Matt Levine, tongue firmly in cheek)

http://www.thevab.com/wp-content/uploads/2017/09/Facebooks-R...


I didn't read your source, but I'm sure that for those age ranges the inflated numbers could be explained by underage kids lying about their age to be able to create an account.


A lot of people create alternate personas for varying reasons. There is a sizable cohort of people who like to create accounts for online fantasy. Also, there are many game accounts (some that even produce money).

Trying to police them off would be a mistake by Facebook.


>a lot of people create alternate personas for varying reasons.

I'm sure they do, but I doubt 30% of them do. That's roughly what you'd need to balloon from 31 million people to 41 million accounts. The 30% also assumes that EVERY 18-24 year old American has a Facebook account in the first place.


You'd be surprised. A lot of people are technically inept.

I've seen people create new accounts every time they lose their password. I also had a friend who'd create a new account every time she had a personality change of some sort (new hair? new profile!). Another friend owns a couple businesses and has accounts for each one (Pages are completely lost on him...). My aunt has entered her birthday incorrectly so she's supposedly 19 years old right now. A neighbor has a "family account", which is really just the wife posting pictures of kids.

Don't take "technically inept" to mean "my abilities, just a bit lesser". Take it to mean "no ability to reason about software applications whatsoever".


If they aren't logging into those accounts any more, then Facebook's advertising isn't "reaching" them. That explanation doesn't paint Facebook in a good light either, because presumably they are measuring how many impressions they serve and to whom, right?


I agree, it's not a good look for Facebook to advertise inactive accounts (maybe they don't?), but it's not really in their immediate interest to do so.


Facebook makes around 80% of its revenue from mobile.


Start by assuming virtually every one below 21 that has an FB account has at least a couple of accounts - One with their public persona and one their parents won't be allowed to see. Add in accounts pet owners have for their pets etc. Then add in the bot farms. You'd get there pretty fast.


Another way to slice those data - 33% don't have accounts, 33% (9.9 million) have just one account, and 33% have the remaining 30.1 million accounts spread between them - on average, 3.04 accounts per person in that group, but one or two people/groups may have thousands of accounts.

Edits: Correcting math. Thanks, kbenson.
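The split above can be checked directly. This assumes the rounded figures the thread is working with (~30 million people in the bracket, ~40 million claimed accounts):

```python
# Verify the thirds split: a third with no account, a third with one,
# a third holding everything else. Figures are the thread's rounded numbers.
people = 30_000_000
claimed_accounts = 40_000_000

one_account_group = 0.33 * people                          # ~9.9M people, one account each
multi_account_group = 0.33 * people                        # ~9.9M people holding the rest
remaining_accounts = claimed_accounts - one_account_group  # ~30.1M accounts

avg = remaining_accounts / multi_account_group
print(round(avg, 2))  # 3.04 accounts per person in the multi-account group
```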


30% of the populace has 3+ accounts? That doesn't make sense - it doesn't follow a power-law distribution.

More likely, the .01% have hundreds to thousands of accounts. What does that imply?


If somebody has hundreds or thousands of accounts, it implies they are running bots ... which I guess brings the whole argument back to the point that active humans with multiple personas are not responsible for inflation of reach stats.


You're implying that identities created using persona management software [1] are bots? I guess you could, though they'd likely pass a Turing test.

[1] https://www.dailykos.com/stories/2011/2/16/945768/-


I'm implying that if you have hundreds of accounts, you must be using software to manage them and aren't actively engaged with all of them yourself. Therefore, they are no better than bot accounts.


Well, to correct it entirely, you should probably use thirds instead of 30%, since that was either an allowance for simplicity or an error originally. I believe the correct answer is just under 3 accounts per person in the group with multiple accounts.


Alternative accounts are much more common on sites like Twitter and Instagram. I don't think it happens as frequently on Facebook.


That would be because Facebook actively makes it difficult for you to have multiple accounts.


It's not amusing to me that fake numbers can be used to justify a higher valuation, especially when those fake accounts are used to scam or mislead the "real" ones.

At some point, should there be regulation on what terms like MAU mean?


I don't think MAU numbers are being used to justify valuations. They may inform analysts' understanding of growth trends, in general. But, most analysts are looking at revenue, revenue/user, and margins.


This is the classic foible of calling for the mere existence of some regulation and assuming that is the same thing as the world really behaving according to some good rule.

Right now, investors researching Facebook stock have to be aware that published metrics include botspam. Suppose an authority imposed a regulation to force Facebook to publish "bot free" metrics.

Then what will happen? Legal will impose a demand on engineers, and the engineers will scratch their heads and do what it takes to satisfy legal, and legal will scratch their heads at the result and write up whatever report it takes to satisfy the regulator.

In the end, the relationship of the published number to reality will have changed in some difficult to predict, probably undocumented, way. Will investors then really have an easier time doing their research?


They haven't really been caught on anything. These aren't ad measurement results saying they reached more people than exist; this is a reach estimation service. It's a technically complex problem that needs to take an arbitrary targeting spec with boatloads of dimensionality and return a reach estimate in hundreds of milliseconds. It's based on a sampling method, of course. The reach estimation tool is a planning tool for approximations and nothing more. That's why advertisers aren't making a big fuss.


It's not that challenging to present real stats on active accounts.

OTOH, it's pretty tempting to use numbers already pre-padded for you.


Some bots are not technically bots.

There may be real people behind them, but they are "employed" by various agencies as proxies for RT/likes. Some do it manually, others employ automation.

Also, many of the more prominent accounts are almost fully scripted - these people aren't really "there". Here's an example of a Salesforce exec (Verified account, btw) employing "automation":

https://twitter.com/iTrendTV/status/738375663229505536

We've done some twitter fraud analysis in the past, and it's terrifying.


Exactly. I know some people that are indistinguishable from bots even in a face to face conversation.


Ha. But the bots aren't viewing advertisements, which is what matters to shareholders.


... here's a current example. Just saw this promoted tweet from Prudential on my timeline. It's already generating engagements (10 RTs, 38 Likes):

https://twitter.com/Prudential/status/918475795479351296

Click on the Likes below the tweet (will show you list of accounts engaged), and tell me this is real human engagement.

You will see accounts like this:

https://twitter.com/Electriflyy

Twitter won't consider this a bot, yet this "person" already liked 192,000 tweets:

https://twitter.com/Electriflyy

Here's another account who engaged with this promoted tweet, for which Prudential is getting billed:

https://twitter.com/JonTheGr8

Again, look at the number of tweets this "person" liked (72,000).

Other "engagements" come from:

https://twitter.com/RobertT70111579 https://twitter.com/TeghanJustin https://twitter.com/brandon_gorsuch ... Notice the pattern?

Prudential is getting ripped off right in front of us.

This is happening now across all brands, at massive scale.


I like/reply to most ads I get so they waste money on nothing. That's my little fuck-you to the ad industry.


Looking at that RobertT account I wouldn’t be surprised if it’s one of the Russian IRA accounts.


Facebook is defrauding advertisers by including ad block users in their metrics!


Bots generate engagements.

"Engagements are new followers from Promoted Accounts and/or clicks, retweets or likes on Promoted Tweets. You will never be charged more than your maximum bid and you usually pay less."

https://business.twitter.com/en/help/overview/twitter-ads-gl...

The bulk of all engagements now are fake/automated.

P.S. as a bot, you don't need to "see" a tweet to engage.


How does accounting reconcile users who have ad blocking enabled? Do they even bother?


Now if you can trick the bots into viewing the ads and change their behaviour...


> How about the SEC opens an investigation into whether Twitter is defrauding investors/advertisers

The SEC protects investors, not advertisers. When non-securities fraud gets companies in trouble with the SEC, it's because said companies failed to disclose the fraud or the risk of the fraud to their investors.

Facebook discloses their "advertising revenue could also be adversely affected by a number of other factors, including...the availability, accuracy, and utility of analytics and measurement solutions offered by us or third parties that demonstrate the value of our ads to marketers, or our ability to further improve such tools; adverse legal developments relating to advertising, including legislative and regulatory developments and developments in litigation; decisions by marketers to reduce their advertising as a result of adverse media reports or other negative publicity involving us, our advertising metrics, content on our products, developers with mobile and web applications that are integrated with our products, or other companies in our industry..." [1]

[1] https://www.sec.gov/Archives/edgar/data/1326801/000132680117...

Disclaimer: I am not a lawyer. This is not legal advice. Don't commit fraud.


I think the bigger challenge would be convincing the SEC that including the bots in their user metrics constitutes fraud.


The bots count towards ad revenue, so add prior knowledge by Twitter, and boom... fraud.


Yeah, but they are still real accounts that exist. AFAIK, whether they're operated by humans or machines isn't a consideration when reporting user numbers.


But your comment almost says it: Whether or not they're actual users isn't a consideration when reporting user numbers.

Are they reporting user numbers or account numbers? Is it ok if they generate 1 billion accounts themselves? Why/why not? What's different?

Impressions are based on "eyeballs" on ads. Machines look at ads. Customers can project ad blocker %.


Intent to defraud would be clear if they generated a billion new, fake accounts.


Why is it not clear if they report metrics that suggest they have more users and higher engagement than they do?


The purpose of buying ad space is to expose your product to people as an impression or as an action. If I tell you I can sell you a service that will show your ads to 10 people and you agree to give me money for that statement, but 5 of those people are running 2 accounts each, that's misleading and, dare I say, fraudulent.


You're wading into "plausible deniability" territory.


Bots use endpoints that don't return ads, so I don't see how they could cause fraud.


The current US administration and Twitter/Facebook bots are in your standard strange-bedfellows relationship. The GOP benefits from divisive [0][1] speech not unlike that employed by bot accounts. As long as the bots rhetorically share the same intentions as the administration, namely divisiveness and promotion of nationalistic views, there will be minimal effort in chasing down the full extent of this defrauding.

The intentions are clear - what will happen is not. The problem is you cannot prove malicious inaction.

[0] https://newrepublic.com/article/125952/gop-party-fear

[1] http://thehill.com/homenews/senate/352768-gop-senator-russia...


Both parties benefit from divisive speech. It allows them to argue about purity of ideas instead of the ideas themselves which is _fantastic_ for party loyalty.

It's not a GOP- or Democrat-specific problem.

Plus, it's not like there's only "right-leaning" bots on Twitter. Back during the election there were a couple times I tweeted something negative about DJT and my tweet was instantly retweeted and liked thousands of times by blatantly "left-leaning" bots. They hit up both sides of the aisle to stir up drama. It's practically their stated goal.


There are far, far, far more 'right-leaning' bots on Twitter than anything else. As someone who was replied to by a 'real' account connected to the bots, I got to watch the waves of fake retweets and likes on a regular weekly schedule, often a hundred or more on a Saturday morning around 2 or 3am. And this on tweets that were 2 to 3 weeks old already. It's an amplification technique to make it seem like more people support the views. You can watch it happen live in the replies to many of Trump's own tweets with accounts with names that are a combination of the words, trump-maga-america-patriot-red-right and mom or veteran or whatever and often a random number. Often accompanied by a profile picture lifted from elsewhere online. These accounts have 30k+ followers and generally post nothing except pro-Trump memes. Some do nothing except like and retweet from other areas of the botnet.
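The naming pattern described above (movement keywords glued together plus a trailing number) is mechanical enough to sketch as a heuristic. The keyword list and thresholds here are my illustrative assumptions, not a real detector:

```python
import re

# Toy heuristic for the templated handle pattern described above:
# several keywords from a known set, plus a long trailing number.
KEYWORDS = ("trump", "maga", "america", "patriot", "red", "right",
            "mom", "veteran")

def looks_templated(handle: str) -> bool:
    h = handle.lower()
    keyword_hits = sum(1 for k in KEYWORDS if k in h)
    ends_in_digits = bool(re.search(r"\d{4,}$", h))  # e.g. "...1776451"
    return keyword_hits >= 2 and ends_in_digits

print(looks_templated("MagaPatriotMom1776451"))  # True
print(looks_templated("kbenson"))                # False
```

A real classifier would of course need far more signal (posting cadence, retweet graph, profile-photo reuse), since simple name rules are trivial for bot operators to evade.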


More than the US political parties it is foreign adversarial nations like Russia that benefit from dividing the US electorate.

When two guys fight they both get a black eye. It is the third guy who got them to fight that gains the advantage over them.


> Both parties benefit from divisive speech.

This is correct until you ask "how much" does one benefit from divisive speech.

There is one nationalist branch of the Owner's Party of America, and the other is Democrat.


You mean to say all those anti Trump bots replying to his tweets are doing him a favour?


I mean to say all of those pro Trump bots organizing rallies for him in America out of Russia ;) [0]

[0] http://thehill.com/policy/technology/351557-russian-actors-m...


Ever noticed how Russian groups creating Facebook pages for fake pro-Trump rallies that don't exist and don't have any attendees is "Russian groups organized pro-Trump rallies on Facebook", but them creating fake (say) BLM events that don't exist and don't have any attendees is "Russia fakes black activism on Facebook to sow division"?

Also, it requires a rather curious definition of "bots" to include actual people manually doing stuff like this. One that's political rather than technical.


Didn't the same problem show up at Reddit where they were capping the visible user numbers of certain subreddits? They were showing massively higher numbers on their advertisers page than the subreddit front page.

So either they're inflating numbers and lying to advertisers... or they're capping numbers and lying to make the "deplorable" subs look less popular.


Not too hard to prove, just need to know where to look.

The problem isn't just bots, but fraud in general.


Once, our tweet used a popular hashtag and got re-tweeted by some bot network. It was really easy to tell the accounts were all bots: all the re-tweets happened simultaneously, the profiles were very similar, with similarly messy streams.

I went to the trouble of reporting each of the fake accounts manually (the UI for this is far from convenient). After some time I checked the bot accounts again. None of them had been deleted, and our original tweet still had a large re-tweet count, which basically misleads real Twitter users, making the tweet look more interesting than it really was.

From this incident, I no longer treat re-tweets as any reliable metric of the popularity or quality of tweeted content.


Flip side: it's really easy to tell when businesses hire botnets to push up their follower count in hopes of being verified. If an account has tens to hundreds of thousands of followers and no interactions from them, it's safe to assume it's all faked. What's sad is Twitter seems to verify them without actually looking.
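The "many followers, no interactions" signal above can be sketched as a ratio check. The thresholds are made up for illustration; any real system would tune them against labeled data:

```python
# Toy version of the heuristic above: a large follower count with an
# engagement rate near zero is suspicious. Thresholds are assumptions.
def suspicious_follower_count(followers: int,
                              avg_engagements_per_tweet: float) -> bool:
    if followers < 10_000:
        return False  # small accounts: not enough signal either way
    engagement_rate = avg_engagements_per_tweet / followers
    return engagement_rate < 0.0001  # fewer than 0.01% of followers interact

print(suspicious_follower_count(250_000, 3))    # True
print(suspicious_follower_count(250_000, 400))  # False
```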


But how can you tell if the bots are actually hired by the company? What if it was someone outside the company trying to influence Twitter for their own purposes?


Well, botnets and incentivized users are hard to tell apart. Incentivized users often appear to be bots, as their sole purpose for using Twitter is to get something from companies that want the numbers, and they often don't even bother setting a profile photo, or doing anything but retweet promotions.


This problem is much bigger than any of these articles seem to imply. I am also convinced that Twitter knew this all along.

Unfortunately, bots don't just inflate user numbers for them, they create ad revenue.

Additionally, the majority of self-proclaimed "influencers" rely almost entirely on automation for publishing (i.e. they are not really "there" on the platform), and bots for amplification.

If you dig deeper, there are a number of "certified" Twitter partners that provide social media analytics and management platforms. They pay Twitter for data access, but they employ third parties to provide "amplification" for them via bot farms, creating the illusion of effectiveness for the subscriber.

The rate of real, authentic human discovery and engagement on Twitter right now is incredibly low.


> Unfortunately, bots don't just inflate user numbers for them, they create ad revenue.

How? At the end of the day, a human sees the ad and so generates ad revenue for Twitter. A bot a) does not see the ad tweets in its normal timeline feed and b) does not feed impression tracking back to Twitter.


That's impressions (even those can be gamed).

If you promote a tweet or account on Twitter, you will get billed for Engagements (RTs, likes, follows).

https://business.twitter.com/en/advertising/campaign-types/i...


This article states Twitter's position ("Bots? On my platform?") vs 1st hand experience from advertisers:

http://adage.com/article/digital/brands-worry-twitter-undere...


If the bots are posting content or attracting real human eyeballs with their comments, would they not be able to generate revenue then?


Yeah, but that is entirely legitimate revenue.


> A bot a) does not see the ad tweets in his normal timeline feed and b) does not feed back impression tracking to Twitter

Source?


I have written both bots and a client, and I have yet to see a sponsored tweet. In fact, this is why Twitter cracked down on third-party clients: they cannot show sponsored tweets.


There are tons of fake accounts on Twitter. It doesn't take much to get hundreds or even thousands to engage with you (follow you, reply to your tweets with a common hashtag, etc.).

I thought a year or two ago that Twitter was avoiding cleaning house and removing the obviously fake accounts because they were simultaneously trying to show growth on the platform and if anything removing these bots would hurt their growth metrics.


Instagram is in exactly the same boat.

Although they technically ban botting, and make cursory efforts to stop it, it's not really stopped.

Here's a good article on Insta botting, which goes into further detail: https://petapixel.com/2017/04/06/spent-two-years-botting-ins...


Twitter is also crawling with fun and light-hearted bots that are non-manipulative and generate content for others to consume. I would hate to see any of those bots get thrown off the platform.


Spitballin' here, but if Twitter allowed for "registering" a bot, then these sorts of fun and light-hearted bots could easily stick around.

If someone has a legit reason to fear registering a bot they run, then it's probably for "not-good" reasons. And in my estimation "not-good" could be as innocuous as those follower-fishing bots.


Most bots that I've encountered on twitter "register" a bot by saying it's a bot in the profile.

The "up-to-no-good" bots do no such thing, of course.


Not sure I agree that Twitter doesn't have an incentive to get rid of the bots. Bots are bad for user experience and contribute to noise that affects the quality of inputs used by journalists, marketing types, etc. to evaluate trends. The article references the number of active users as a critical piece of Twitter's valuation, but Wall Street can't be dumb enough to completely ignore the quality of those users.


I suspect we could brainstorm a long list of things that Twitter should be doing, but isn't.

Since I've never personally tried to solve any of the problems on that imaginary list, it's tough for me to discern between:

1. It's hard

2. It's possible, but has bad side-effects (like censorship)

3. No incentive (ultimately will not help their stock price)


Exactly. At the end of the day, if the problem is not handled, the entire house of cards collapses. At some point, your users abandon your product over the bad UX. At some point before this abandonment, it becomes clear these consumers are looking for more - users are just now starting down that road, I would say. Ball is basically in Twitter's court at this point. Will they adapt? We will see!


If they were serious about getting rid of bots, they'd have to stop lying about the real size of their user base.


The question is whether bots are bad enough for the user experience to make people leave.


Anecdotal, but I left in some part because of the noise created by bots.


I left because there weren't enough bots to bring the average tweet quality up to marginally awful status.


Maybe "journalists, marketing types, etc" shouldn't be using twitter engagement metrics to report public sentiment.


sure, but they do and they're spending money on the platform so


> The article references the number of active users as a critical piece of Twitter's valuation, but Wall Street can't be dumb enough to completely ignore the quality of those users.

Someone on Wall Street is. And as long as there is a sucker to be sold to, the charade will continue.


Every time I try to use Twitter, it feels like they actively try to sell all the noise as signal.


One thing I've been contemplating is the extent to which feedback on twitter can alter a user's opinion or focus on a given topic.

Ex.

- make a bot network of 100 bots.

- identify some accounts, split them into control and experimental groups

- choose a topic, such as "NFL"

- every time someone in experimental group tweets something about the NFL, like or retweet it a dozen times from a random slice of your bots

- do this for a few weeks

- at end of few weeks, does experimental group tweet about NFL more frequently than control? Has the sentiment within their tweets gotten any more extreme?
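The experimental loop above can be sketched in a few lines. The engage() call here is a pure stub: a real version would like/retweet from each bot account via the API, which is both against the ToS and beside the point of the sketch:

```python
import random

# Sketch of the amplification experiment described above.
# All of this is illustrative; no API calls are made.
random.seed(0)  # deterministic for illustration

bots = [f"bot{i:03d}" for i in range(100)]  # the 100-bot network

def engage(tweet_text: str, topic: str = "nfl") -> list[str]:
    """If the tweet mentions the topic, amplify it from a random dozen bots."""
    if topic not in tweet_text.lower():
        return []
    return random.sample(bots, 12)  # random slice of the botnet

amplifiers = engage("Great NFL game tonight")
print(len(amplifiers))  # 12
print(engage("nothing to see here"))  # []
```

The interesting part of the experiment is the measurement at the end: comparing topic frequency and sentiment between the experimental and control groups after a few weeks, which is ordinary A/B analysis once the tweets are collected.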


Oh cool, this is something I've been thinking about too. Please, if you feel you have the time, do email me (see my bio) with some lessons if you manage to build something.


The problem isn't that they lack incentives; it's that by expelling bots they'll inevitably expel some real users. The cure isn't worth the side effects.


How do we know, when they won't report the extent of the disease? We're not even confident they know the extent themselves, and if they don't, they couldn't have made an informed judgement.


If Gmail can spam-filter emails, why can't Twitter do the same for bots?


Gmail gets a large amount of metadata with each mail, which it runs lots of checks against.
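In the spirit of the spam-filter analogy, account metadata could in principle be scored the same way. Everything below is invented for illustration (features, weights, thresholds), not any platform's actual method:

```python
# Toy metadata-based scoring, spam-filter style. Features and weights
# are assumptions made up for this sketch.
def bot_score(account: dict) -> float:
    score = 0.0
    if account["likes"] > 50 * account["tweets"]:
        score += 0.4  # likes wildly out of proportion to own tweets
    if account["following"] > 5_000 and account["followers"] < 100:
        score += 0.3  # mass-follow pattern
    if account["default_avatar"]:
        score += 0.2
    if account["account_age_days"] < 30:
        score += 0.1
    return score

acct = {"likes": 192_000, "tweets": 500, "following": 8_000,
        "followers": 40, "default_avatar": True, "account_age_days": 20}
print(round(bot_score(acct), 2))  # 1.0
```

The catch, as the parent notes, is that Gmail sees rich per-message metadata (sending IPs, SPF/DKIM results, reply behavior), while public account stats are a much thinner signal.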


So you’re saying that instead of trying to rescue their stock price, Twitter are looking to treat everyone fairly, cost no object?


Well apparently so, as they are choosing not to do so. Or maybe it isn't costing them much money.


No, I think the difference GP ignores is that they could either protect the real users who'd be falsely flagged, eating a loss for their sake, or ignore the real users getting squeezed and boost the stock price by...? This is where it gets fuzzy.


How about these bots?

@newsyc250 @newsyc100 @newsyc50 @newsyc20

Why should these bot accounts be expelled? I just love the way some bot accounts are so useful. In fact, following the above-mentioned bot accounts gives you just the right number of Hacker News posts, based on how popular they are.

Similarly, there can be many bad bot accounts, but differentiating those from good bot accounts may not be an easy task, and it is bound to fail with false negatives and false positives.


The article is not about bots generally but about the bots that (allegedly) were controlled by Russia and (allegedly) posted pro-Trump tweets and (allegedly) could have helped him win the election.

But from a legal point of view, is Twitter obliged to find and ban such bots? I don't think so. Marking them as bots would be a good idea though.


> Marking them as bots would be a good idea though.

I agree. I like how Telegram does bots. They have first class support for bots, and the separation between real user and bot is extremely clear. You have to specifically create a bot account, and bots have certain restrictions that users don't have (e.g. they cannot initiate a conversation themselves).

Of course, that assumes the bot is set up as intended. One could have a bot masquerade as a real user, given enough cleverness and a spare phone number, presumably. I've not tried it, but I'm going to assume it's possible, since you can very well write your own Telegram client.
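
As a rough illustration of how explicit the bot model is: a Telegram bot only sees updates pushed to it and replies into an existing chat. The handler below is a hypothetical sketch mimicking the shape of the Bot API's Update object, not a complete client:

```python
# Sketch of a Telegram-style bot handler. The update dict mimics the
# Bot API's Update object; actually posting the returned payload to
# the sendMessage endpoint is left out.

def handle_update(update):
    """Build a sendMessage payload replying to an incoming message.

    Returns None when there is nothing to reply to -- the bot cannot
    initiate a conversation on its own.
    """
    message = update.get("message")
    if message is None:
        return None
    return {
        "chat_id": message["chat"]["id"],   # reply into the same chat
        "text": "You said: " + message.get("text", ""),
    }

payload = handle_update(
    {"update_id": 1,
     "message": {"chat": {"id": 42}, "text": "hello"}}
)
```

The restriction the parent mentions falls out of the design: with no incoming update, the handler has no chat id to send to.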


True. I’d like bot accounts to be marked as such. It’s annoying to see bot accounts posing as real accounts, following 5k or 10k+ others, and following or liking my tweet because of a specific keyword.

If Twitter can fix that, it’ll be awesome.


How about incentive to expel users who violate their ToS like our President? Surely threatening nuclear war violates their "Abusive Behavior" rules (which are part of their TOS)... [1]

> "Violent threats (direct or indirect): You may not make threats of violence or promote violence, including threatening or promoting terrorism."

And that's just getting started.

Has there been ANY statement from Twitter leadership on why they permit Trump to continue with this behavior? I have my own cynical answer, but I'm curious if they've gone on record for what is such a blatant abuse of their ToS it is unconscionable that they let it continue.

[1] https://support.twitter.com/articles/18311


The short answer is probably that if Twitter were to ban Trump, they'd face an enormous backlash from folks who take it as suppression of political opinions. It's a blessing and a curse for twitter, I'm sure -- having Trump tweet regularly has probably been a boon for business, overall. If you're generous, you could make the argument that Twitter is allowing the President to bypass the traditional media outlets and have his voice heard directly (for better or worse).

> Surely threatening nuclear war violates their "Abusive Behavior" rules.

Even if you were to try to live by the letter of the law, you'd have a hard time really getting this to stick, I imagine. If this is the tweet you're referring to:

"Just heard Foreign Minister of North Korea speak at U.N. If he echoes thoughts of Little Rocket Man, they won't be around much longer!"

Then I think you'd have a hard time construing this as "terrorism", any more so than what half of Twitter's population has said (sports teams getting "killed", public figures "not being around anymore" (i.e. probably meaning "in office")).


The interpretation of the Twitter rules that people seem to expect is one which bans everyone they detest and permits everyone they like. I mean, there's a huge overlap between the people complaining about Twitter not banning Trump because nuclear war and the Nazi-punching brigade who rely on Twitter's rules on threatening violence not actually banning all threats of violence.

(Not to mention that the main reason the demands to ban Trump have restarted is because Twitter briefly suspended someone for posting another person's private phone number to their 800,000 Twitter followers, and she and everyone else is using this to justify why Twitter should have let her get away with it.)


This has nothing to do with Trump specifically -- the rules are different for people at the top. Twitter needs him more than he needs Twitter.


No they don't - if it gets into the news that Twitter bans Trump for abusive behaviour, that will look super positive to the majority of people IMHO. I'd certainly put Twitter in a much brighter light.

When you take a stand for something - you might break some bridges - but you'll also be building bridges.


A significant portion of the American public would be outraged and would be screaming about censorship. Others would be elated.

It would be pretty interesting to see what the response would be, because Trump would no longer have an easy, low-effort medium to get his message out. Would he just turn to a right-wing social network like Gab? Would he just use traditional media instead?


>>super positive to the majority of people

Extremely unlikely.


The majority of people... in your tiny social bubble.


Twitter did respond to this recently. Apparently "newsworthiness" plays a factor in their decision making.

https://twitter.com/Policy/status/912438362010783744


That's such a copout. Any threat of large scale violence from anyone could be considered "newsworthy." The real answer is they never anticipated a POTUS would break their TOS, and they have no idea how to deal with it, without being hypocritical.



I wouldn't be particularly sad to see him lose his account, but I don't think this is a useful angle of approach.

This clause is pretty clearly about illegal violence. Part of the President's job can involve both threats and actual violence. I think this president is a dangerous loon. But I don't think Twitter should a priori ban government accounts from talking about, say, a declared war.


Sorry, but I'm not clear on what part of "You may not make threats of violence or promote violence" specifies it must be "illegal" violence. Part of the President's job may be to make threats and conduct violent actions against other nations. That doesn't change the fact that it violates the terms of use.

That kind of talk has no place on Twitter, based on Twitter's own rules that say no threats are allowed.


I get the theory, but you're over-interpreting the specific words of one part of the policy.

The words are an expression of the ideas in people's heads, and are written with a particular context in mind. That context is fighting the normal kinds of platform abuse they see. The policy was definitely not meant to encompass the typical actions of governments and heads of state. Nobody was even thinking about that at the time they were written.

If they were to use that to ban Trump, it would be pretty obvious rules lawyering [1]. Which many people certainly feel is justified, and I understand why. But from Twitter's perspective, rules lawyering, no matter how much people feel it's justified, undermines trust. So if they want to get rid of Trump, they'd be better off writing a specific policy that Trump (and others) are clearly violating, not bending something else to fit.

[1] https://en.wikipedia.org/wiki/Rules_lawyer


They have. Basically... too bad. He gets us users. At least that’s my interpretation.

https://www.avclub.com/twitter-releases-statement-confirming...


I bet this would hit the news in a huge way - Trump would certainly not be able to contain himself and would do video responses - or it might set him over the edge, not having that outlet to vent - maybe a good thing, maybe not.

It would certainly gain a lot of positive attention and give huge props to Twitter IMHO too.


> It would certainly gain a lot of positive attention and give huge props to Twitter IMHO too.

From your tribe, and it would generate a lot of negative attention and give huge condemnation to Twitter IMHO too from the other tribe.


Because they're acknowledging and disallowing abusive behaviour? Or of course that's just all #fakenews - the abusive behaviour?


You are under the impression that everyone shares your opinion, and then you go for the dismissive #fakenews. I guess you assume anyone who thinks ill of your view must be a Trump supporter, and that that is some kind of slam that can dismiss the point without further thought.

> Because they're acknowledging and disallowing abusive behaviour?

They might have a bit more of a leg to stand on if their abuse council had a wider political spread, or if they actually policed illegal threats in a consistent manner. They are so ham-fisted that they cannot even point to a rules violation when suspending an actress's account.

> Or of course that's just all #fakenews - the abusive behaviour?

The whole "threatening nuclear war is abusive behavior" line is a losing argument. The President doesn't like a lot of people and doesn't conceal that. The idea that any of his tweets go beyond hurting someone's feelings is pretty absurd. In contrast, Twitter allows actual threats of assassination without consequence. Never mind all the threats of rape, etc. directed at women on Twitter without any answer. Look at any popular conservative woman's timeline and see all the crap the abuse council and Twitter support lets through. Actual threats versus hurt feelings shows where action should be taken.

All of this is just some people's desire to remove a direct outlet from groups of people. It has nothing to do with actual threats.


This argument is really laughable. And it's unrelated to the article.


Perhaps their argument has some merit but it's disappointing to see such an off topic comment at the top.


It's just a cheap shot, I've seen this argument being made, some people just want Trump off Twitter because it's a huge propaganda machine for him. But that's simply never going to happen for a myriad of reasons that we all know. It's pathetic they keep pushing this argument just to make noise. There are many things one can do to oppose Trump, this is one of the worst ones.


white people are a protected minority group in the eyes of facebook/twitter


If Twitter doesn't ban bots, it should at least give them an official designation and require people to be honest about whether an account is bot-controlled or not. Advertisers want to reach people, not bots. If they got together and demanded more accurate reporting, there would be a financial incentive.


If they did that, then wouldn't normal people just claim that their accounts were bots, and then they wouldn't be served ads anymore?


Twitter could differentiate between access through official clients and APIs.


Good point. Sounds like there needs to be a bot tax. ;)


I was thinking the other day about how bots effectively hack the first amendment. If you're one to believe that the proper remedy for offensive speech is more speech, the bots kind of throw that out the window. Trolls are at least actual people, but bots are not. You're not going to exchange views with a bot. So it's reasonable to suppress bot content. But then the problem is, how do you know it's a bot? What's the foolproof algorithm that determines whether someone is or isn't a bot, without false positives or false negatives? What if it's someone that is merely scheduling their own tweet? So it means you've opened the door to suppressing someone's speech based on the content of their message.


There's a line we can and should draw. A scheduled tweet is a nice feature, but it's acceptable to call it a bot tweet. It's not the content of the message we want to stop, it's the sender.


Went to a meetup at Twitter the other night. We got a cryptic response to that question: "don't worry, we are working on it". Someone asked why they don't just require every account to be verified, or have non-verified accounts labeled as bots, but they avoided the question.


Cashtags are practically dead on Twitter. Good luck searching for something like $LTC or $PTOY without crawling through pages of cashtag spam with Russian names linking to Telegram accounts. It really shouldn't be that hard to fix.


I think a lot of traders switched to StockTwits: https://stocktwits.com/


I googled this, but it didn't really help.


I hope someone else appreciates the irony of the anonymous message projected onto the side of the Twitter building, ostensibly intended to amplify the opinion-holder's voice and sway public opinion.


Seems inevitable that more governments will follow China into the digital identity business. Just because Black Mirror made a dystopian version of this doesn't mean it's a bad idea.


This problem cannot be solved. As natural language processing and output become ever more sophisticated, it will in the very near future start to become impossible to discern between bot and human. Consequently, I think instead of trying to solve the issue of bots, it's more important to start educating people that just because they read something on the internet does not mean it's true. And just because lots of people seem to support an idea (or condemn an idea) says nothing about the actual public view towards said idea.

This sounds somewhat patronizing (the first part in particular), but falling victim to confirmation bias is something we all do. When ideas fit our own personal biases, we tend to become much less critical of them. Take animal testing: when visible, it is something very few people can emotionally accept. And there have been countless hoaxes [1] where people will share an image of an animal in one context (such as a rabbit suffering from severe hair loss / skin damage at a veterinarian) and then claim it shows the result of a named shampoo company testing its products on animals. It gets people riled up and interested in stopping animal testing, but the problem is that it's completely fake. This is an obvious example, but the exact same is true of words themselves, and it spreads into everything -- most notably politics.

It's in many ways bizarre that we don't deal with this issue as a part of basic education. Imagery or messages designed to spark an emotional response are very effective against people who are not aware of what's happening. At the same time, they can be rendered far less potent by simply educating people about these tools of manipulation and giving them a wide swath of examples. In today's ever more connected world, with ever more people looking to shall we say 'utilize' other people, the complete neglect of this social skill in education today is perplexing.

[1] - https://speakingofresearch.com/2017/05/16/context-matters-ho...


>This problem cannot be solved.

identity verification when you create your account. charge a small fee when you create your account.

>Consequently, I think instead of trying to solve the issue of bots, it's more important to start educating people that just because they read something on the internet does not mean it's true

now THERE is a problem that cannot be solved!


I agree that it's important to educate people both to question what they see and also how to question it. I disagree that this is the answer. We've been teaching that for years.

It's impossible to take everything critically, and honestly few will. So the entity with the largest bot army still has the longest propaganda lever.

Twitter and the other social networks know who is a bot, or else they haven't bothered looking. Something needs to force them to act.

I do see one mechanism: the bots go too far, and users don't want to be on a platform where they just interact with bots, so they go to more curated places to get their fill. So the business health of the platform depends on having trust. FB has a leg up on this since your friend list probably has people you've actually met. There the problem is your gullible friend forwarding you crap. A deputized bot, if you will. No level of education helps there.


Where/when do we teach people to question what they read? This is rather different from critical analysis - it is understanding that propaganda is not the crackly loudspeakers repeating chants to the glorious leader that our media and entertainment characterize it as. In reality, propaganda tells a story but subtly (or not so subtly) pushes the reader in a certain preconceived direction. For a stereotypical instance: anytime in war an image or story of children being hurt is used as justification for anything, red flags should go off. It's easy to see this when I say it, but few recognize it when they are actually being fed such imagery from a source they believe trustworthy -- again, our biases shut down our systems of critique. I certainly received no formal education on this whatsoever until university, and even there only because I chose to take an array of classes focusing on war, revolution, and marketing.

When I speak of bots, I am implicitly speaking of the inevitable adaptations to any attempt to crack down on them. I do agree with you that right now many bots can be detected pretty easily. But that's largely because they have no reason to disguise the fact that they're bots. In many ways, I think the current system is more desirable. As bots progress to actually emulating human behavior, it's going to produce the sort of paranoia you see on many forums today, where individuals call one another 'shills' as a means of expressing disagreement. And ultimately, I do not think it will be at all difficult to pass a heavily crippled Turing test of 140-character unidirectional messaging.


Turing Test passed! That was easy.


It's so frustrating because I'm trying to genuinely engage people with the opposing opinion on Twitter, and sometimes I have real discussions, but easily 50% of the time I realize I'm dealing with a bot. Or a troll. I can't really tell the difference anymore.


Oh, interesting - while most bots only post content, some do actually consume it for analytics/reposts/engagement. This is huge because (wait for it) Twitter could be the first social network written and consumed entirely by bots!


So is Facebook, so is Reddit, so is Instagram... rinse and repeat.


Twitter needs a "real name" policy, like Facebook.


Those bots are used inside Russia too; for example, they like and repost pro-government tweets and add aggressive comments to tweets from opposition politicians, to make it look like ordinary people don't like them. They reply to the tweets, so there is a real human behind the account - they are not just a computer program.

But of course Twitter has no obligation to ban them.

If anyone is interested, here is an example of a probable bot account [1]. It was registered in Nov 2013 and has posted 27,000 tweets and retweets, which is about 18.5 tweets per day on average. Another account [2] has a random user id and has posted 2,000 tweets in the last 3 months.

And by the way I don't think that bots influenced election results. I watched the debates and Hillary's position was very weak.

[1] https://twitter.com/alexflex777

[2] https://twitter.com/cHprdSiZ8MQONMl
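
The tweets-per-day figure is easy to sanity-check. The exact registration day within Nov 2013 is a guess on my part, so the result only roughly matches the 18.5 quoted above:

```python
from datetime import date

# Assume registration on 2013-11-01 and count up to this thread's date,
# 2017-10-13; the exact registration day is unknown.
days_active = (date(2017, 10, 13) - date(2013, 11, 1)).days
rate = 27_000 / days_active   # roughly 18.7 tweets/day
```

Either way, sustaining that pace for four years is far more consistent with automation than with a human account.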


Only 1600 bots? This sounds like the makings of a moral panic to me.

There's a lot of misunderstanding about what these bots are capable of... and I keep seeing top ranking comments on Reddit talking about these bots daily.

The media seems to be spreading the idea they are autonomous agents automatically posting content on politically useful threads, with bot voting rings pumping them up. When in reality they still rely heavily on a human-intensive process, requiring lots of manual targeting and copywriting, and Reddit/Twitter/etc are very good at detecting voting-rings, having been perfecting algorithms to detect them for over a decade.

And do people think Russia or fringe right-wing groups have some super-effective autonomous bots that weren't also available to the best paid US consultants?

If these fake Twitter accounts, usually bots only followed by other bots, were understood in terms of the hundreds of millions of real users on Twitter, the billions spent by both real parties, and within the context of the technical limitations of bots, it wouldn't seem so scary.


That was one researcher for one specific topic, anti-Macron sentiment.

> A researcher finds 1,600 bots tweeting extremist posts in U.S. elections also spread anti-Macron sentiment in France

A second quote from the article shows a much larger bot population.

> Research from the University of Southern California and Indiana University shows that 9 to 15 percent of active Twitter accounts are bots.


The real question is how many of those bots are spambots and bots attempting to pose as real people, as opposed to bots that are open about the fact that they are bots? There's plenty of top-shelf novelty accounts on Twitter that are also bots.


That's one reason why a no bot policy wouldn't work.

A bot posting articles from your favourite news site is actually useful. A bot which prints out Trump's tweets, burns them and then tweets out the video [@burnedyourtweet] is mildly amusing (and some pretty smart engineering). A bot which automatically replies to every tweet by $personality with a meme is usually an 'orrible troll, but occasionally amusing and often owned by someone who's very convinced the First Amendment applies to Twitter. A retweet bot which does a plausible impression of an angry Trumper might be indistinguishable from the real thing without investigating its network, and I don't think eliminating the real thing would be a great move for Twitter to make.

The other reason is that Twitter's review process is more or less completely random in its responses. (I'm reminded of the girl who reported literally the same abusive tweet directed at her from three troll accounts, and received a different response for each one.)


This is why Reddit style voting is super useful. Twitter suffers from a noise problem as a side-effect of focusing so much on newness and sorting by date (I know they've made recent changes here).

This community-driven ranking seems like the best solution to low-quality bots posting. A mod-driven centralized editing process comes with plenty of issues, as we've seen on Reddit. And yes, bots provide plenty of value, far outweighing some niche political accounts making the rounds.


Bots will actively upvote each other, while real humans may be too lazy to click the like button.


Not to mention, a single person creating thousands of bot accounts is easy - you need those thousand other bots so your one primary bot has enough followers/likes. But in the press that will be counted as thousands instead of one. And getting those bots a meaningful audience is very challenging.

I have plenty of fake 'bot' followers, and 99.9% of their followers are other bots. It's largely a bunch of bots tweeting at other bots.

The main risk is hashtags being gamed and people buying voting rings/followers... which has been a problem on the internet forever.


The open bots have an incentive to make one bot for each usecase, one person trying to make spam bots has an incentive to create as many bots as possible. I have no numbers on the subject, but I would be shocked if it were anywhere close to fifty percent.


Frankly, I'd expect the percentage to be significantly less than 50%. Determining good from bad bots without a manual check is an interesting problem, though, and it's not cool on Twitter's part if they ban legit bots that people find useful/entertaining on top of the bad actors.


    There were about 400,000 bots posting political messages during the 2016 U.S.
    presidential election on Twitter, according to a research paper by Emilio
    Ferrara, an assistant professor at the University of Southern California.


The next line which you left out:

> He told Bloomberg that he has discovered that the same group of 1,600 bots tweeting extremist right-wing posts in the U.S. elections also posted anti-Macron sentiment during the French elections and extremist right-wing content during the German elections this year.

So of those 400k political bots, 1600 were identified as talking about fringe topics.

Not to mention it's not clear how this fits into the bigger picture? What percentage of ALL of Twitter accounts are bots? How many of those bots are active? How many actually have an audience of real people, not just other bots? For ex: 400k fake accounts, with few real followers, out of hundreds of millions of real users talking politics may not seem as significant...

Not saying it's not a 'problem', but it needs to be understood in context, especially considering the constant news coverage and outrage.


>So of those 400k political bots, 1600 were identified as talking about fringe topics.

Read it again. Of those 400k bots, 1600 were identified talking about the US elections, anti-Macron sentiment, and extremist far-right topics. All three topics. That could be one person/group reusing bots.

Additionally, hundreds of millions of real users talking politics is unlikely. Twitter has 300 million active users per month, 9-15% of them bots, and the US election had 120 million voters. Hundreds of millions would be nearly everyone.


> Twitter has 300 milliom active users per month, 9-15% of them bots, the US election had 120 million voters

Fair enough. So if 10% of voters used Twitter, that means 3% (400k) were bots, and of those a tiny percent were fringe/far-right.


>tiny percent were fringe/far-right.

Nothing posted is proof of that. The 1600 number is interesting as it means one entity was trying to influence elections in three countries. It is not the number that were far-right, that number was not in this article.

As context is supposed to be the needed thing, you could have just found the research paper rather than doing guesstimate math. Fifteen percent of users tweeting about the election were bots, twenty percent of tweets, and about three Trump bots for every Hillary bot.

http://firstmonday.org/ojs/index.php/fm/article/view/7090/56...


> I read into the investigations on FB/Twitter and they only found 10k-100k ads/posts bought by Russians on each platform.

Those numbers aren't ads/posts, they're dollars. In September, Facebook confirmed that Russian operatives spent $100,000 on psyops advertising.

Keep in mind that (1) $100,000 is 15-25 million impressions reaching millions of highly-targeted people (Cambridge Analytica is now under investigation by the HPSCI), all amplified by the targets' engagement, and that (2) FB is sharing only 100% confirmed activity discovered during very early, self-investigated findings.
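
The impressions estimate follows directly from display-ad pricing. The $4-$7 CPM range below is my assumption for illustration; Facebook's actual rates vary widely by targeting:

```python
# Back-of-the-envelope: ad spend divided by CPM (cost per 1,000
# impressions) gives the reach of a campaign budget.
spend = 100_000                 # dollars, per Facebook's disclosure
cpm_low, cpm_high = 4.0, 7.0    # assumed dollars per 1,000 impressions

impressions_high = spend / cpm_low * 1_000    # cheapest CPM -> most reach
impressions_low = spend / cpm_high * 1_000
```

That puts the range at roughly 14-25 million impressions, in line with the 15-25 million figure above.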


Keep in mind - total spent on the 2016 election according to CBS was $6.8 Billion [1].

In that context, how could $100,000 of FB ads (~0.0015%) possibly be relevant?

Speculating that it might possibly be relevant if we hypothetically were to discover it was 100x bigger than we currently know, to me that defeats the whole argument.

"Big if true!" is the rallying cry of the whole Russiagate fiasco. "Big if Bigger!" is just the next contortion.

[1] - https://www.cbsnews.com/news/election-2016s-price-tag-6-8-bi...


What do you mean by highly targeted? People who can't think for themselves.


This! this is the perfect target!


It's a shame that hostile foreign governments can do this to us without a strong retaliation. We should be doing the same to their social media platforms, such as VK and OK, but it appears that we are not. Though we are more than capable of equal retaliation.


I disagree with your statement that "we should be doing the same."

That said, I think it's possible that we actually are doing the same thing.


What do you think the CIA does?


Hack routers.


Not saying "retaliation" makes any sense, but do you not think the CIA, NSA, DIA, et al do their own share of "mischief"?



