I wish someone would create a branch of mathematics related to the study of relationships between a bunch of "nodes" and "edges".
Jokes aside - and I'm aware that the FB folks have plenty of good graph theorists - how hard can it be to spot these kinds of non-organic actors in the social graph? If I'm paranoid, I would guess that there's probably little distinction between FB's valued corporate customers and advertisers and these 'information operations' in terms of their presence in the social graph.
In any case, it seems like you could do a lot of worthy stuff to distinguish between content that spreads relatively slowly among people who appear to post things judiciously (and might even comment about it in ways that add content) vs various types of bullshit or low-quality content. Again, paranoia suggests that weeding out reflexively transmitted garbage and 'fake news' (for whatever value of 'fake' you're holding today) cuts a bit too close to the business model...
> how hard can it be to spot these kinds of non-organic actors in the social graph?
Very hard. Graph theory contains some of the hardest problems in computer science. For example, maybe you want to find the largest group of nodes that are all connected to each other (the maximum clique problem). We don't know how to do this efficiently. If the largest clique has n nodes, our best approximation methods can only find a clique of about log(n) nodes, which is pretty useless.[1]
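To make that concrete, here's a toy sketch (mine, not anything FB uses) comparing a greedy clique heuristic to exact search; the greedy pass is fast but can miss most of the true clique, which is exactly the gap described above:

```python
# Toy illustration of the maximum clique problem (not FB code): exact search
# is exponential in the worst case, so heuristics are used - and they can miss badly.
import networkx as nx

def greedy_clique(g: nx.Graph) -> set:
    """Grow a clique by adding, in degree order, every node compatible with it."""
    clique = set()
    for node in sorted(g.nodes, key=g.degree, reverse=True):
        if all(g.has_edge(node, member) for member in clique):
            clique.add(node)
    return clique

g = nx.gnp_random_graph(200, 0.1, seed=1)  # random stand-in for a social graph
print("greedy clique size:", len(greedy_clique(g)))
# Exact answer via Bron-Kerbosch enumeration; exponential time in the worst case.
print("true max clique size:", len(max(nx.find_cliques(g), key=len)))
```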
Spam detection at FB scale is difficult because, with so many users, many real accounts will appear unusual in some way, and of course all data can be faked. Google and Yahoo have worked on this for years, yet spammers can still evade their systems: [2]
> there's probably little distinction between FB's valued corporate customers and advertisers and these 'information operations'
The distinction is pretty clear. FB knows exactly who its advertisers are since they pay FB money to promote their posts. It's true that fake news increases engagement, but FB is tackling an easier problem--identifying active operations making millions of fake accounts.
I agree with you in principle, but I think the problem is more one of scale than of complexity theory.
Sure, the clique problem is hard to approximate in arbitrary graphs, but the graphs that appear in social networks are far from arbitrary. And indeed, in such scale-free networks the clique problem becomes quasi-polynomial or polynomial (http://link.springer.com/chapter/10.1007/978-3-642-35261-4_6...). Approximate solutions are also much easier to find.
On the other hand, the sheer size of Facebook's social graph means that anything other than streaming algorithms or local search is pretty much intractable. That makes solving these kinds of problems into interesting engineering challenges.
Good points. Pedantic aside: scale-free is a dangerous assumption for two reasons. First, there may be fairly large deviations from a power law in real data, and any such deviation renders the problem intractable. Second, botnets can modify local and possibly global properties in adversarial ways, so as to hide in, or create, large regions that are not scale-free.
Actually, the graph theorists at FB solved the problem in a few days, but various government agencies insisted that the code should not unearth anything potentially negative about their operations. It's now been years, and they are still trying to get it to work within that constraint.
> how hard can it be to spot these kinds of non-organic actors in the social graph?
It's easy to find suspected bad actors, but very hard to get it accurate enough to be useful.
The problem is that organic actors are extremely diverse in their behavior, and non-organic actors are usually "human driven" and actively trying to avoid detection.
Graph-based features help, but to get a workable system you need to combine them with other signals (temporal behavior, NLP, image analysis, etc.).
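As a rough illustration of what combining those signals can look like, here's a minimal sketch; the feature names, synthetic data, and model choice are all invented for the example, not anything FB has described:

```python
# Hypothetical sketch: fusing graph, temporal, and content signals into one
# suspicion score. All features and data here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_account(fake: bool) -> dict:
    """Synthetic account: fakes post in bursts and repeat text more often."""
    return {
        "clustering_coeff": rng.uniform(0.0, 0.3 if fake else 0.6),  # graph signal
        "posts_per_hour": rng.exponential(5.0 if fake else 0.5),     # temporal signal
        "dup_text_ratio": rng.uniform(0.5, 1.0) if fake else rng.uniform(0.0, 0.3),  # NLP-ish signal
        "is_fake": int(fake),
    }

accounts = [make_account(i % 2 == 0) for i in range(400)]
feats = ["clustering_coeff", "posts_per_hour", "dup_text_ratio"]
X = np.array([[a[f] for f in feats] for a in accounts])
y = np.array([a["is_fake"] for a in accounts])

model = GradientBoostingClassifier().fit(X, y)
# Scores are for ranking accounts for human review, not for auto-banning.
print(model.predict_proba(X[:4])[:, 1])
```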
I don't do Facebook, never have. But a few months ago I wanted to add login via Facebook to an app I was developing so I signed up for an account in order to get access to the developer stuff. I verified my email address, entered a mobile number, and my account was immediately blocked with a message that I had to send in copies of government id and such. Nope.
My question is, if it's so easy to open new fake accounts why did a legit account opening like mine get blocked?
> My question is, if it's so easy to open new fake accounts why did a legit account opening like mine get blocked?
It's simple, really: facepook has already collected the so-called shadow dossier on you; now they need to attach it to your real identity so that it can be sold to advertisers.
Is it a smart way of telling us that it is quite difficult to detect and delete those accounts?
I mean, creating a Facebook account is getting ridiculously difficult: several hoops to jump through in a row, and when you manage to do so, it hits you with a brick and instablocks the account before you even log in, asking you to send government-issued identification with a picture.
I'm currently in Las Vegas, and I was struck once again with the realization about how big the place is. The airport is huge, the streets are huge, the buildings are huge. And, it's all air conditioned.
Power here is cheap, but getting from point A to point B takes time.
If we could figure out a way for an individual to "waste" time on getting from point A authentication to point B authentication, then we'd have a way to prevent these types of issues from occurring. Or, make them heat up and have to stop to cool down, as it were.
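A hashcash-style proof-of-work is one (hypothetical) way to impose that kind of time cost between authentication steps; a minimal sketch, with the difficulty and scheme chosen arbitrarily:

```python
# Hashcash-style proof-of-work sketch: the client burns CPU time "traveling"
# between steps, while the server verifies the work cheaply. Illustrative only.
import hashlib
import itertools

DIFFICULTY = 20  # required leading zero bits; tune to set the time cost

def solve(challenge: str) -> int:
    """Slow: search for a nonce whose hash has DIFFICULTY leading zero bits."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Fast: one hash to check that the solver actually paid the time cost."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = solve("signup:alice@example.com")         # point A to point B takes time
print(verify("signup:alice@example.com", nonce))  # True, checked instantly
```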
As it stands, identity on the Internet is completely broken, and the idea of a federated Internet is just a twinkle in Google and Jeff Bezos' eyes.
Some scammers on Facebook have seized onto my aunt's name and image as the front for their operations. As family, they friend us from fake accounts on at least a weekly basis. Despite reporting these fake accounts more than 100 times, I got a new one today. They ask me for feedback every time I report an account. They are not getting positive reviews.
Based on this experience, I'm not optimistic in their abilities.
I should start photoshopping a fake scar onto my face in profile pics, so that I have a chance to be believed when I cry "Doppelganger!" with no scar on my real face.
I think a small lightning bolt on the forehead would look dashing.
I've heard reports that a non-trivial percentage of their accounts are fake. I share your lack of optimism, though I also wonder how hard they've really tried until now.
There are persistent rumors that Zuckerberg wants to get into national politics. The spin will ramp up to neutron-star velocities when that happens. It's probably safe to stop believing anything they say, starting right about now.
He is literally doing a whistle-stop tour around America right now, replete with bizarrely folksy essays laced with nakedly political rhetoric. Not so much a rumor as an impression he appears to be intentionally cultivating.
Well, Trump has shown that the populace is accepting of businessmen as leaders. I'm sure, if the rumors are true, Zuckerberg won't be the last to think they can parlay success and fame into political power, and he wasn't the first (Reagan comes to mind).
Reagan was an actor, not a businessman. He did shill though.
Zuckerberg is waking up
to how easily manipulable the portion of Americans who voted for Trump are. It's too easy. Just look at the Republican response to SB 18 in CA. The counseling association was supposed to be working on an initiative to help, but I'm starting to worry.
There's some truth in this (I assume), but there is quite a bit of evidence that those on the right value conformity with a leader's views more than those on the left. The infamous poll results on the Syrian strike[1] show this behavior. I'm not sure this counts as manipulation exactly, but sometimes the effect is the same.
> There's some truth in this (I assume), but there is quite a bit of evidence that those on the right value conformity with a leader's views more than those on the left
It's tough to justify this generalization in a world where leftist leaders like Stalin, Castro, Mao, and Chavez are still warm in their graves.
I used to believe the same thing until the aftermath of this election. It's not even remotely in the same ballpark. It's like the Dunning-Kruger effect, except there are two more peaks in the 0-20% range, and a lot of people with an island of intelligence in their career path outside that range.
It's actually very much in the same ballpark. The sheer fact that many liberals believe that a sizeable portion of Trump's supporters are fascists, or that he himself is a fascist, is proof of this. In fact, in my experience, right-leaning people tend to be more skeptical than left-leaning people - too skeptical in some cases, leading to things like conspiracy theories and climate change denial.
I know far more people who believe Trump represented military aggression and big business while Hillary 'just wanted to help people,' which is a very strange idea given that Wall Street, the Military Industrial Complex, and Saudi Arabia donated more money to her than any of them donated to anyone in history.
Meanwhile, Obama prosecuted more whistleblowers than any president in history after campaigning to protect them. Then he dropped 26,000 bombs on Muslims last year.
What about the evidence that came out that the Iraq War's WMDs were a fabrication of the Bush admin? Who has gone to jail over that? I personally know Americans murdered in this fabricated war for profit and oil. Even insiders like Greenspan wrote about how the Iraq War was for profit and oil.
The people who are represented are not the working class, productive and honest citizens. You are all too tied up in the Dems vs Repubs debate and other politics to notice this though.
Oh no I'm actually completely with you on that one. I think Bush and Obama were two sides of the same coin, although I don't think anyone knew that in 2008.
Living around a great number of conservatives, but interacting with a great number of liberals online, my experience is that it simply goes both ways. It's a much-remarked-on idea that "red" and "blue" America are increasingly more divided. If you see people being skeptical today, it is rarely based on the content of the idea but rather on the political implications of it. Right-leaning people are suspicious of things that left-leaning politicians say or promote, and vice versa. I don't think either party, as a whole, is more skeptical or, conversely, more incredulous. Ideas have become more political, and to the extent that one idea is more or less accepted at face value it is probably because that idea has already been normalized by the mass culture.
You're saying your own denial of this election's results led you to double down on accepting your personal biases as fact, and to justify your resulting increased conformity by projecting 'what should have happened' onto others whose non-conformity bothered you?
Not a Trumper, but the 'boo hoo' media pity-party in the aftermath of this election has been oh-so delicious to watch after seeing through the level of spin in the last couple of years...
"But we told them to vote for Hillary, and they didn't! What went wrong??? Must be 'facism' and 'russian fake news'! Anything but people actually disagreeing with the moral/philosophical/political position I have absorbed without thinking from MSNBC-Universal-Vivendi funded Saturday Night Live skits!"
But yes, absolutely, the 'conformity' is entirely on the 'right'..
No, that's not even remotely close to what I'm saying or believe, you've constructed a strawman.
But your comment is shockingly similar to what I've heard from people who still support Trump IRL. Do you personally ever talk about what's actually happening in Washington, or just stick to this he-said-she-said crap?
Trump is about American nationalism, and he resonated with the poorer parts of the country by talking about making America great again. He's also "old school". Zuckerberg is in most ways the opposite.
> It's probably safe to stop believing anything they say, starting right about now.
It was safe to stop believing anything they say after Facebook got caught neck-deep in wrongdoing and apologized for the first time. IIRC this was circa early 2004 or late 2003, and it's been downhill from there.
Haha I started at least five years ago, the fourteenth time I heard someone complaining about automatically-changed FB privacy settings. Fool me thirteen times... Thus I wasn't surprised by #JeSuisCharlie or the recent wall-building kerfuffle.
I've got a similar problem with my own face. Scammers take public information from bar associations (I'm a lawyer) and build fake profiles using photos from firm websites. As I don't have any Facebook account myself, finding and reporting these is a real difficulty. Most large law firms have someone dedicated to protecting the firm's name on social media, but small firms just don't have the time.
FYI, don't believe anything said by a "lawyer" on Facebook. We don't ever start communications that way. Visit the local bar association's website and check the lawyer's real contact info before saying anything.
It sounds like they're using your name, not just your face. Or are they just creating plausible fake lawyers? And what's the game? I can imagine that they're soliciting "customers", who they'll go on to dupe and rip off.
I do like your profile, by the way. It sounds like the lawyer-expert dynamic to me.
They send threatening messages to people that include links and data from firm/bar websites to make it seem as if they come from a real lawyer. In extreme cases they set up entirely fake websites with data harvested from legit law firms.
No lawyer will send a threat via facebook. And no lawyer will ever demand a payment via bitcoin or gift cards.
So they scam people into "settling" fake litigation. Over Facebook. And many people, who of course don't have much of a clue, fall for it. Amazing. But some of them, frightened and angry, contact your firm.
In some other world, where GnuPG had become widely used, legal communications would be signed, and people could just check signatures. That wouldn't help for fake websites, however.
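The check itself would be a one-liner; a sketch, assuming GnuPG is installed and the firm's public key has already been imported from a trusted source such as the bar association's directory:

```python
# Sketch: verifying a clearsigned legal notice with GnuPG via subprocess.
# Assumes gpg is installed and the sender's public key is already imported.
import subprocess

def verify_notice(path: str) -> bool:
    """Return True if the clearsigned file carries a good signature."""
    result = subprocess.run(["gpg", "--verify", path], capture_output=True)
    return result.returncode == 0  # gpg exits 0 only on a good signature

if verify_notice("demand_letter.txt.asc"):
    print("Signature checks out against an imported key.")
else:
    print("Unsigned or bad signature: treat as a likely scam.")
```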
Law firms aren't Coke or Pepsi. They generally don't care about public opinion. Only client opinion matters. What they don't want to see is their name being used for criminal activity. But being seen as dark and scary isn't always a bad thing (see Wolfram & Hart).
It seems like, more and more, the response to fake news amounts to security theater. The solutions that Google and Facebook are setting up don't seem like they would be effective in the real world. Instead, they are justifying more control over their social ecosystems.
If it is 'security theater', or alternatively just a tempest in a teacup, doing nothing would be a perfectly fine alternative, at least for the time being.
This is the definition of "liberal elitism". Near certainty? You keep telling yourself that, and the Trump supporters will keep voting their people in.
Surely the amount of disagreement on this makes it clear that it's not a 'near certainty'? Furthermore, an issue such as 'the election of a US President' strikes me as so inherently complex that we can say very little with near certainty.
That said, I'm very open to hearing why you think this is a certainty (honestly!).
> Though the goals may often be to promote one cause or candidate or to denigrate another, another objective appears to be sowing distrust and confusion in general, the authors wrote.
> In some cases, they said, the same fake accounts engaged with both sides of an issue "with the apparent intent of increasing tensions between supporters."
Where I come from, we call that trolling :) Sounds just like Usenet.
> "We're not arguing for censorship, we're just arguing for - take it off the page"
I actually facepalmed at this.
These people know that impression drives reality. They're basically saying "so hey, we found this great slippery slope that looks like a lot of fun!" What the hell?
The grail for algorithms should be to HELP display/persist the disproof of misinformation, objectively and efficiently. Not just to HIDE it and hope it goes away!
> The grail for algorithms should be to HELP display/persist the disproof of misinformation, objectively and efficiently. Not just to HIDE it and hope it goes away!
Or like, give people control over their own data..
I have ~/.procmailrc - where is my ~/.facebookrc :b
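A procmail-style feed filter is easy to imagine, at least; a toy sketch, where the rule format and the whole ~/.facebookrc idea are purely hypothetical:

```python
# Toy sketch of a procmail-style feed filter: (pattern, action) rules,
# first match wins. The ~/.facebookrc idea is pure wishful thinking.
import re

RULES = [  # what a hypothetical ~/.facebookrc might hold
    (r"(?i)you won't believe", "hide"),
    (r"(?i)sponsored", "hide"),
    (r".*", "show"),  # catch-all default, like procmail's final recipe
]

def filter_post(text: str) -> str:
    for pattern, action in RULES:
        if re.search(pattern, text):
            return action
    return "show"

print(filter_post("You won't believe what happened next!"))  # hide
print(filter_post("Photos from the family reunion"))         # show
```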
I believe that this takes us to a dark place. I fear that social currency is going to be the end of humanity. The price of verifying that every member is an actual human comes at a great cost to society. Social media should be fun and goofy, but if this continues, it is going to double as a passport soon (it almost already does).
Then you will be no one if you do not have a Facebook account.
The core problem is that many political activists, from both the left and right wing, need "fake accounts" because Facebook is quite ban-happy and we don't want to risk our personal accounts being banned for political speech.
Given that Facebook is now more or less necessary to stay in contact with friends and family, it should be regulated like a public communications utility - which means that banning people from communicating with friends and family must not happen, unless that is abused for spamming. Your DSL or phone provider is not allowed to cut your service for anything other than abuse either - so why should Facebook (and also Twitter and Snapchat!) be excepted?!
Did you read the article? It's not about banning fake accounts. It's about information operations. The government wouldn't let Russia use our phone lines to spread a disinformation campaign.
Half of the obviously fake ones that I report are allowed to stay up. Pretty sure Facebook employees have a working agreement with certain scam operations.
I don't think there's any need for conspiracy theories. The human cost and PR impact of incorrectly closing a real account are massive compared to the benefit of closing one more spammer.
Facebook is right to be incredibly cautious. Imagine getting your gmail or Facebook account shut down because of suspected fraud, and what a huge personal impact that would have on you. Facebook realizes this, and also realizes it is a PR nightmare every time one of those stories makes it onto HN / reddit.
I am not sure I agree with this. If your bank freezes your account based on suspicious activity, do you change banks, or are you (usually) thankful? If FB is so important that it cannot even be unavailable for a few days - which, BTW, I find a bit hilarious - then you'd better be prepared to accept that it has to operate with a similar level of caution. It seems like, on the one hand, people want FB to be as dependable as a bank (after all, there is a phrase 'banking on it'), while they also don't want the level of precaution that a bank is usually expected to take.
Not that I actually care if FB is successful or not in their venture, and I will be very happy if they go the way of MySpace, but I don't agree with the human cost and PR impact you are talking about.
> If your bank freezes your account based on suspicious activity, do you change banks, or are you (usually) thankful?
I stopped using my personal PayPal for this reason.
I think the issue is that, unfortunately, a lot of internet-based companies fail at customer management when it comes to edge cases. I see this in YouTube community relations, Facebook advertising and pages, and AWS Help. It can be extremely hard to get edge cases solved by a company that is technology focused.
Customer relations at brick-and-mortar companies seem more flexible. I think sitting down with a manager at a bank works wonders when there is some strange thing holding up your account.
Would it really be a "huge" impact if you were locked out of your account for a bit? They could have an appeals process. What negative things could actually occur in your life if you were denied access to Facebook for a week while they reviewed your appeal?
I do agree that 'conspiracy theory' may be too much here, though your explanation is not in line with reality, as Facebook is known around the world for blocking and closing accounts without blinking.
Turns out Gmail also has a reputation for doing this and locking you out of your digital life.
I'd much rather have my account verified and tied to some data than get 5+ fake friend requests per day. People who used to run scams on FB said that FB would give them free ad credits, and it never stopped them from adding people to groups or mass-messaging them using bots.
Getting that data "back" implied that it was yours to begin with...?
You at no point anted up the resources or effort to maintain that data; that was all them. You freely volunteered it to them in the name of 'convenience'; I don't see what part of that arrangement implies they owe you anything once you've finally realized the stupidity of that decision.
Life is not a series of 'okay, I learned my lesson, now please fix it for me' kindergarten tutorials. If it takes a significant loss of data you thought was rightfully yours for you to learn the value of data, then take your lumps and consider it a bargain.
You're moving the goalposts. First you said there's no personal cost to losing an account; now you're saying there is a cost but one deserves it. Which is it?
What if all comments had information about the poster's IP address? Maybe a unique hash based on IP, country of origin, and registered organization?
Maybe that way people could at least identify suspicious patterns themselves.
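A minimal sketch of what that could look like (the fields, the keyed hash, and the tag length are all hypothetical):

```python
# Hypothetical sketch of the parent's idea: publish an opaque tag derived
# from (IP, country, org) next to each comment so readers can spot patterns
# without seeing raw IPs. The keyed hash matters: the input space is small,
# so an unkeyed hash could be brute-forced back to the IP.
import hashlib
import hmac

SITE_SECRET = b"rotate-me-periodically"  # hypothetical per-site secret

def poster_tag(ip: str, country: str, org: str) -> str:
    """Stable, opaque identifier: identical inputs yield identical tags."""
    payload = f"{ip}|{country}|{org}".encode()
    return hmac.new(SITE_SECRET, payload, hashlib.sha256).hexdigest()[:12]

# Two comments with the same tag came from the same (IP, country, org) tuple.
print(poster_tag("203.0.113.7", "RU", "Example Hosting Ltd"))
```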
But even without any new data, a third party could identify troll rings on Reddit, YouTube, and Twitter by scraping the data.
Some pro-democracy group should probably be funding this kind of technical approach. It's likely to be a lot more effective than waiting for all these companies to do it themselves.
IP is not personally identifiable data; there's a European ruling on this.
Then there's a trove of techniques to mask IPs that bad guys know and regular people don't, so this would negatively affect regular people and have no effect on the targets.
> But even without any new data, a third party could identify troll rings on Reddit, YouTube, and Twitter by scraping the data.
I don't think it is as easy to mitigate as you make it seem. Here is a study which details a real-life sock puppet operation tested on Reddit, and the difficulties in detecting it:
FB, Twitter, and other social sites are under market pressure to maintain/increase their user base. At the same time, they must justify to skeptical customers/investors that the numbers they publish are legitimate somehow. So they must make a token effort to combat this, but really it is in their best interest to allow masquerading so long as it is sophisticated enough to not be too obvious.
> (hint: the US Constitution does not guarantee you a right to use Facebook, or any other private-sector service)
Oh yes, it does - your phone/DSL provider, for example, is not allowed to terminate your service for anything but missed payments and abuse (spamming/hacking). Facebook and other social networks must not be excepted from this!