>Facebook can no longer deny its moral responsibility to try to understand how cyberspace, law, and politics collide in each of the countries where it operates, nor its responsibility to do something about it.
That sentence doesn't make sense. Do something about what? How those things collide? It's very unclear what the author thinks Facebook is supposed to do, or what he thinks Facebook is actually responsible for. Is this a call to regulate posts so there's no fake news, or more, to prevent all speech that can incite bad actions? In perilous regions of the world, sometimes people are arrested for their posts, and can be seen to be connected with others on Facebook, so that they also fall under suspicion. Is Facebook supposed to keep those users safe somehow? I mean, what's the author's gripe?
A better article would have mentioned that free speech, a "good thing," albeit sometimes dangerous, is powerfully fostered by Facebook. Give it some credit. It would also have been fairer if it had mentioned how difficult it is to detect and thwart bad actors who find ways to abuse any platform. (Removing trolls from small forums is hard enough.) Instead, it insinuates that Facebook is willfully indifferent. At least it isn't accused of colluding with the bad actors, as sometimes happens when US companies do business overseas.
Yes, it is interesting that the author sees no place in the modern world for the kind of free speech enshrined in the U.S. Constitution. They are happy with a giant global monopolist controlling all speech, as long as they use that power to enforce their political agenda.
That is an incredibly dangerous mindset and one that seems to be becoming more pervasive in respectable circles.
> Yes, it is interesting that the author sees no place in the modern world for the kind of free speech enshrined in the U.S. Constitution.
Well, there is such a place. The U.S.
However, in Germany, German rules of speech apply. As a German, I consider this a good thing. And remember, the German constitution was written at the pleasure of the allies -- the US were apparently quite happy with the way our constitution balances freedom and responsibility.
And, corner cases aside, so am I. I prefer "responsible speech" to the US "wild west speech". This is largely cultural I'm sure.
Of course once we accept that Germany can insist on their own model of speech, so can China. Or Syria. Or Thailand. Which I find hardly ideal, given that I'd like to propagate liberal (and, dare I say it, "western") values. But I don't have a decent solution to this conundrum.
That is sort of the point. Once all speech transits corporate networks and those corporations are empowered to censor that speech then the public square protected by the U.S. Constitution no longer exists.
But Facebook already "censors" algorithmically. You are not shown a democratic slice from a marketplace of opinions. You are given a curated echo chamber of voices algorithmically optimized to keep you engaged/enraged. The voices that rise to the top are either the ones with the deepest pockets or the ones that most activate the lizard part of our brains.
> But I don't have a decent solution to this conundrum.
I have. The German model is wrong for the reasons you alluded to. If states can't be trusted to be the arbiter of acceptable speech, then they shouldn't have that job.
> And remember, the German constitution was written at the pleasure of the allies
That would be an appeal to authority, and we all know the US made few mistakes in the past, so I'm not sure why you'd want to use that fallacy.
> This is largely cultural I'm sure.
Must be. Thinking that the state is always right and bossing around people who disagree is historically very German.
The Constitutional guarantee of free speech mandates the behavior of the government to not interfere with the rights of private actors. When a private actor (such as Facebook) chooses to publish or suppress content from a third party, that is a voluntary exercise of Constitutionally protected speech.
Social media services have neither a moral nor legal obligation to publish hate speech.
Correct. But private corporations can choose whether or not to follow in the path of liberal democracy or authoritarianism in how they relate to their customers.
Facebook has more customers than any country has citizens so this question is of great importance to how their service continues to grow and impact the world.
> can choose whether or not to follow in the path of liberal democracy or authoritarianism
Liberal Democracy can also be authoritarian. The words you're thinking of are libertarian and authoritarian. Liberty as in free, authority as in ruled.
Do you not recognize that both words have the same Latin roots? It is only in the US where this illogical distinction is made because the word liberal has been bastardized.
"William Safire points out that liberalism is attacked by both the Right and the Left: by the Right for defending such practices as abortion, homosexuality, and atheism, by the Left for defending free enterprise and the rights of the individual over the collective."
>> the kind of free speech enshrined in the U.S. Constitution
> The Constitutional guarantee of free speech mandates the behavior of the government to not interfere with the rights of private actors
Free speech is an ethical concept that is (imo) fundamental to open societies. The US Constitution enshrines this ethical ideal by creating a prohibition against government interference in free speech. GP is lamenting that major corporations do not seem interested in the ethical ideal, not claiming that the codification of the ethical ideal found in the US Constitution requires major corporations to permit certain types of speech on their platform.
I think it is an oversimplification to say that free speech is a general ethical ideal. As with many things, the Constitution is attempting to strike a balance. The check on restriction of speech is the prohibition of the government to restrict speech, but there is an opposite check as well: private entities are permitted to restrict speech. In the most extreme case, this means people can determine for themselves what is discussed in their own homes. There is a lot of tricky gray area between private homes and the public square.
When people's lives depend on your actions and policies, you have some moral obligations no matter who you are - a government, a corporation or just an abusive lone parent.
Exactly. When Facebook reaches the size of a nation-state in terms of its influence (and it arguably has exceeded this, given how far the platform reaches), it ought to be held responsible.
Does anybody seriously think that any responsibility we demand from an actor like a government would go away if we wrote "limited liability company" on the door of the White House or Congress?
This is so true and something that is confused all the time by “free speech advocates”. It’s quite frustrating actually.
I'm not happy about the amplification of speech that social networks offer, but I'm even more unhappy about the attitude of "smart" techies who shrug their shoulders and say "free speech - can't do anything about that".
We can do something about that, but that would mean solving hard problems and not creating another way to type a few hundred words into a text box.
Free speech is a dangerous idea by itself, and without democracy and solid institutions, it can easily be the tool of populists and demagogues. It's not an absolute value all on its own.
Please elaborate on how populism and democracy are two separate entities, when, by definition, both are characterized by majority rule. Are you sure you aren't confusing democracy with republicanism?
One measure of a functioning democracy is how it approaches compromise between differing opinions and respecting the rights of those who don't agree with you. Actively pitting one group against another works against this. It can easily lead to what's known as the tyranny of the majority: https://en.wikipedia.org/wiki/Tyranny_of_the_majority
On the other hand some people are worried about the rise of a ruling elite centralized in government, media, and universities, with a partial influence and cooperation of the military industry. The deep state.
This group of decision-makers would like the public to think they operate on behalf of the interest of the state and act on the basis of "scientific principles".
Arguing that this is essentially social control and that this constitutes the basis of the "tyranny of the elite" may not be flawed.
There's always a need for balance between extreme tendencies. The liberal premise that the political elite has the right and obligation to make fundamental decisions on behalf of the mass of citizens has its limits.
The whole article doesn't make any sense. It's clickbait for people whose brains have buzzword recognition swapped in for critical thinking.
He talks about how falsehoods and doctored photos are being spread on Facebook, and that government officials are posting them on their pages. In the absence of Facebook, _only_ those government officials would have a platform. Why on earth would that be better?
He even pays lip service to the fact that this communications platform has enabled positive forms of communication as well (i.e., human rights organizing), and then quickly switches back to a complaint that Facebook didn't remain optimal for the specific workflow that Cambodian human rights workers employed.
His argument is fundamentally "don't give the unwashed masses a platform", but he doesn't have the courage/intellectual honesty/self-awareness to say so. I can imagine a contemporary counterpart of this idiot writing the same article about the printing press or the telephone.
I've got a very low opinion of Facebook as a company, but these complaints literally boil down to "allowing people to communicate more effectively". Newsflash! People are shitty. They do shitty things with whatever technology they get their hands on. The idea that the pre-Facebook world would be more or less likely to have the norms you like only makes sense inasmuch as the norms you like are oriented around limited communication options for the masses.
The author is a journalist. Facebook is anathema to journalists because without it only journalists would have a voice. With Facebook, everyone has a voice, but no-one has been trained in how to use their voice.
The author is also female. Not that that is relevant to any point except your use of pronoun.
> The author is a journalist. Facebook is anathema to journalists because without it only journalists would have a voice.
Yea, at some level I don't expect honesty out of journalists: there's no sainthood exam to become a journalist; they're people, and people aren't generally honest. It's just bizarre to see how credulous HN is when it comes to swallowing stuff like this (relatively) uncritically.
> The author is also female. Not that that is relevant to any point except your use of pronoun.
Thanks for pointing this out. I commented on a separate post about Facebook which was written by a "Jim Davies" and I think there was a crossed wire in there.
Yeah, the article is a bit vague. But the stuff happening in Myanmar, e.g. 'They Threw My Baby Into a Fire' [1], helped along by Facebook giving a platform to people like Wirathu's "Muslims are dogs" type stuff [2], is horrible. Facebook could at least tune their algorithms for more sharing of let's-be-tolerant posts and less of let's-kill-all-the-outsiders ones.
The article is a call to action for Facebook to work with local stakeholders in order to distinguish local propaganda from genuine news and treat the two differently. The author doesn't believe that Facebook commits enough resources to that effort, and cites an email from Facebook saying there isn't a one-size-fits-all solution, which, it seems to the author, is exactly what Facebook is trying to apply. The author is also saying that Facebook has a moral imperative because its lack of involvement is causing great harm to people, and uses the Rohingya as an example. If your country is controlled by a military junta, then the real-name policy has a staggeringly different effect in Myanmar than it does in the United States, where there is a strong rule of law, for example.
Hah. If Facebook did that it would enjoy the same fate as media firms around the world - entire worlds of fact checkers, staffing costs, HR costs - and their profitability would evaporate.
The article is fair. Wilful indifference has been the status quo for how social networks have been run for most of their existence, under a kind of techno-libertarian idealist vision of free speech. Child pornography was the first hard check against that, but it's still taken a long time for these companies to realise that they are in control of the narrative through their recommendation engines and filters and content policies and that they need to actively engage with the social responsibility that entails.
It makes more sense when it is acknowledged that "Foreign Policy" is the official magazine of the Council on Foreign Relations, a group of people who are used to running the world.
The CFR's main concern is that Facebook allows non-traditional actors, mainly not them or organizations which they do not have control over, to have some influence on the narrative that explains world events to the public at large in each particular country. They consider this an undesirable state of affairs. Thus, Facebook should, in their opinion, work to alter its policies to maintain and even amplify the dominance with regards to narrative setting that the traditional media has given to the traditional narrative authors, which are composed of CFR members and their proxies.
> It makes more sense when it is acknowledged that "Foreign Policy" is the official magazine of the Council on Foreign Relations, a group of people who are used to running the world.
My idea for a quick fix is to remove likes/dislikes/emoji reactions. If people can't see how others feel about a post/article in an instant, they might have to read and decide for themselves how to feel.
Upvotes on HN/Reddit do essentially the same thing. I don't have the user statistics, but I'd imagine the typical HN reader only reads a sampling of the top-voted comments, which essentially allows them "to see how others feel about a post/article in an instant."
If you want an example of the opposite approach, you can always look at 2009-era YouTube, when all comments shown were in a strict "newest" order. If you remember, YouTube had a bit of reputation for some of the lowest-quality comment sections anywhere.
There simply is no "quick fix" for clickbait, aside from altering basic human psychology, or some type of (probably complex) regulation.
What I've noticed a lot on sites like HN and Reddit is that there's always heavy 'tilting'. If any comment ever drops below +1 (or below that of a different viewpoint, so +10 vs +1), people just blindly start ramming the downvote button without any thought, as if they aren't capable of analyzing the comment on its own merit because 'the crowd' has already decided for them and they take the path of least resistance in terms of mental effort. This is also why 'brigading' is so tremendously effective, you only have to tilt the balance a little bit and the crowd will take care of the rest.
For what it's worth, I always upvote downvoted comments when they are contributing something novel to the thread and/or taking a position which is explained and justified, and of course are civil. Even when I wouldn't have otherwise upvoted them, or when I don't agree with them.
That's just to help get them back to neutral, and avoid their point being grayed out and thus probably more likely skipped over by others. I think everyone should be allowed to put across their point, and we all end up better for seeing those different angles.
I quietly hope that others who agree with this also do the same, thus making the problem you describe a little less pronounced. We all need to play our role in creating the kinds of online communities we want to see.
I've caught myself before sometimes being more careless with the downvote button if the comment was already grayed out. After I realized this risk, I now attempt to be more mindful.
So it is a thing that happens, at least to a certain extent.
I find top YouTube comments to be more or less funny/enjoyable now. The shitty comments get downvoted, so I no longer see them unless I deliberately try to; they simply don't appear. It's very different from YouTube comments back then. But they are still mostly unhelpful/uninformative.
This might be true for some videos. But there seems to be a first-in approach behind the sorting. The first person who says something remotely relevant is often the top comment for a long while. I cannot see much quality in those posts usually. Newer posts barely have a chance to get to the top unless you are a YouTube influencer in some way.
Reputation and kill switches. Ultimately: editorial voice.
If I've got tools to block sources of clickbait, and those feed into reputation systems, that's a start. On Google's properties, I can block G+ posters, but I cannot block a YouTube channel. Not even if I'm logged in -- that channel will still show up in searches and recommendations. I'm actually better off not logging in, so that recommendations are keyed far more to current search and view history, and there is no long tail of "you watched this once three years ago so we'll cram it down your throat".
For Web, I've taken to fairly ruthlessly blocking clickbait domains at the firewall. They pretty much completely disappear.
(And, if I've got second thoughts or want to poke through: archive or simplified-view sites will generally show me the content. And it's almost never worth looking at.)
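A minimal sketch of the hosts-file variant of this approach, for anyone who wants to try it without firewall hardware (the domains below are made-up placeholders, not a real blocklist):

```shell
# Generate hosts-file entries that null-route a list of domains.
# "clickbait.example" and "outrage.example" are placeholders.
blocklist="clickbait.example outrage.example"
for d in $blocklist; do
  printf '0.0.0.0 %s\n' "$d"
done
# Append the output to /etc/hosts (or your resolver's blocklist)
# and those sites simply stop resolving on your machine.
```

Crude, but it has the same effect as firewall rules for browsing purposes, and it disappears the sites everywhere, not just in one browser.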
I'm not saying that my individual tools will scale out, but if they're tied in to larger systems, and feed back into what's promoted, there's the possibility of this.
Ultimately, though, massive media channels have to take an active voice in what they do or don't permit, most especially where there are real and serious harms possible. That's the editorial voice option. It's not a popular opinion on much of the Web, and it's not one I would have advocated even just a few years ago, but I'm coming to feel it's increasingly proving necessary, and that the historical record, over the past 500 years of mass communications, starting with the printing press, bears this out repeatedly.
As you point out, upvotes are a useful filter. However, "Like"-only voting allows things to go viral that piss off large chunks of your readership. Readers end up being assaulted with emotional appeals devoid of content which many people disagree with, but without downvotes those objections simply don't count.
I.e., the stuff that is upvoted by people who "upvote like you" is ranked higher. (I would like this feature on HN, as it would eliminate e.g. articles about CSS which I find boring; of course that is personal, but that's the point).
Otoh, there's a danger of "echo chambers"; not sure how to deal with that.
I think they should remove the media content they add to external links and only show URLs. If people had to click a link to see the title, summary, images, or other clickbait meta content, they'd be less likely to share links thoughtlessly. They might also see that the URL is untrusted and choose not to click through. Facebook could charge brands to show their content alongside clickbait media. It would be a lot easier to verify the authenticity of ten thousand brands than every random link that's shared.
Wouldn't solve all the issues but it would be a start.
Passive ranking is subject to the same bias, though. If I click - even if I dislike the content - it contributes positively.
I don't think there's a quick fix for wanting autonomously produced graph-based timeline without the ability to exploit it for clicks, propaganda, etc.
I've been doing this using uBlock Origin's filtering on the social media sites I visit; anecdotally, it makes a significant difference in how much I engage with content that is not highly upvoted. I highly recommend it.
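As a rough illustration, cosmetic filters along these lines hide the score elements. The selectors here are guesses from memory and will vary with each site's current markup, so treat them as a starting point:

```
! Hypothetical uBlock Origin cosmetic filters to hide vote counts.
! Selectors are illustrative and may not match current site markup.
news.ycombinator.com##.score
old.reddit.com##.score
```

Paste them into uBlock Origin's "My filters" tab, then check with the element picker whether the selectors actually match on the pages you visit.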
If there's a market for something like it, there will exist something like it. The interesting question is how to make something like it so uncompetitive that nothing like it can survive.
I was under the impression that you could see it when clicking the "[-]" at the right side of the comment metadata. But it seems it's just the number of child comments, my mistake.
Facebook has to be broken down to pieces. I think it has reached a state where it cannot be allowed to continue as a single company being able to observe/monitor public and private communication / social interaction of so many people and not being regulated and controlled by the government.
I'm afraid it won't work well without extra regulation. Even if Facebook is broken up, it has shown the way. Before, it was MySpace (although it was never so popular). If someone breaks up FB, the strongest competitor living in its shadow will step up.
I think it is the nature of certain services to centralize. Search engines, social networks. Unless someone forces them to inter-operate, it's winner takes all.
This is one of the main reasons why I have a custom domain for personal email. Right now, it forwards everything to gmail. In the future, I can have it forward to a different provider, or a mail server that I run (if I feel particularly masochistic). All of this without having to tell everyone I know to start using a different email address.
While social media profile portability is generally impossible, almost all email providers will let you export email as a MBOX that you can then import into your new account.
Moving your existing emails is not equivalent to phone number portability. Equivalency would be allowing me to take my gmail address and have it point to Proton Mail instead of Gmail.
On the social network side full portability would mean being able to take your profile off of facebook but still having full interoperability with facebook so that users could still see your new posts and vice-versa.
Ahh, I see what you're saying. The issue here is that most people have a company's name baked into their email address, so the equivalent for phone numbers would be like me having to dial the carrier name along with the phone number. There is a solution for this problem: give everyone an email address on their own domain, and allow the service "backing" it to be swapped out (this is what I do: I have an email on my domain, but it's transparently mapped to a preexisting Gmail inbox). Of course, the barrier here is that people are used to email being free and easy to set up, while this solution requires a domain name registration (so, around $10-$15 a year minimum) and some technical knowledge on how to set up MX records.
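Concretely, the DNS side of that setup is just a couple of records. A sketch using the placeholder domain example.com and Google's published MX hosts (if you move providers later, these records are all that change):

```
; Hypothetical zone snippet for a custom domain routed to Gmail.
; The owner name and TTL are illustrative; only the MX targets
; need to change when switching mail providers.
example.com.   3600  IN  MX  1  aspmx.l.google.com.
example.com.   3600  IN  MX  5  alt1.aspmx.l.google.com.
```

Everyone keeps emailing you at the same address; the records silently decide which provider's servers receive the mail.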
The US government should take possession of all Facebook shares and use the profits to fund universal basic income. I don't think anyone would complain... Except a tiny percent of the population who owned Facebook shares but who cares, they're evil/morally bankrupt anyway.
Government has to be broken down to pieces. I think it has reached a state where it cannot be allowed to continue as a single government being able to observe/monitor public and private communication / social interaction of so many people and not being regulated and controlled by market forces.
Aside from the federal/state/local split, which has already been mentioned, the Constitution split Executive/Legislative/Judicial branches out at the federal level. We may have weakened the intent of those checks and balances in our current environment, but they certainly were designed into the system.
So yes, to answer your question - I would definitely apply the same logic to government.
Part of me (my young revolutionary part) is libertarian so yes I would say government is to be broken down.
To be more precise in regard to your question: I want to live in a world where the internet and social networks are businesses that don't rely on the government's good will.
I would prefer there to be no Great Firewall of China, and no great firewall of the US and Europe as is forming right now.
I know that is a losing battle. But we should fight for internet being a place without nationalistic restrictions that is able to operate without corrupt people in power being able to force their worldview upon us.
The biggest problem of big companies is when they hurt consumers with predatory practices.
With Facebook I fail to see how that's the case. Everybody is perfectly capable of not being hurt, of their own accord, by just not using it.
If the problem is how it influences politics and government, I think that's even a point against letting the government mess with it, as it is a power that can be easily misused by the party in power.
Yeah, why don’t opioid addicts just stop? And why don’t fat people just lose weight and eat healthy? And why don’t poor people just stop being poor? Clearly every issue in society is a result of a lack of personal responsibility.
I'm against someone deciding that I cannot do something because it can be bad for me.
You don't live in a vacuum or a state with a population of one. If your actions have consequences - and most do, even if it's only through the socialized pooling effect of insurance - then there is a moral argument for society to regulate them.
Personally I'm more of a fan of utilitarian thinking that takes into account second and third order effects. It's not enough to regulate X because X is bad; you need to look at what X is currently displacing, black market effects, underlying demand. But it's just silly to deny the moral legitimacy of regulation in itself - we all live in a society and it's childishly selfish to think you can just do as you please. It's prohibitively expensive to capture all the externalities of our actions, so we regulate instead.
Of all those issues, only opioids are regulated and there seems to be some evidence that drug regulations themselves are part of rise in addictions. I'm not seeing much evidence these days that new laws and giving government more control is a good fix for anything.
Sure, just stop using Facebook. I mean, you’ll miss your little cousin growing up, your sister’s wedding photos, invitations to fun events, and being able to easily communicate with most people you know, but yeah, it’s so easy to just stop. Network effects don’t exist at all. And hey, stop using your cell phone too. I mean, you have a choice. Don’t like your job? Might as well just quit. There certainly is no way to dislike your situation but continue in it anyway because you have to or it’s all you know.
In fact I quit my job when I found it sucked and just did my own thing. Same with Facebook. I see your arguments but I don't think they are valid. If you want to, there is always a way. Obviously you simply don't want to.
I also don't miss events or pictures. Facebook is not everything. It never was. There is this thing called communication, which works pretty well when you quit Facebook.
You're kidding, right? This is grade A trolling. "There is always a way." Ah yes, millions of low-wage workers hate their jobs, but it's obviously some kind of personal, moral failing that they don't pull themselves up by their bootstraps.
Don’t play with a ouija board so much because I think you’re possessed by the ghost of Ayn Rand.
I think by blocking these people, Facebook violated the First Amendment, and probably even conspired with a foreign totalitarian government.
Under US law, however, Facebook is free to do whatever it wants with the data stored on the servers it owns. Also, the blocked people accepted the EULA. So, according to the letter of the law, FB probably did nothing wrong.
I have no love for Facebook, but it's important to recognize that Facebook's role in these problems is only incidental. If there were no Facebook, we'd have the same growing fake news, astroturfing, and bot problems—these are inherent to social media and widespread internet access. I'm not against regulation of Facebook per se, but the broader question of how we contend with misinformation and propaganda in an online world is much more important.
> Facebook role in these problems is only incidental. If there was no Facebook, we'd have the same growing fake news, astroturfing and bot problem
You don't know this. A major flaw of Facebook is that its CEO thinks his users are "dumb fucks", is hell-bent on growth without checks, and has a morality that does not include personal privacy or seemingly any human empathy for the users who made him rich. He has refused to acknowledge the problems on his platform that led to today's issues with fake news.
Perhaps a different company, run by a different person or group of people, will be less intentionally vile and evil.
I think it would be interesting if online personas worked a little bit like video game characters. For instance, in an RPG, no one would believe a character could be equally effective at wielding heavy weapons and casting spells, or other tasks: you generally have to pick one or two key areas of expertise and be “OK at best” when anything else comes along.
Yet somehow, in the world of online discourse, everyone’s an equal expert on everything? What are the odds that a person really has good knowledge of both nuclear physics and farming?
If you could only post or upvote things that were considered relevant to a few limited areas of expertise in your profile, we would see a very different online world. Suddenly, seeing an article on the front page on a certain topic might actually mean that it got there due to “likes” from people with relevant experience, and not just whatever random definitely-not-statistically-valid sample of “friends” put it there.
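A toy sketch of the mechanism being proposed, where a vote only counts if the voter's declared expertise overlaps the post's topic tags (all tags and values here are invented for illustration):

```python
# Toy model of expertise-gated voting: a vote counts only when the
# voter's declared areas of expertise overlap the post's topic tags.
def weighted_score(post_tags, votes):
    """votes is a list of (voter_expertise_set, vote_value) pairs."""
    tags = set(post_tags)
    return sum(value for expertise, value in votes if expertise & tags)

votes = [({"physics"}, 1), ({"farming"}, 1), ({"cooking"}, 1)]
print(weighted_score(["physics", "farming"], votes))  # the cooking vote is ignored
```

The hard parts, of course, are everything this sketch takes as given: who assigns the expertise sets, and who tags the posts.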
That's an interesting idea, but it's also probably one that would be impossible to implement effectively. For example:
1. Who would determine any given person's area of expertise? This is a difficult and potentially contentious thing to do.
2. People who are ostensibly experts in a topic, even if they have a lot of experience with it, often have incredibly uninformed opinions. As a simple example, most people I know have a lot of experience with dating/relationships, parenting, and politics, but their knowledge of these topics is based on a bizarre and woefully incomplete combination of old wives' tales, anecdotes, and just-so stories. They fully believe they are experts and often have some paper evidence to back this up. Something similar can be said of programmers. There are plenty of people who can write a program that works, but are they experts? To a non-technologist, maybe (and maybe not). To the grande dame or grand homme of a field of programming, probably not. The range of definitions of an "expert" is huge.
The first issue is how to make sure that people aren’t creating 100 profiles each (something that’s a real problem on sites today). This is where I expect data breaches to “help”; as it becomes more and more ridiculous that our current identity schemes aren’t working, something better can be developed and act as a basis for a unique-by-birth profile that is as anonymous as you need it to be.
Part of the problem of expertise would be solved by the natural upper limit on how much expertise is reasonable for one person to have. No one should be able to claim strong experience in more areas than they could have conceivably spent time learning/practicing. Therefore, if you’re going to make your profile claim expertise in something you know nothing about, you’d definitely be wasting your limited opportunity to comment and you wouldn’t be able to do that for everything. Also, if comments/votes/etc. on something are only permitted for people in related fields, an uninformed person’s ramblings might still be drowned out by the only other people commenting.
Determining how to limit a profile is tricky. Some restrictions would be easy, e.g. a 14-year-old can’t claim 30 years experience in an industry. Some would be harder, perhaps enabled by webs of trust (e.g. as a side effect of employment, your employer adds credibility to your profile somehow in a particular area of expertise).
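The age-vs-claimed-experience restriction above could be sketched as a simple plausibility check. Everything here (the field names, the minimum working age, the `ExpertiseClaim` shape) is a hypothetical illustration of the idea, not any existing system:

```python
from dataclasses import dataclass

MIN_WORKING_AGE = 14  # assumed earliest age at which experience could start accruing

@dataclass
class ExpertiseClaim:
    field: str
    years: int

def plausible(age: int, claims: list[ExpertiseClaim]) -> bool:
    """Reject profiles whose claimed experience couldn't fit in one lifetime.

    Each individual claim must fit between MIN_WORKING_AGE and the person's
    current age. Claims may overlap in time (parenting while programming),
    so we bound each claim separately rather than summing them.
    """
    max_years = max(age - MIN_WORKING_AGE, 0)
    return all(0 <= c.years <= max_years for c in claims)

# A 14-year-old claiming 30 years in an industry is rejected:
# plausible(14, [ExpertiseClaim("manufacturing", 30)]) -> False
```

The harder restrictions the comment mentions (webs of trust, employer attestations) would need cryptographic signatures rather than a local check like this.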
It's gotta feel bad for the local reporters who are literally risking their lives reporting news in Cambodia to then have their views drop 80% because Facebook decided to experiment with the Timeline feature. Facebook is barely tied to Cambodia's fate, or all that concerned, since it's only 18 million users.
This article was published three weeks ago, so it doesn't mention the recent forced dissolution of the main opposition party in Cambodia, the CNRP. That has pretty much landed Cambodia unequivocally in the category of dictatorship, though most younger Cambodians would tell you it was never anything else.
I also believe I know who the unnamed Cambodian blogger quoted in the article is. She is known for discussing progressive issues and has well-deserved popularity among young Cambodian people, but apparently circulation of her posts and videos took a huge hit after these changes in the algorithm.
It is a little ironic that the Cambodia Daily is having its "second life" online -- they were so embittered by the impact of online journalism on traditional print journalism that there was an editorial policy against even posting links or URLs in any article on their website. Of course, I'm not cheering for their demise, given the need for a free press here. I know a number of people who work at the Phnom Penh Post, the other big Cambodian newspaper, who have plainly said in recent weeks that they can't / won't write anything critical of the government. The political climate is feeling more and more volatile here by the moment. In the days before the CNRP was dissolved, we even had military police going through every stairwell of every tiny alley, including my own, on some made-up pretense but really as a cheap intimidation tactic. As the election approaches there will be squads of heavily armed military police posted on nearly every corner of many districts in Phnom Penh.
Is it fine to have "Mass Scale socio-political experiments on countries that are a slight socio-political shift from civil war"? That is pretty much what a major foreign third party news distribution platform did, while having very little skin in the game.
"Move Fast and Break Things" is ethically and morally unsound when the end result is making it easier to silence, jail, and execute the press.
Unfortunately, there is no solution in our current system. The only solution would be to accept that the western (current) capitalist decadent society is causing wealth and power to concentrate in a very narrow group of chosen people who end up having extreme power over masses of people that make up society today.
The only way is to try to regulate this capitalist wet dream and break it down into smaller companies with less power to pay off politicians. Legislation will have to change dramatically.
American culture really comes down to this. The driving force behind our "leaders" is a blind will to dominate and hoard wealth. It feels like we're reaching the point in history where this human instinct is backfiring tremendously.
I have a theory that one way of decentralizing power is to create protocol standards that help us share freely. For example, if the concept of a user was somewhat standardized across social media, social media companies could expose an API for sending cross-platform messages, tagging each other in photos, etc.
As the internet itself became standardized, so must our social media. Hell, what's stopping it from becoming a crowd-funded FOSS network of independent social media platforms? Costs a whole lot less to run a social media website if you're not trying to take over the world!
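A minimal sketch of what the standardized cross-platform messaging idea above might look like, assuming an email-style `handle@platform` addressing convention. Everything here (the scheme, the class names) is invented for illustration, not a real protocol:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FederatedUser:
    # Hypothetical standardized identity: a handle plus a home platform,
    # e.g. "alice@examplebook" -- similar in spirit to email addressing.
    handle: str
    platform: str

    @classmethod
    def parse(cls, address: str) -> "FederatedUser":
        handle, _, platform = address.partition("@")
        if not handle or not platform:
            raise ValueError(f"expected handle@platform, got {address!r}")
        return cls(handle, platform)

@dataclass
class CrossPlatformMessage:
    sender: FederatedUser
    recipient: FederatedUser
    body: str

    def needs_federation(self) -> bool:
        # True when the message must leave the sender's home platform.
        return self.sender.platform != self.recipient.platform

msg = CrossPlatformMessage(
    sender=FederatedUser.parse("alice@examplebook"),
    recipient=FederatedUser.parse("bob@othergram"),
    body="Hi from another network!",
)
```

Federation protocols like ActivityPub (used by Mastodon) take roughly this approach: identity includes the home server, and cross-platform delivery is an explicit step.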
Seriously now: I think there needs to be a grassroots push for a very selective approach to technology adoption as described by Kevin Kelly in his book What Technology Wants. If people generally treated new apps and devices with a skeptical 'what does this do for me?' attitude, there might be more herd immunity against abuses of power by large tech firms (Google stealing location data from users, Intel engineering backdoors into their products, and Facebook's cynical propaganda profiteering) and the resulting social disruption. In Canada, engineers in training are asked to swear an oath that they will serve the public good and only work for honest enterprises. I think that if those values were more widely taught and accepted, toleration of abuses within large tech companies would evaporate. It's a cultural change that needs to happen, and the way to effect cultural change is to practice what you believe, "be the change" as Gandhi said, and to share your ideas with others in your workplace and community.
Though we have to say many engineers don't respect that oath and are not held responsible... For example, civil engineering firms giving money to political parties to win contracts.
> If a solution depends first on wishing reality was different, it’s fantasy. At best a reality to realize.
What an awful mischaracterization of my suggestion. I suggest that a person change themselves as a way of influencing the people around them (e.g. stop using facebook if they think facebook has a malign influence on our culture).
If any inconvenience at all is too much inconvenience for a person to live by their principles, I would say that person doesn't have any perceptible principles at all. 'Domesticated animal' might be an apt descriptor for such a person, but it's really unfair to the class of domesticated animals because they don't have the freedom to choose anything but domestication for themselves.
This is also true for a lot of older people new to the internet. My mother doesn’t understand that her laptop can do more than browse Facebook and buy things on Amazon. Her bookmarks are the random pages of her friends. Her primary source of news is fast becoming whatever click bait she sees there.
I tell her not to trust Facebook, I explain how she is their product and they exploit her but she doesn’t listen.
For people with less formal education, Facebook often equals the Internet. I had a startup that sold B2B2C products to restaurants. I had to train waiters and cashiers to use our product on their phones (as the customers would). Most of the time, when I asked them to open the internet, they would open the Facebook app directly. They wouldn't understand "browser" or "Chrome" either. When I clicked on the Chrome icon for them, they would say "Oh, you mean Google??". And then I had to go on and explain that you could go to the URL field and type the specific address of a site.
This is a good point and one that is recognized often here: there's a good chunk of internet users that are mobile only.
That's an entirely different way to come up and learn technology than those of us who went through any variation of DOS > MS or Apple Desktops > Smartphones.
Conjecture: the browser, if thought about at all by mobile users, is an afterthought to apps like FB, messaging, Youtube, etc.
I guess this is something like a CRM, where your primary customers are another business, but you're building something that's visible to their customers, so you're also trying to satisfy those end users.
Never seen the term used before, but pretty sure it means Business to Business to Consumer i.e. they are a business that sells consumer facing software to other businesses. Example would be a POS (Point of Sale) system which I am guessing based on above anecdote is the family of software grandparent was referring to.
I can see a few problems with this. The concept of "you are their product" is not an easy one to understand if people don't have any relevant background or experience in thinking critically about the problem. It requires thinking through several non-obvious layers: for example, it's easy for people to get stuck at "yes, I don't pay for Facebook, that's why it has advertising" without making the leap to "most businesses cannot just support themselves on advertising, they actually sell something to you too - so how does Facebook do it? By making their advertising more effective. How do they make their advertising more effective? By building up a detailed profile of everything about you without your knowledge or consent"... and even then people will get stuck at why their knowing so much about you is bad.
Secondly, exploitation sounds very scary and bad, but it makes people think of slaves and working in coal pits and blood diamonds, not looking at pictures of babies from the comfort of their sofa.
Do I have a solution? No, of course not. Many people just don't want to hear it, they want to carry on blithely communicating with their friends in real-time for free and looking at pictures of babies.
The “Facebook sells your data” myth is too often rehashed. Facebook has done studies and actually found that your personal info is not all that valuable in and of itself. Your likes, statuses, friends and photos do not provide meaningful targeting data. (Age and geography are meaningful, but those are already available with traditional media). Facebook’s huge money maker is its ability to let other companies target their own customers on the Facebook platform.
If Nordstrom keeps track of its website customers and what products they look at, and can then target those same users directly on Facebook with related products, they're willing to pay way more for that kind of targeting than they are for "women in their 40s who 'like' shopping". Facebook supports both, but Facebook's real success comes from letting companies leverage their own data on the Facebook platform.
That and the fact that they have a billion users log in per day. If any other site had that kind of traffic they’d be killing it regardless of the personal data they possessed.
What killed Facebook for me was their creepy targeted advertising. About a year ago I saw an ad buried in my facebook feed to "buy Gallium and Bismuth metal in Australia" I thought it was an oddly specific ad so I made a joking post about it - turned out several of my friends were seeing the same ad.
A common friend we all shared, who is a high school science teacher, spoke up. He explained that his class was studying the periodic table and he had purchased samples of Bismuth and Gallium online to show to the class. That was the eye-opening moment for me: if my friends and I can see ads based on another friend's online purchases, what can my friends see about me? I've rarely used the site since then.
To be honest, I think I got lucky that I first became a frequent web user pre-Facebook, as I can see that I might have ended up a similar way. A decent sized chunk of my early web usage had some form of social element, but chat rooms, web forums and news aggregators don't have the same stigma as social media, perhaps because the discussions revolve around subjects someone is interested in, rather than discussions surrounding the people using that service. HN is a continuation of that early online behaviour, for me at least. However, if I'd grown up in the post-Facebook era, would my Internet activity gravitate towards social media for online discussions? I don't know for sure, but it seems plausible.
I'd say the larger question is: how do we let people know there are online communities outside Facebook that they might enjoy just as much or even more? Perhaps Reddit is a useful gateway out of the social media bubble; there's a huge range of communities on Reddit, and it'd be hard not to find something someone was interested in.
Lately I have forwarded obviously wrong WhatsApp messages to my closest ones, just to impress on them that not every forwarded message is true or happening in the real world.
I feel FB is kind of dying anyway. All the trendy/cool/geek people have left FB. Users don't share much stuff anymore. There is this ego fatigue too: people are tired of playing this game of making their lives look cool on FB.
It's no coincidence that FB bought WhatsApp, Instagram, and Oculus: they have the metrics and they need to move on to something else.
Facebook vectored away from trendy/cool/geek when it stopped requiring an email with .edu. It just took awhile for people to get used to the internet. Old people use it to see photos of their grandkids. It's a bigger and more sustainable user base than young people. Young people turn over. The first person I knew with Facebook is 33 with three children and a mortgage.
Facebook itself seems to be quite comfortable with the world it has created; a place where every vulnerable (and even perhaps, not so vulnerable) mind can be influenced by information regardless of factual accuracy!
I can't really tell if any particular influence floats from FB to me through my friends; most of my friends don't use FB either, or they're not really the type to care about the things Facebook tends to influence.
If you're not one for consumerism in the first place, it's not hard to fight away advertising influence, and if you're not that active politically (although you should be!) then again, FB can't really influence you that much. You could argue that what FB doesn't show you is also a form of influence, but if you weren't on FB at all then that argument wouldn't hold.
I guess the world it's created definitely has affected me though. If it could be argued that political leaders are elected based on influence fostered on social media, I can't avoid being affected by that.
Social conversation is how you spread influence, and that's not really a bad thing in itself. What's clearly not great about Facebook however, is how you can pay for that same influence to be injected into normal social conversation.
> For a long time, Silicon Valley espoused a dogma of information neutrality — claiming, falsely, that search engines and social networks were only impartial tools.
"Why do you close your eyes?", Sussman asked his teacher.
As much as I dislike Facebook, we have to stop telling them they are responsible for everything. Whatever anyone says on Facebook is fully our own fault, not Facebook's. Everything else is just censorship, and just as bad. There is no such thing as responsible censorship: one side will always get pissed, and who are we, or Facebook, to decide what's right?
If it's a fully private entity like Facebook, is it truly "censorship" in the derisive sense that we associate with it?
If a user on Facebook posts aggressive and distasteful white identity propaganda which begins alienating other users, do Facebook and its advertisers not have the right to use the platform they created to exclude such content should they desire to do so? Does a fictitious "right to be heard" from a privately-administered entity supersede Facebook's actual right to property and self-determination over that property?
If "censorship" is truly a problem here, only another free-market alternative to Facebook is the correct solution. Gab did it with Twitter. Now, whether or not Gab will ever be as successful as Twitter because of the userbase it curates is an entirely different story. One is not entitled to a specific outcome, and if rejecting racism is what makes big business big, there's not much left that the other side of this debate can complain about.
This is exactly censorship. If people want to listen, they listen. If not, then not. Preselecting based on YOUR opinion is and always will be censorship.
I strongly disagree that there is no such thing as responsible censorship. Furthermore censorship is not always, "...just as bad."
As an extreme example of good censorship look at child pornography. We have laws that prohibit speech in the form of harassment. Some countries prohibit anti-abortion protestors from harassing women trying to get an abortion. There are lots of cases where censorship is good.
Anti-harassment is not censorship. 'In your face' is different from having the info somewhere when you want it (which is the case with Facebook, because you can simply ignore or unfollow users and groups with opinions you don't like).
Those are people who are vocal about their choices.
Based on what you're reading here, the average HNer uses Fastmail, has no Facebook account, writes code in Rust, exercises a lot, wants a self-driving car and works at or owns a start-up.
While most people actually reading this site are most likely Java, JS, or .NET programmers who work at a large enterprise and use Gmail and Facebook.
I quit a little less than a year ago but honestly it’s a drop in the bucket as compared to the rest of the people in my social circle. I’m resigned to the fact that millions of people don’t care about whatever Facebook does, that Facebook not only won the social network wars but they are astoundingly profitable and lucrative, and that it kind of doesn’t matter what Facebook does people will still use them.
Seeing the recent wave of discourse surrounding social media platforms has been interesting.
We had similar problems before them. Tabloids, conspiracy theory subculture, clickbait, etc, are nothing new.
And we had "feed readers" that functioned to aggregate news sources that we subscribed/followed/liked. Which, in theory, should have catered to us in the way that people now label an "echo chamber".
But there are a few differences that stick out to me:
- More people producing content (back in the day anyone could create a blog or personal site, but the barrier of entry is lowered by social media platforms)
- These platforms are promoting some content above other content (which goes beyond aggregation)
- There's more data being collected and the tech to analyze it is increasing (targeting segments is easier)
Facebook didn't create that world -- it has always existed. Facebook only exposed some previously underrated features of the human society. It's very naive to expect that it can fix it. A social network (or any other technology) can't fix humans just like telephone or television couldn't. Like any massive technology shift it requires new social education. For instance, be very mindful about what you post and who can read/see it, beware of social bubbles and bots, employ critical thinking. The process of education will take time but eventually new online social norms will be formed and the early adopter problems will fade away.
I know this comment might attract a lot of hate, but I will put it here anyway.
The main reason why Facebook is not in China is that Chinese government wishes to regulate Facebook while Facebook (like Google) refuses to obey.
In some ways, the Chinese government was right in its judgement that unfiltered and unregulated freedom of speech brings its own problems. And now the Western free world is starting to see these problems and impose regulations.
There's still a big difference: for us, privately owned companies with whose publications we struggle have been with us for a very long time.
Germany, where I'm from, has a more nuanced concept of free speech than the US, legally banning a greater (but overall still very small) number of topics or expressions.
However, outright banning of a media company is essentially impossible: prosecutors can only go after individual publications. They are using the same regulations to go after Facebook posts as they use to go after newspaper articles. As you might imagine, it's not working out, because of sheer volume among other factors. So they've recently created a new law to simplify oversight of social media.
However this is not really a surprising new development. We're not "starting to see these problems", we've struggled with these problems throughout the existence of the German Republic. Technology is currently changing their shape, as it has before (e.g with the advent of private TV), and we're changing our approach to match.
>And now the Western free world is starting to see these problems and impose regulations.
This has not happened in the US, and it is only the authoritarian types that you are hearing from even discussing the idea. In fact, the current administration is dismantling the somewhat light-handed regulation imposed by the previous admin.
It's quite eye opening the way that this article paints Facebook as a clueless giant, unknowingly enabling revolutions, swinging elections and upending societies.
The role of media -- the press, cheap paper, high-speed printing, widespread literacy, pamphlets, telegraph, radio, cinema, audio tape, public address systems, mass media, advertising, television, cable, talk radio, BBS systems, Usenet, the Internet, the WWW (versions 1.0, 2.0, and mobile), etc., in political and social matters is absolutely profound. And often quite unexpected at least by the initial creators.
Elizabeth Eisenstein's The Printing Press as an Agent of Change is one introduction to this, though there are many others. I'm going through Robert W. McChesney's work presently.
It's a depressing field if you go too deep, I feel, especially if you read the most pessimistic critical theorists in the field. The Work of Art in the Age of Mechanical Reproduction by Benjamin and The Society of the Spectacle by Debord forever changed my vision of society. "My thoughts have been replaced by moving images..."
I am still unconvinced Facebook played any role in the outcome of the 2016 election. It played a role in the form of the discussion, but not in the actual outcome.
Most people vote based on some very personal and everyday issues: things they experience in their lives, not discussions on Facebook, no matter how many times they like them.
> Most people vote based on some very personal and everyday issues. Things they experience in their lives not discussions on facebook [...]
I'd say exactly the opposite. The politically impassioned folks I meet lately (on both sides) are always quicker to talk about a meme than a concrete issue.
The borderline conspiracy theories that seem popular are pretty different than "the garbage needs to be picked up on time" locally important issues that I thought you were talking about.
Even if it's somehow Facebook's fault, it's still the fault of the people who think Facebook is news. Nobody ever said it was, yet people apparently assume it (my social circle never did).
On an earnings call earlier last week, Zuckerberg told investors and reporters "how upset I am that the Russians tried to use our tools to sow mistrust," adding that he was "dead serious" about finding ways to tackle the problem.
I am more concerned with the power he is wielding. He doesn't seem to think things through from other perspectives. The fact that he dismissed the possibility of socially hacking Facebook to influence others makes me think he is surrounded by yes-men who only whistle to the tune he likes. Unfortunately, I have seen that too many times in startups.
If it weren't Facebook, it would be something else. The bigger issue here is that the completely-decentralized or completely-centralized civil society of the earth that the internet creates is incompatible with the way the world has been run for the past two centuries.
I'd rather read bullshit on FB than have some kind of free-speech censorship. It's like bombing countries for world peace: there is a mismatch in doing that.
That's what keeps Zuck awake, I'm sure (sarcasm.) Buy ads if you want to get your point across. The fact that "everyone" uses FB is a badge of honor, a goal everyone tries to reach.