Ugh, it's never proposals for education, grants for preferred alternative implementations, publicly funded communication platforms, recognition that the status quo is people's preference, encouragement of data transparency (not user transparency... not "who", but "what" wrt internal analytics), enforcement of existing advertisement/fraud/collusion statutes, etc. The first step always has to be more legislation. Very sad.
I'd argue in the scheme of things, it's not really that broken, and definitely not deserving of the type of fix we're in for. It's funny to watch the masses effectively manipulated into thinking there is effective mass manipulation occurring. I wish they'd just leave things alone personally, but since they won't and everyone is convinced this is a dire problem, I can only hope for restraint.
>recognition that the status quo is people's preference...I'd argue in the scheme of things, it's not really that broken
Continuing the status quo means the people in power stay in power. It therefore isn't surprising that the people in power prefer preserving the status quo. It also isn't surprising that the people who believe the system is broken and are most against the status quo are those that belong to historically oppressed groups. If you really think the system isn't broken, I suggest you look through the Twitter mentions of politically active famous women, POC, or LGBTQ members. The harassment they receive on a regular basis wouldn't be allowed in the physical world. Why do we allow it in the digital world?
We have to separate the words from the action of saying those words. The words on Twitter do sometimes include slander and threats which would make them illegal. However, often the words themselves are perfectly legal. That doesn't mean the action of speaking those words is legal. Using your example, imagine someone standing just outside your property and yelling at you 24/7. The words themselves might not be a problem, but that doesn't mean they can't be charged with any number of crimes, from disturbing the peace to harassment. Social media bots can function in a similar way.
> The words on Twitter do sometimes include slander and threats which would make them illegal.
The idea of a nebulous "illegality" is simplistic and wrong, in the US. They might open an individual to civil liability, but they aren't in violation of a criminal code, nor tort law.
> That doesn't mean the action of speaking those words is legal
Speech is always legal, in the US. What you think "legal" means is up for debate.
Note that the cases where speech is not legal are significantly more narrow than most people think they are. The category you link to has only 11 pages, and half of them are court cases where the Supreme Court decided against limiting free speech. (In particular, the famous "shouting fire in a crowded theater" quote is from a 1919 case that was overturned in 1969, and shouting fire in a crowded theater is not, in fact, illegal. Inadvisable, probably, and likely to get you banned from that theater for life, but not illegal.)
Protesters are routinely arrested (in the US) for things like blocking traffic, unlawfully carrying weapons or other safety issues, not because they are voicing disagreement with "the man".
Blocking someone is neither policing nor punishing, it's simply ignoring; actual censorship still falls to the platform itself.
I don't want to make a value judgement by using a word like "behind", but I do think most Americans are simply not aware of how extreme our country's stance on free speech is compared to the rest of the world. Countries like the UK, France, and Germany seem to function just fine with their more restrictive laws on speech.
There were a lot of hate-speech laws when Hitler came to power. He even used the fact that he was prevented from speaking publicly as propaganda. And yes, a lot of Weimar legislation censoring the spoken and written word was extended by the Nazis after they came to power.
People arguing for more moderation and more censorship generally lack perspective when they refer to fascism in the same breath. Free speech is a most effective disinfectant against fascism.
And yes, he was banned from speaking due to hate speech laws.
I really know that. I live in Germany and have been briefed extensively. Our last emperor was the first to introduce legislation against hate-speech. But that was in the 19th century. Of course you do not find references to laws from the Weimar Republic and before in modern legislation.
Cite the laws, cite the history of the dates implemented. Civil remedies did not exist.
I have provided you with actual legally traceable citations to law and their implementations. You cited a propaganda poster.
You are conflating a regime of arbitrary censorship with specific and nuanced legislation that was implemented as a direct result of Nazism. To argue that the basis of Germany's contemporary hate speech laws, including the most recent legislation regarding social media, is not a direct result of Germany's experience with Nazism is delusional.
> there was no equivalent of section 18C of the Racial Discrimination Act in the Weimar Germany. The only laws against hate speech were criminal offences, not civil remedies. Weimar Germany had nothing equivalent to the framework which currently exists under the Racial Discrimination Act within which complaints of racial vilification have, in the vast majority of cases, been successfully conciliated through the Australian Human Rights Commission or resolved by direct negotiations between the parties.
> In Weimar Germany, the absence of civil remedies was made worse by the fact that the relevant criminal offences were honeycombed with immunities for members of the Reichstag, the German Parliament. Nazi members of Parliament became the nominal publishers of single or multiple antisemitic publications. This facade meant that no one could be prosecuted for the hate crimes perpetrated by the publications. The Reichstag could waive immunity for its members, but did so rarely.
§ 130 StGB, in the original version of the 1871 Strafgesetzbuch, read:
"Whoever, in a manner endangering the public peace, publicly incites different classes of the population to acts of violence against one another shall be punished with a fine of up to two hundred thalers or with imprisonment of up to two years."
The article you linked even explained that Nazis were prosecuted. That some of them had parliamentary immunity is another topic. Distinction of criminal and civil law is also secondary.
In 1960 a second paragraph was added which said that human dignity is inviolable. It was a reaction to antisemitic attacks.
But it is pretty factual that Germany had hate-speech laws before the Nazis came to power.
> including the most recent legislation regarding social media
Most legal experts think this law is unconstitutional, and it was responsible for banning satire on the first day it took effect. Future rulings will decide its fate. Not giving you a quote on that; let's just wait.
Section 130 was introduced in 1946 with the new legislation. Beyond that, you missed the part where they discuss the lack of civil recourse / remedies for hate speech prior to the war.
> Most legal experts think this law is unconstitutional, and it was responsible for banning satire on the first day it took effect. Future rulings will decide its fate. Not giving you a quote on that; let's just wait.
Let's skip your "most" and at least agree that the issue is finding the fuzzy line between proper discourse and hate?
Whether you agree or not, the existing edicts in Weimar Germany were ineffective and neutered, in addition to being undermined or ignored by the ruling party, which essentially calls into question the status of the rules as "laws" at all. The laws changed quite drastically after the war, so much so that the new legal construct was entirely different, including civil remedies.
Look to Germany. I said "behind" because it requires a certain national hubris to not acknowledge the horrific turn of events that came out of a country, and in turn, acknowledge what they have learned from it.
So you are suggesting that a country that birthed one of the worst fascist regimes, and that implemented strong hate speech laws after such, is behind the USA?
Yes, I'm stating that Germany is behind the USA when it comes to protecting individual freedoms. Censorship and hate speech are an affront to individual liberty and freedom. In fact if you want, I'll say they're not only behind but driving in the wrong direction.
They were NOT just implemented after the fascist regime. They were already in place. Ironically, these laws were used against Hitler to prevent him from speaking publicly, and he was very adept at exploiting the fact that he was barred from speaking freely.
"Einer allein von 2000 Million Menschen der Erde darf in Deutschland nicht reden" => "He alone of 2,000 million people on earth is not allowed to speak in Germany." - Nazi propaganda.
We have had hate-speech laws since the days of the emperor. Freedom of opinion is one of the most fundamental rights in modern Germany.
Have you researched the history of sections 130 and 131? It doesn't appear you have any awareness of the history of the laws, nor of what constitutes a "hate" law. The basis for sections 130 and 131 was laid in 1946.
> On May 23, 1949, the Basic Law of the Federal Republic of Germany was promulgated to serve as the cornerstone of a democratic state designed to protect individual liberty. Accordingly, the Grundgesetz, or Basic Law, is patterned upon the tradition of Western constitutionalism and is a direct historical reaction to the disastrous Weimar Republic and Third Reich. The Basic Law regulates two types of rights and relations. The first type, the Basic Laws, pertain to the rights of individuals and the state. The Basic Laws are characterized by many of the classic human rights found in the U.S. Bill of Rights. The second type, the Rights to a Free Democratic Order, concern the rights of governmental organs, the organization of the state, and the relationships between the various state organs.
The status quo today is diversity programs pushed by top corporations. "Whiteness Studies" courses aiming to "deconstruct White people" in top universities. Generous affirmative action. Language policing by internet hate mobs (who primarily police the language of people who are not PoC). And a popular culture that celebrates LGBTQ people and PoC pretty much non-stop year around.
People have been politically mobilized by the idea of "change" since the beginning of democracy. But that's just because the idea of "change" has a certain energy to it that is important for campaigning. Most people don't like the idea of supporting the existing status quo because it seems ... low energy.
So anyway. You'll find a lot of professional activists working at big tech companies (a lot of companies now have 'racial equity' orgs which essentially exist to do political activism), having their concerns heard by people in power, and all the while talking about smashing the status quo. But they aren't really trying to smash the status quo. They are just taking the existing status quo step by step to its logical conclusion.
The people trying to smash the status quo are usually the ones having laws passed against them. Who don't have corporate leaders pandering to them. Who don't get celebrated by the media.
It's the anime avatar having people on Twitter, who constantly break taboos with their speech, who are against the status quo and that's why the current power structure wants to shut them down.
(I'm not saying you are wrong for supporting the existing power structure, but it _is_ wrong to pretend that the current power structure isn't pro-LGBTQ and PoC because it is emphatically so.)
To be fair, the only power people in Congress have is to pass legislation, so people in Congress will probably propose legislation if people with the power to do the other things you mention don't act to change things. The status quo you favor is most definitely _not_ what the vast majority prefers. And while it's unlikely that Congress will take the right tack here, Facebook and Twitter are at best willfully negligent actors. Their continued unwillingness to sincerely address the problems their platforms have created is shameful at this point. They have tremendous power and are using it poorly and for the wrong purposes. No one is in a better position to change the storyline than they are at this moment.
Well, to most of them, sure... I guess I wasn't clear. I meant legislation as in regulation and laws, i.e. negative/pessimistic approaches that add more rules and punishments, as opposed to positive/optimistic approaches that don't impose more restrictions. I personally don't want to see government involvement at all (i.e. any legislation), but I am just dismayed at which legislation we're likely to get, since in many people's eyes we're sadly past the point of debating whether there should be any at all.
They will get it wrong till they get it right. The law isn't too different from software. It just deals with a lot more ambiguity. And people are conditioned to think things are black and white.
Social media and mainstream media will get three sets of regulation wrt privacy, individual manipulation, and group manipulation. It doesn't matter which country you pick; the free-for-all that currently exists is not being tolerated.
If it makes you feel any better, nobody really gives a shit about these stupid regulations and nobody is going to stop folks from saying whatever they want here, or anywhere else if here becomes impractical. My personal website (not updated since forever) is my username dot com. And if ICANN turns against free speech there’s always pirate radio, every citizen who even considers “guns to protect us from the guvmint” should get trained as an amateur radio operator first.
(By the way, even as a supporter of the Second Amendment it is remarkable that in America one needs a license to operate a radio but not a rifle.)
Of course they won’t propose those other things (which would require “more legislation” too fwiw).
This move reinforces the location of the goal posts that define what is “acceptable discourse.” Along with supporting unfettered corporate control of the means of production. The economy must run on big business and nothing else, amirite?
This leak looks like a trial balloon. I'm fascinated by the notion that requiring disclosure of people's physical location, as well as demanding that they prove their identity, would be conducive to an online culture of free and healthy expression.
These proposals seem to be aimed at intimidation and breaching the privacy of each and everyone, rather than at the surveillance-like operations and propaganda-enabling structures of the Big Ad/Social Media companies.
> I'm fascinated by the notion that requiring disclosure of people's physical location, as well as demanding that they prove their identity, would be conducive to an online culture of free and healthy expression.
"Fascinated" isn't the word I would choose to describe it...
So, my local paper had a comment section that was dominated by the typical hysterical, unhinged, anonymous arguing between far right and far left partisans that we see in a lot of online discourse.
They changed to a "real name" policy and now the comments, while much reduced, are also much more thoughtful and tend to stay on-topic.
> They changed to a "real name" policy and now the comments, while much reduced, are also much more thoughtful and tend to stay on-topic.
Real name policies eliminate specific categories of speech.
One of those categories is trolling, which naturally makes everybody very happy. The problem is that some of the other categories are really important.
If you require real names then people with minority views are afraid to present them, even when they can make a major contribution. People won't disclose relevant inside information, including malfeasance, because they face being fired or abused until they quit. People won't say anything against the interests of anyone in a position of power over them.
It's basically a mechanism that gets rid of the Nazis and SJWs by getting rid of anyone critical of the government or existing power structures. The resulting debate is unquestionably a lot more polite, but politeness is not the only consideration.
I'm in favor of deanonymizing social media. I don't think it will affect whistleblowing, because so far mainstream social media has not been such a great source of whistleblowing. Anonymous leaks are still best presented by a journalist that's willing to vet the info and put their name on it.
What anonymity does do to social media is muddy the waters to a great degree. Online discourse is now built on bad faith, because of the lack of consequence and reputation.
I've enjoyed anonymity as much as the next person, but I believe it is a net detriment to communication.
> I don't think it will affect whistleblowing, because so far mainstream social media has not been such a great source of whistleblowing.
It's not just about The Pentagon Papers and Wikileaks. It's about people being able to point out banal incompetence and small scale corruption so it can be corrected without being retaliated against by petty members of the city government with egg on their face, even when there are no journalists willing to take the small story.
It's not even just about whistleblowing. When you have to write everything knowing your mother, your boss and every busybody in the PTA will probably read it, people are going to self-censor.
> Anonymous leaks are still best presented by a journalist that's willing to vet the info and put their name on it.
Journalist is not a title of nobility, it's anyone engaged in that specific activity.
To do what you're asking, a source would need to anonymously send the information to different prospective journalists until one has the resources and willingness to vet and publish it, in parallel if it's at all time sensitive. In other words, the source needs the ability to send information anonymously to an arbitrarily large number of people with no formal credentials. Who then themselves need the ability to anonymously pass it on to arbitrarily many others who they think might have more resources or be more willing to formally attach themselves to something with risk of retaliation by powerful forces.
Which is basically a description of anonymous social media.
> What anonymity does do to social media is muddy the waters to a great degree. Online discourse is now built on bad faith, because of the lack of consequence and reputation.
We already have the ability to verify identities. We know that @ggreenwald on Twitter is Glenn Greenwald of The Intercept because it says so on the official web page of The Intercept.
Nothing stops anyone from only reading things written by people with verified identities. If you don't like anonymous or pseudonymous speech then why are you listening to it? Instead what you're proposing is to prohibit everyone else from hearing it. Which is never something anyone should get to decide for everyone else.
Okay, but what about political dissidents, people in abusive relationships, people in backwards areas who aren't heterosexual, just play shy people, and many others?
I don't have a clear cut answer myself, because on the other hand, anonymity opens the floodgates to so much manipulation that also ends up hurting real people, and that will only get worse as technology gets refined in those directions. Those that want to control and manipulate won't give up easily either way, there's totalitarian "victory" possible in either direction:
Tie everything anywhere to a real name, and people will ultimately police their own thoughts, because wanting to say things you can't say sucks, so they'll stop wanting that, the one thing they have control over. Keeping foggy and dynamic what is and what isn't allowed to say will make people shrink to the smallest possible attack surface that still allows basic physical survival.
Go the other route and make everything accessible to (super advanced) bots, make it impossible to confirm anyone's identity or follow any money trails, and with enough of that people can be manipulated by an overwhelming yet fabricated social consensus, and herded into bubbles.
Of course, the worst of both worlds is also possible.
Anyway, anonymity online gets a very one-sided rap. Some of my fondest "internet memories" were exchanges with total strangers. Maybe I just remember those differently (I don't really think of online interactions with real-life friends as online interactions), because they always came out of the blue, or because they seemed to mean more, since we both knew we'd never meet again and were being open or helpful just for the sake of it, with the reward being purely personal gain (even joy is gain), not social capital.
> Why do we travel? Among other things, to meet people who don't think they know us once and for all; so we may experience once more what is possible for us in this life.
I think policing your own thoughts and communication in a social setting is an essential element to human discourse, and the lack of self policing is exactly why online discourse is substantially lower quality and is based on bad faith.
There's a difference between self-control and a chilling effect on speech, though. I see no problem with someone being mindful about what they are saying and how they are saying it. I do see a problem with someone self-censoring a discussion or expression of an idea because of fear, especially fear of the government, or fear of other damage to their lives.
If people are afraid to express ideas simply because those ideas aren't popular (even if they aren't very controversial) I think the quality of discourse is lowered.
When a not-out member of the LGBTQ community can't enter a discussion or express something because doing so would tie that to their real name, having to self-censor because of a lack of anonymity becomes a problem.
I don't know that there is a perfect solution, either. Having full anonymity introduces other problems.
Although that is conceptually sound, how far can we go when we suggest the Internet invented dissent? The biggest social changes in modern times happened before the Internet.
I think we have to be careful about making companies liable for everything their users upload. While companies like Google and Facebook have the computational capacity to scan everything that they consume, what about startups?
It shouldn't mean companies aren't liable for failing to remove content deemed illegal, but having it be illegal for any single thing to slip between the cracks seems harsh.
The Congressional hearings involving Mark Zuckerberg showed that Congress is not equipped to deal with the problems social media has brought to society. I am highly skeptical of anything the Senate convinces itself is the right way to deal with social media; it seems like an attempt to set the stage to grab more power and control than anything else.
Marketplaces don't self-regulate. Providing a level playing field, such as explicitly detailing a fiduciary duty to keep a subscriber's information confidential, is the role of government.
After returning to the United States as a citizen, I was detained briefly for visiting a trifecta of Middle Eastern countries. I'd informed them I was filming a cooking show for a broadcast network. I was asked for my social media accounts and passwords, and after confessing that I did not maintain any social media accounts I was kept for another 25 minutes answering the same question from three different people.
Ah yes - empower politicians with regulatory powers, and expect them to not use it to their advantage while in power. Lest we forget, everything from gerrymandering to supposedly non-partisan agencies like the FCC have taken giant craps on the general public, and sided with the party in power.
When content got posted by people on their own servers, and you just linked to other people's web pages instead of commenting on them, it was more of a public square. When it's a social media site, it's easier for everyone non-technical to use, but it's now run by a private company.
At this point it seems clear that individual companies have disincentives that prevent them from acting against these issues. Most people in the tech community are generally opposed to the government getting involved, since the government is often slow to understand and react to technological changes. I would generally agree with them. However, we have to stop sticking our heads in the sand and pretending that things that are crimes in the analog world aren't crimes in the digital world (this includes campaign finance law). The only real option seems to be for the industry to come together and self-regulate. Are any of the big tech companies actually working to make that happen? If not, it is only a matter of time before the government gets involved.
They will only have a disincentive to act when people are willing to leave toxic platforms, and people will only leave toxic platforms when there are viable alternatives.
I would agree with labeling automated bots but I think stopping anonymous accounts is going too far. What is wrong with anonymity? You can label the account as being intentionally anonymous but I don't think we need to go past that.
If people failed to see past the trolls this past election cycle, hopefully it will be a lesson learned. Most people here probably learned to spot/ignore trolls when they were 14 but a lot of voters are still new arrivals to the digital landscape.
If we need our government to confirm the information we use to make a decision, is it still a democracy? I still think that if you want a population that is responsible, you have to allow them the responsibility of figuring out what is true and what is not on their own.
I think the faster we make companies liable for the content on their servers the better. Why? Because I want distributed/federated data to become an actual priority to these companies. I want it to hit their bottom line when they get sued for content uploaded to their servers so they consider keeping the data owned by the user who generated it.
Surely you realize that making data a liability reduces the ability for you and me to host it, not increases it. I understand you are against centralization of data, but you can't magically commoditize and distribute it via punishments. You'll only strengthen those that can take a hit. While it seems mathematically logical to say "well, if we make it less financially beneficial to have data, they won't have it", what happens in practice is "well, if we make it less financially beneficial to have data, nobody will have it". There are alternatives to decentralization that have less effect on smaller orgs.
Yeah I don't want to be hosting data I have not vetted either. I think being aware of the content any device you own is serving on the net is a good thing, even if it reduces the amount of people willing to do it for others.
TBQH if small companies cannot host user data that is good too. It should be very difficult for companies of any size to house data they are not willing to defend legally.
> TBQH if small companies cannot host user data that is good too. It should be very difficult for companies of any size to house data they are not willing to defend legally.
We just disagree here. We know who can and who can't defend things legally. On a personal level, I don't like my options further reduced for who I can willingly give my data to. Legislate and determine what constitutes private data, data misuse, and/or surreptitious gathering if you want, but non-private information freely and consentually given should want to be freely proliferated.
My argument is more that forcing tech companies to invest in distributed data ownership tech would push that forward in a way that would enable more competition of services because your data would be portable, not locked into vendors like it is now.
In the short term it might result in less variety, but only until the tech proliferated and reached maturity. Then services could simply spin up on top of the data that you already host yourself. So you would have to take on the responsibility of finding a good way to host the data you want available (probably from your phone), but after that you would be able to use any service willing to operate on your data, but your data would be in public standards, not proprietary storage inside a facebook/google/amazon data center.
Also, I am quite aware of how disruptive this ideal would be to most of our current large internet companies. I don't think it will realistically change any time soon, and am sure it would have side effects which would have to be balanced. But the direction we are headed now seems much worse to me than the alternatives.
I’ve been working on that. It’s a hard problem, but eMule works quite well. I propose something based on it: when you start the program, you advertise the sha256’s of all the files you share. Whenever you need a resource, you send a request to the DHT for its sha256. Anyone who has the file will opportunistically try to send it to you.
Since it’s a dht, there’s nothing to take down. And since it operates on sha256s of data, it’s implicitly secure. And it uses tor for the rendezvous, so it’s not possible to track who is requesting what, except by unique ID (which can just be a bitcoin wallet address you control).
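The "implicitly secure" part can be sketched in a few lines. This is a minimal illustration with hypothetical names (no real DHT or Tor transport, just the verification logic): because you request content by its sha256, any peer's response can be checked against the digest you asked for, so forged data is detectable and discarded.

```python
import hashlib


def digest_of(data):
    """Content address: the sha256 hex digest of the bytes."""
    return hashlib.sha256(data).hexdigest()


def advertise(files):
    """Announce shared files keyed by their sha256, as the DHT would see them."""
    return {digest_of(data): data for data in files}


def fetch(digest, peers):
    """Ask peers in turn; accept only bytes that hash to the requested digest."""
    for peer in peers:
        data = peer(digest)  # a peer here is a callable returning bytes or None
        if data is not None and digest_of(data) == digest:
            return data  # verified: the content matches its address
    return None  # no peer served valid data
```

Under this scheme a peer spamming bad data is simply ignored, since its responses fail the hash check; the cost of that abuse is wasted bandwidth and lookup time, not corrupted results.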
The hard part is, what do you do about abuse? What if someone spams the network with bots that try to fulfill every request with bad data?
If anyone has research refs in this direction, I’d be grateful to read them. I’d also like to avoid a blockchain if possible, since it seems unnecessary for simple federated distributed data.
I think a cool layer on top of this would be to create a P2P network where you can 'friend' certain peers and produce a reddit-like aggregation of new content from various sources.
- Friend another user by adding their key and IP to a trust group.
- When your client boots up, it connects to friend nodes and downloads their recent DHT.
- DHTs from all friends are summed to sort the hashes by descending frequency, perhaps with a time decay factor.
- Display reddit-like UI of content to User: see which files are recently most popular among the set of all your friends.
Possible improvements:
- User could customize time vs vote weighting, script custom sort rules
- Give certain peers greater vote weighting ('best friend' vs 'acquaintance')
- Allow peers to 'tag' hashes to create subreddit like collections of things.
- Allow attachment of metadata to hashes - title, description, etc - figuring out how to handle discrepancies between sources here could be tricky.
- Make client automatically re-host content in your DHT: hosting IS the upvote
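The aggregation step above can be sketched roughly as follows. Assumed (hypothetical) structure: each friend's table is a list of (content hash, hosted timestamp) entries, and the half-life and per-friend weights are made-up parameters:

```python
import time


def rank_content(friend_tables, now=None, half_life=86400.0, weights=None):
    """Sum each hash's appearances across friends' tables, decayed by age.

    friend_tables: {friend: [(content_hash, hosted_timestamp), ...]}
    weights: optional per-friend vote weight ('best friend' vs 'acquaintance')
    Returns content hashes sorted hottest-first.
    """
    now = time.time() if now is None else now
    weights = weights or {}
    scores = {}
    for friend, entries in friend_tables.items():
        w = weights.get(friend, 1.0)
        for content_hash, hosted_at in entries:
            # Exponential time decay: an entry loses half its weight every half_life.
            decay = 0.5 ** ((now - hosted_at) / half_life)
            scores[content_hash] = scores.get(content_hash, 0.0) + w * decay
    return sorted(scores, key=scores.get, reverse=True)
```

Note that "hosting is the upvote" falls out naturally here: an entry only exists in a friend's table because that friend chose to re-host the hash.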
You and I must have like minds. I have even created an anon DHT PoC [0], conceptualized how the messaging might work [1], made a lib to use Tor easier since it's the best anon nat buster these days [2], began toying with a superset of reddit/slack/forums/etc (some grpc files at [3] and some impl in that same repo), and a bunch of other small things in order to arrive at this final destination.
Check out ethereum swarm [1]. Optionally private (hosting and accessing), decentralized, p2p data hosting protocol with baked in payments if you want to pay the network to host your data for you.
It seemed too simple of an idea not to already be done. Hopefully it conceals your IP.
Arg. Nope. It does not. It leaves it up to the client to decide how best to request the content, and by default that almost certainly guarantees no privacy.
It was not intended to provide privacy; it's a distributed data store. If you want privacy you probably just want a private swarm, which is perfectly fine to set up. Just implement a client with a whitelist or another mechanism for the privacy you want.
If there is no privacy, why not host the data on s3 or Dropbox? Data storage is dirt cheap. The privacy concern seems like the prime reason to build such a service.
It's funny that their solution is removing privacy as opposed to combating gathering of data and sales of hypertargeted ads.
Here's the problem with the US legal system: this regulation (if passed) will likely fail to solve the problem, but it will stay on the books and won't be removed until, idk... ever?
Does anyone truly believe that regulation is going to fix that in a way that leaves people the ability to freely express their thoughts -- something that is and should be a human right?
I think the real issue here is that social media has moved us to a place where people's real thoughts are visible to all, and the problem we need to solve is how society will work now that the genie is out of the bottle. Social media isn't the cause, it's helped us see the world more clearly.
Solving that problem is going to mean empowering people with the knowledge and tools to assess the quality of information they read and make decisions backed by critical thinking. This is a problem that needs to be solved by society, and expecting a governing body to prescribe to us how our society is meant to think is no solution.
The anonymous factor does allow people to voice their own thoughts without fear. It also allows people to lie without having their physical reputation damaged. They create new fake accounts if old ones become known to spread information. People take so much information that they read as gospel. Selective reporting and the use of loose "anonymous" sources by media outlets (who have become so partisan and biased in one direction or the other) have gone on to perpetuate this problem even further.
So, I am picturing the whiplash when a large part of the American left, that has been in favor of some kind of governmental control on companies like Facebook and Google, actually faces the possibility of the current administration being in charge of that regulation.
There's a “large part of the left” that supported legislation under which the executive branch would have discretionary regulatory power over Facebook and Google?
Online discourse has been the wild west since Usenet. This was alright 10 years ago, back when it was mostly a certain kind of person accessing the internet. Now social media more accurately reflects the world's demographics. I've long believed some government-issued, OpenID-like identity platform would go a long way toward enabling a lot of other online activities (like voting), but saying government will do a better job solving problems than corporations is misunderstanding the issue. Before we run off trying to solve problems, we need to figure out what kind of online society we want. I'm not yet convinced anyone has really thought that through.
Watch for the term "weaponized" when applied to speech, as this is the next go-to. By equating speech they don't like to warfare, arms dealers, and more, they can try to suppress it under the connotation of violence, which is key because censorship of direct incitement of violence is permitted. The trick will be to twist speech enough to fall under that exception; think of it as "for the children", but now suppressing any opinion they don't like.
As far as the big tech giants go, Twitter's system was shown to be discriminatory: substituting "white/Caucasian" with any other race or religion will flag the comments. This came to light when the NYT hired a tech writer whose Twitter comments would be considered hateful but were excused because of the target and the person making them.
It is not hard to prey on people's prejudices to take their rights from them. They will gladly accept restrictions if they think it only hurts people they don't like, but may find one day they end up in some other person's or algorithm's crosshairs.
Anything coming from the party without power is dead in the water for now. Maybe it will inspire someone but this is just a waste of a cycle until then.
It is funny to think a capitalist government could regulate anything. If it makes money then it is good. If it doesn't make money then it is bad. If it challenges the current way we make money it is bad, unless it makes more money than the old way, in which case it is good.

The problem with social media is that it sells its user base to the highest bidder. No sort of regulation is going to stop that. Social media is not going to give the government a key to its land. These companies have lobbyists, secret ninja assassins, blackmail, and an endless supply of money to throw at this problem to keep themselves going strong.

What happened when the government tried to regulate sugar? Soda companies paid for studies to prove sugar is good for you and is part of a balanced diet. What will these social media companies do? For starters they will lie, like they currently are, stating the technology cannot keep up with fake news. They know that when users get home they should be enticed to look at their phones as soon as they walk through the door, because the companies will score a minimum of 15 minutes of eyeball time even if a baby is screaming bloody murder at them. Machine learning can automate propaganda for an entire country, but you're telling me it cannot figure out that a $1 domain linking to a story for the first time might be fake news?

The government will "regulate" social media so it looks like it is doing something. The companies will make backdoor deals that benefit them. The American people will cheer with joy, knowing there is no more fake news on their social media sites. And when some Russian hackers get bored, they will buy $10 worth of ad space on Facebook targeting extremists in America, just so they can gamble on the best way to get a moron to wave an assault rifle in front of the White House.
I doubt this would help. Judging by my Facebook feed, Americans are perfectly capable of vicious, polarized discourse without foreign influence. In fact this may backfire, as it would allow the hateful in society more of a right to preach on these platforms, since no one would be able to accuse them of being foreign trolls. It's probable that this level of discourse is here to stay, and that the previous elevated, comparably "civilized" discourse was a result of broadcast media and its top-down structure.
It is probably a job for the next generation rather than this one; this generation wasn't ready for the boom of communication we are living through. All these social platforms were created for the economic benefit of their creators (FB, Twitter) and not for any sort of societal benefit. So yes, I think the solution must start in the early school years, where children should be taught to doubt all sources of information, internet and peers alike, to investigate better, and to know that extraordinary claims need extraordinary evidence. They should also learn to discuss important topics with people holding completely opposite views and to stay calm while doing so. A world where a black kid can discuss racism with a nazi kid and neither of them loses their head over it (just an extreme, unlikely example to state my point).
> "this generation wasn't ready for this boom of communication we are living in"
I definitely think we will look back on my generation (Generation Y) as the guinea pigs for this whole "connected social world" thing. So far it's been too much, too fast, and we have done an abysmal job handling it.
First, stop blaming the Russians, and then tell your parties to work for the country and not just for themselves. I think it would be best if people stopped calling themselves "Republicans" or "Democrats" and stopped identifying with a party.
I wish there were a way to dispel the tricks the media and politicians use to divide us such as religion and abortion. There are countless other issues that affect us, but many voters only care about 1 or 2 issues. The "polarizing issues" are then just proxies that won't have a material effect on most people anyway. But people are so susceptible to emotion, and I doubt there's a way around this.
> I see no reason for people not to identify with the major party that best represents them,
But they have to hold their party accountable. I see it so many times: people complain about something when the other party is in power, and as soon as their party has the power it's perfectly fine. Sometimes this reminds me of "doublethink" in 1984.
It would be more accurate to say 'the Russian government and affiliated parties' (or just 'Russian government', if you're into the whole brevity thing).
Most Russians don't care what happens with American politics. We don't blame 'the Arabs' for 9/11, right?
That's pretty pedantic. With "Russians" it's pretty clear in this context that it means the government. When somebody in another country complains about Americans it's most likely about the government and not all people.
If 9/11 had been planned by a government we would also say the "Iraqis" and not the "Iraqi government".
> It would be more accurate to say 'the Russian government and affiliated parties' (or just 'Russian government', if you're into the whole brevity thing).
I was taking, based on what seemed to me to be exquisitely clear contextual clues, “the Russians” in the post I was responding to as “the Russians to whom blame has widely been attached”, who, yes, are the Russian government and certain regime-linked business figures.
“Stop blaming the Russians”, it seems to me, makes clear that the focus is those Russians who are currently being blamed, which is not the Russian population in general.
> “Stop blaming the Russians”, it seems to me, makes clear that the focus is those Russians who are currently being blamed
Yes, if you stop and think about it, this is the intended meaning. However, people constantly hear 'The Russians' repeated over and over...
#notallrussians is an argument (like #notallmen) that contributes nothing to anyone because nobody has at any point said "literally all Russians". You're bringing nothing but noise.
> Is there something that can be done about our polarized discourse without trampling over our civil rights?
Actively and without consequence? No. Simply maintain self control and realize that the more dialog is open, instant, and global, the more it is subject to the loudest and most manipulative. Not necessarily ideal, but better than alternatives.
Having journalistic standards that are enforced by law.
It's a can of worms, especially in a country as suspicious of its government as America, but in my view one of the biggest changes in people's view of the media (and of "truth" in general) is the steady erosion of journalistic standards: putting infotainment shows and opinion pieces on the same level of journalistic rigour as well written, high-standard news stories.
News corps will continue to chase money, and money is tied to viewership, which gains from people being engaged through polarising, emotion-driven content.
If there are no standards for this kind of thing, people will gravitate towards the stuff that gives them that hit of emotion, and outlets will forgo and bend facts to present a story in a way that keeps their viewers, at the expense of fair representation of the facts and freedom from bias.
In the context of social media, it's tricky, because robo websites and opinion pieces run rife through them, with pieces debunking them and/or apologising for incorrect facts not propagating as widely. So in my view there needs to be some level of targeted/intelligent regulation of social media as a news source. (Content aggregators perhaps taking some level of responsibility for curating what is being shared? Admittedly difficult to do this without getting Orwellian...)
But that's just my opinion, interested to hear if people think it's a load of BS or not.
It's not clear to me what the role of Congress should be, perhaps the appropriation of funding to explore/build technologies and legal structures that are more distributed and less subject to concentrations of power.
Paul Frazee's 2018 JSConf EU talk on "Formalizing User Rights on the Web" makes an important observation: technology doesn't just interact with civics, it actually drives civics; it defines what the civic structure of a community is going to be (https://www.youtube.com/watch?v=x-ffpAkviM0&t=920).
I think the proper "role" of Congress is to send a message to Silicon Valley: fix the mess you've created, or we're going to fix it for you.
That there's a serious problem has been obvious for quite some time now, and Silicon Valley has hardly even tried to do anything about it. More likely, they've tended to tune their algorithms to best profit from it.
I'm a small government guy, but if they continue with this disingenuous "we had no idea!" song and dance for much longer, I'd fully support bringing a very big hammer down on the bigger players.
> Judging by my Facebook feed Americans are perfectly capable of vicious, polarized discourse without foreign influence.
It's gotten quite bad in the U.S. Common political discourse has devolved into a back and forth of insults, hyperbolic accusations, and sometimes just plain insanity. I agree with you that this will probably be something that will last a while.
If only we could get people to realize that believing something to be true merely because it reinforces a pre-conceived idea or because it's plausibly true is not a path toward finding the truth. How do you have a discussion with someone who believes the political equivalent of a flat Earth?
This problem can cut both ways. I.e., if one dismisses the other person as the political equivalent of a "flat earther," they're not questioning their own assumptions or seeking to understand why the other person could feel something so strongly.
Having lived in both deeply red and deeply blue states in the US, I think most people aren't as crazy or as far apart as they think they are. But many of the people who have lived their whole life in one camp have been trained to think that the other side is literally crazy, and that's a convenient excuse to be afraid of them and not bother seeking to understand them.
> Intractable conflicts feed upon themselves. The more we try to stop the conflict, the worse it gets. These feuds “seem to have a power of their own that is inexplicable and total, driving people and groups to act in ways that go against their best interests and sow the seeds of their ruin,” Coleman writes. “We often think we understand these conflicts and can choose how to react to them, that we have options. We are usually mistaken, however.”
> Once we get drawn in, the conflict takes control. Complexity collapses, and the us-versus-them narrative sucks the oxygen from the room. “Over time, people grow increasingly certain of the obvious rightness of their views and increasingly baffled by what seems like unreasonable, malicious, extreme or crazy beliefs and actions of others,” according to training literature from Resetting the Table, an organization that helps people talk across profound differences in the Middle East and the U.S.
If you're looking for it, you'll see this behavior everywhere online, including HN, and it's getting worse.
Some ideas are so idiotic - like flat Earth belief - that one should not waste their time engaging with said ideas. There are people who deny that Sandy Hook happened. I see no need to understand such fools.
> How do you have a discussion with someone who believes the political equivalent of a flat Earth?
How can you decide if someone is the political equivalent of a flat Earther unless you actually attempt to have a discussion with them? All I really see online is people trying to collect the most virtual points (retweets and likes) within their own echo chamber rather than actually discuss things with people.
I don't need to have a discussion with the person sharing a photo of Nancy Pelosi "quoting" her as saying that a border will violate the rights of millions of illegal immigrants in order to know that this person's politics aren't connected to reality.
> Common political discourse has devolved into a back and forth of insults, hyperbolic accusations, and sometimes just plain insanity.
If I may: this example, and my own experience, are consistent with the newspapers from the American Civil War, which I read lots of many years ago while my parents dragged me through the American southeastern states and the myriad Civil War national parks and monuments.
further:
> flat earth
Just don't. Let them vote. And relax. And remember you aren't in charge. If you want Texans (for instance) to vote differently, well then... move to Texas, y'all!
I am still hopeful that the hand wringing about social media is overblown. Flat earthers for example I believe are in two camps: the genuine loonies that have always been with us in small numbers and the large number of people who just want to pretend to have whacky beliefs for the entertainment value of holding them (think people who believe in crystal healing and such). As dissatisfying as it is, freaking out about people who hold beliefs for entertainment is counter productive.
We also have to remember that discourse waxes and wanes in tone. Throughout the 1800s, for example, it wasn't uncommon for massive brawls to take place between what were essentially political gangs. Think of the brawls we see today between antifa and right-wing groups. The 70s also saw literally thousands of bombings in NYC by far-left groups. It's probable that, due to political realignment and inevitable shifts in technology, we're going to see this going forward. The most important thing we can all do is keep faith in our fundamental freedoms, like freedom of speech.
It seems to me that the problem now versus 30 years ago is one of scale. It's much easier to sequester oneself from sources of information we don't like for political reasons. It's easier, in terms of time and money, to get masses of people to believe easily disprovable claims. The adage in politics has been, "All politics is local." Is that true anymore? Will it be true in 10 years?
With demographic and economic changes and the power/scale of social media the U.S. is facing, I think the intermediate future does not look good for political discourse.
I'm thinking that a lot of this perceived difference is that taking a political stance is much more natural on social media than previously in real life. Before you would have to care enough to find some place to set your soap box or buy a printing press. The crazies definitely were there but there were also a large segment of the population that would have gladly espoused whacky beliefs but didn't have the motivation and/or resources to do so. Now, proclaiming that the earth is flat takes about 10 minutes of account setup online. I'm just not convinced that it really matters as much as people think it does.
This is true, but it's amplified by the ugliness that you yourself don't realize is being pushed by malicious actors. Hateful, untruthful memes, conspiracy theories, Reddit comments, Twitter posts, and more are being constantly generated by entire buildings full of people who barely know what role they are playing in providing fodder for Infowars, Fox News, and the rabid extremists looking for hateful storylines to retweet, share, and like. All of that filters down to regular innocent people and distorts what seems like normal, what seems fringe, and destroys trust in everything a person sees or reads online. Sure, a lot of this is just people at their worst, and Americans being themselves, but your reluctance to admit that it's partially fueled by organized forces that don't have any particular mission beyond sowing discord and chaos is part of the problem here.
I find it far fetched that all of this is down to a massive conspiracy. Certainly there are actors that do this but they seem to just be amplifying by some small amount preexisting trends.
What about freedom of speech, or freedom of press? The government shall make no law?
I guess the convenient exception made by the FCC for regulating content shown on TV (OMG nudity!! censor this!!) will prove useful as a precedent. Sad.
If you don't have a concept like obscenity -- even one that is as ill-defined as obscenity is after Miller -- you can't ban child pornography. Or the Human Sacrifice Channel.
So? You can't produce either, at least not with real victims, without doing far worse than mere "obscenity". There is absolutely no need to infringe on the freedom of speech by banning the possession or distribution of evidence that real crimes have been committed, which is what these obscenity laws amount to.
I happen to agree with you, but it’s not our opinion that determines the law, it’s the Supreme Court’s. And they have clearly and consistently held that obscenity is not protected by the first amendment, for the lifetime of the Republic.
> but it’s not our opinion that determines the law, it’s the Supreme Court’s
The Supreme Court's authority comes from the Constitution. As such, while the Court can rule that a law is unconstitutional, they have no power to permit something which the Constitution specifically prohibits the government from doing (such as infringing on the freedom of speech). All their consistency on this issue implies is they have been failing to do their job and uphold the freedom of speech as plainly written in the First Amendment for a very long time.
They also held that slavery was fine, until it wasn't. We have major problems in this country and the failures of our legal system are prime among them.
The Constitution was amended to explicitly prohibit Congress from making laws abridging the freedom of speech as well.
What's relevant is the Supreme Court reconsidering a previous decision without any change to the Constitution, but that too has happened before, e.g. Brown v. Board of Education.
We have a comment complaining about the Court reading beyond what’s written, and another complaining about them failing to do so. I’m just pointing out the inconsistency. If you want total free speech based on a plain reading of the first amendment without considering context, then there’s no reasonable way for the Court to find slavery to be unconstitutional prior to the 13th amendment.
I’m not taking a position here, just pointing out the implications of the slavery thing.
Social media companies are self-regulating (see: Alex Jones).
Tech giants have been fusing their role into a form identical to publicly elected government, ultimately becoming indistinguishable from elected government, except they aren't answerable to the general public and can make up rules as they go.
The financial industry created self-regulatory organizations (SROs) to police themselves (in theory) more effectively than the government. They set up rules of ethics and policies and have the power to fine companies and bar individuals. Agencies like the SEC let them take the lead, but will step in to investigate fraud and crimes. The industry may need to mature, but this would probably be the most sensible solution.
The financial industry collapses and needs to be rescued every ~10 years (the S&L crisis in the 80s, Long-Term Capital Management in the 90s, the Great Recession in the 2000s). Last time they crippled the world economy. Massive fraud is commonplace, judging by the fines they pay alone. I don't see them as a model of success!
The financial industry is enormous and diverse. What I was referring to specifically was the US equity markets and FINRA. They have been very stable and reliable through the years. Commercial and mortgage lending is a whole other ball of wax.