sebow's comments | Hacker News

"Climate change deniers"

Like Holocaust deniers, like Nazis, etc.? More and more buzzwords and labels are being used broadly and freely, diluting their respective meanings.

Our parents, grandparents, etc. did not teach us good virtues "by shaming" (i.e. by telling us we're X/Y/Z buzzwords), but either through example (of what not to do) or through rationality (explanations until comprehension). It's no wonder we have rising "X/Y/Z" sentiments when, ironically, the people who seemingly advocate against "these bad boys" resort to just labeling and categorizing individuals instead of putting in the effort to educate, explain, or reference. Low effort means low results, and virtuous traits are definitely not gained through complacency.

You solve things with dialogue (not monologue). And if the people who use these buzzwords don't like dialogue "because some guy proved it" [it's actually irrelevant whether the referenced fact is true or not] and dismiss discourse shamelessly, then they're doing more damage to their own narrative. Nobody likes being told what to do (this includes what to know/believe/etc.), and facts matter only as much as they're understood.


> Our parents, grand-parents, etc. did not teach us good virtues "by shaming"...

Oh, they absolutely did.

https://en.wikipedia.org/wiki/The_Scarlet_Letter was written in 1850.


I expect this "red meat inherently bad" paper to be rebutted faster than the previous one, which lasted quite a while. Perhaps even retracted, if there are still people with integrity in academia (which I'm sure exist, but as a tiny minority when we're talking about industries of mass consumption: tobacco, food, pharma, etc.).


The framing makes it sound like it's a "bug" or something. From my understanding it's not, because it's hardly a reliable reasoning tool in the first place, whether working from statements or from data. Unless we come up with or advance a better architecture, this kind of "panic porn" is useless, not to mention it reeks of a hit piece. Just verify everything and stop with the blind trust.


The most interesting snippet from the article:

> Early Friday, Musk said that xAI would release its first AI system — presumably Grok — to a “select group” on Saturday, November 4. But in a follow-up tweet tonight, Musk said all subscribers to X’s recently launched Premium Plus plan, which costs $16 per month for ad-free access to X, will get access to Grok “once it’s out of early beta.”

The lower tiers getting in on the deal as well sounded too good to be true. Still, at $16/month, if Grok is anything like GPT-4 it's worth it, imo. What I'm going to be interested in is Musk's "promise" that it will not be censored/lobotomized like ChatGPT was (and is). I'm not sure what technology they're using such that Grok uses realtime X data (if anyone has ideas, feel free to share), though I'm assuming it's something like Bard (which from my experience does a similar thing) and the knowledge isn't "trained" into the model per se.
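If I had to guess, it's plain retrieval augmentation: fetch fresh posts at query time and put them in the prompt, rather than baking them into the weights. A toy sketch of the idea (every function name below is made up for illustration, not any real X/xAI API):

    # Toy retrieval-augmented generation: pull fresh posts at query time
    # and prepend them to the prompt, instead of retraining the model.
    # Both helpers are hypothetical stand-ins, not a real API.

    def search_recent_posts(query: str, limit: int = 5) -> list[str]:
        # Hypothetical: a real system would hit a live index of recent posts.
        return [f"(recent post {i} matching {query!r})" for i in range(limit)]

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in for the underlying language model call.
        return "(answer grounded in the retrieved posts)"

    def answer_with_realtime_context(question: str) -> str:
        posts = search_recent_posts(question)
        context = "\n".join(f"- {p}" for p in posts)
        prompt = (
            "Answer using these recent posts as context:\n"
            f"{context}\n\nQuestion: {question}\nAnswer:"
        )
        return llm_complete(prompt)

That would explain why the knowledge stays current without retraining: the model only ever sees retrieved snippets at inference time.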


Wasn't aware tooling for whisper came out. Whishper looks neat, definitely better than what I'm using now (whisper locally/on Colab and then editing with SubtitleEdit/gaupol). Thanks
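For anyone else on the DIY route, the local part is only a few lines with the openai-whisper Python package (model size and file names below are just examples):

    # Minimal local transcription to .srt with the openai-whisper package.
    import whisper

    def srt_time(t: float) -> str:
        # Format seconds as an SRT timestamp: HH:MM:SS,mmm
        h, rem = divmod(int(t * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    model = whisper.load_model("small")      # pick a size your GPU/CPU can handle
    result = model.transcribe("input.mp3")   # returns text plus timestamped segments

    with open("input.srt", "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
            f.write(seg["text"].strip() + "\n\n")

The fine-grained cleanup (line splits, timing nudges) is still easier in SubtitleEdit/gaupol afterwards.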


I feel the opposite is true. You'd be right about this if we were to reduce the internet to the Arab world/Middle Eastern world/etc. But the internet is de facto a global medium of discussion (for all intents and purposes, even though it's still somewhat segmented). This means it's more likely that people outside of this conflict, with no ties and no partisanship, can have a 'more objective' look at this. This comes with a drawback, however: it's also easier to influence the "entire audience" through the power of peer pressure (or outright propaganda/agitprop).


Yeah agreed, the stakes are immense for those with propaganda goals.

To expand... I'm commenting from personal experience. I have no stake, besides an ever-intensifying Weltschmerz every time I turn on the news. But I see friends and acquaintances take super harsh positions: "they got it coming!", "we must fight! We have the right!", "all bystanders must pick a side (ours) or else you are complicit too!", "the other side is inhuman and has no right to exist!"

And so few people I know are reacting in a de-escalatory manner. We collectively gaze into a bottomless pit of despair.


>And so few people I know are reacting in a de-escalatory manner. We collectively gaze into a bottomless pit of despair.

Well said. That's where I'm at. I'm very confused why anyone would be working against a "pause" (is there a synonym that's less controversial?).


I'm assuming they're going to work on ROCm (or maybe a new attempt by AMD?). Anyway, good luck to them; we desperately need some competition to CUDA. AMD's hardware is definitely potent, but you can't catch up to CUDA overnight (the gap between the two platforms' initial releases is ~10 years).
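The good news is that most people won't have to touch ROCm directly. PyTorch's ROCm builds, for instance, expose AMD GPUs through the same "cuda" device API, so typical model code runs unchanged (a minimal sketch, assuming a working ROCm install of PyTorch):

    # On a PyTorch ROCm build, the "cuda" device name transparently targets
    # the AMD GPU, so this exact code runs on both NVIDIA and AMD hardware.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # dispatched to cuBLAS on CUDA, rocBLAS on ROCm
    print(device, y.shape)

So the real race is in the kernels and libraries underneath, which is exactly where the ~10-year head start hurts.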


Who defines what "illegal" and "misinformation" mean? If the EU actually cared about this topic, they would hire a bunch of people to use the already-existing Community Notes feature. Of course, this implies they care about the topic, which is probably irrelevant to them. They just want control of information, because both sides in this war act scummy and inhumane. Of course, if they were to do this they would get challenged by every other user who wants to "fact-check", which rightfully dilutes their appearance as an authority on truth.

It's 2023 and people still don't understand the premise of government censorship. Guess what: deleting misinformation is still censorship. It does not attack the issue itself; it only antagonizes the people who posted said (false or not) information, and 'downgrades' the filter of the people who are actually interested in the truth. Whether the information is false or not is actually irrelevant; it's the process of censorship that's damaging. We've been over this with covid already. You don't even have to be an optimist to notice that, more often than not, the truth eventually comes out. And what's more interesting is that the more open and free a medium of information/discussion is, the >faster< it gets out. This is because, even though humans are impulsive and emotional, we're still rational beings. Fabricating things not rooted in reality is harder, and ultimately impossible, to maintain.


> Who defines what illegal and misinformation mean?

EU regulators.

> The information being false or not is actually irrelevant, it's the process of censorship that's damaging.

> This is because even though humans are impulsive and emotional, we're still rational beings.

uh nope:

"The policy expands Facebook’s rules about what type of false information it will remove, and is largely a response to episodes in Sri Lanka, Myanmar and India in which rumors that spread on Facebook led to real-world attacks on ethnic minorities."

- https://www.nytimes.com/2018/07/18/technology/facebook-to-re...


> "The policy expands Facebook’s rules about what type of false information it will remove, and is largely a response to episodes in Sri Lanka, Myanmar and India in which rumors that spread on Facebook led to real-world attacks on ethnic minorities." >- https://www.nytimes.com/2018/07/18/technology/facebook-to-re...

I'm aware of those frankly tragic episodes. But ask yourself this: is the problem the fact that misinfo was simply posted on FB, or the fact that there wasn't enough counter-balancing information available? I'm not trying to be sarcastic or downplay the situation, but the fact is that a lot of places suffer "from misinfo" because the only info available is misinfo. The human mind is inherently a curious mind. And as I've stated in my first comment, people tend to follow info from trains of thought that make sense (are logical) [yes, even if we're talking about religion, etc.]: the contents of the information matter, how cohesive it is, how it attacks the false claims. I would argue that a big lie does not even need a "big truth" to be debunked; it only needs a well-thought-out question to instill the skepticism required for the lie not to be taken as plain truth.

Closing thoughts: in third-world regions (the ones you've mentioned, Russia, "China" (on the freedom-of-information/liberty front), etc.), the issue is not really that misinformation exists, but that it is often the only information that exists, and there's not enough balancing "truth"/skepticism. You might think I'm silly or that I'm arguing for misinfo (that wouldn't suit me, by the way), but I live in an ex-communist country: I've had some first-hand contact with how the centralized government used to push actual misinformation and how people took it as blind truth. After said government(s) failed, people generally became more skeptical about all kinds of information (a good thing to do). Those people are the generation of my parents, for example, a dying breed, and it should not come as a surprise that every subsequent generation that came after and took all news as "facts" usually became dumber, less rational, and more naive.


The propagation of misinformation is asymmetrical. A single person or group of people will not have the resources to counterbalance misinfo when it's bankrolled by state actors or malicious groups who use it as a weapon. Additionally, it's a lot harder to deconstruct a heavily-propagated lie and to educate people than it is to manufacture outrage (e.g. https://cognitiveresearchjournal.springeropen.com/articles/1...).

What you're describing is an ideal world where everyone thinks and acts rationally and is willing to change their views when new information is presented. That is far from reality. In short, propaganda works for a reason.


Your position is incoherent. You say that the EU itself should hire people to point out misinformation, and then rail against government censorship.

The EU is asking Twitter to define their own moderation policies and enforce them.

So which is it? You want government to do this or private companies?


My position is not incoherent at all, or at least I don't think it is. Let me be more explicit: if the EU deems the problem of misinfo on X to be so important, then it should itself go ahead and "fact-check" said misinfo on X through Community Notes. This is not government censorship, because other people can also get involved in the process. I don't see how these two things contradict at all. Granted, we're not talking about removing content, which realistically should only be done in very few scenarios: illegal content (CP, that sort of stuff). I'm pretty sure that if actual misinfo were posted and debunked through a system like Community Notes, the outcome in the eyes of the public would be better than simply removing said material (which reinforces skepticism and negative attitudes towards the authorities/companies among the skeptics).

> The EU is asking Twitter to define their own moderation policies and enforce them.

The EU wants a lot of things that aren't feasible. This is one of them. Elon has stated plenty of times that the platform should not censor or moderate more speech than necessary. This position turns out to be "harmful" in the eyes of the EU because, like the bureaucrats they are, they need everything under control or placed under scrutiny a priori (by labeling speech by certain criteria). Imagine a new form of content that does not fall under any current regulation/moderation policy. By default it should not be problematic, because it's not "illegal". Thus my point: what the EU deems misinfo/illegal is a slippery slope, and they will never, ever be satisfied. Unless of course we include a whole lot of nothing-speech.

>So which is it? You want government to do this or private companies?

Neither, or both. It depends on whether you think my answer amounts to actual "moderation" (censorship/removal of speech) or to providing context in the form of fact-checking (Community Notes or whatever). If governments want to "fact-check", they should go ahead and do it. If instead they simply demand that the private company (or they themselves, through some intermediaries) remove speech, that's plain censorship. [A little note: it's worse than that; you de facto have an actual fascist collusion between the state and private enterprises.]


I see your point, and I guess it's not incoherent after all.

For myself, I see a grey area of information which, while it may not be illegal, can be considered harmful to people.

If the EU wanted to censor everything they disagree with, they would declare it illegal. That would be overreach in my opinion.

I however don't see it as unreasonable to ask that companies providing a platform take steps to protect their users from some kinds of information and are clear about what steps they will take to do so.

Maybe a slippery slope there, but equally having no moderation carries its own real harms.


I feel like some of the discussion about anonymity here is kind of misplaced. Just because illegal activities can be done under anonymity shouldn't mean anonymity should be banned as well (in order to "prevent illegal activities"). That's one of the worst things that can happen (and it's somewhat happening already), and if I'm not mistaken it could also be interpreted as illegal and unconstitutional in countries/places where there is such a thing as a "right to (>and not to<) associate" (in its various forms).

And I'm sorry for the upcoming little rant, but whoever thinks they're anonymous while using a Meta (or any Big Tech platform, really) product is an idiot, tech-literate or not. Not even places like 4chan have true anonymity, depending on the place & jurisdiction we're talking about [remember the case of the guy making a call to violence (illegal) who got arrested]. The 'traditional' web is not anonymous at all: not only are the underlying protocols inherently not anonymous by design, but add insane surveillance on top and you can eventually crack anything. Even things like Tor and others are not truly anonymous, and the US regime has proved that if they want to find you, they will, assuming they have jurisdiction.

Coming back: I don't quite get why people talk about free speech in this context. Not only is S230 a broken f&ckfest, but we're also talking about a non-US place. What's more hilarious is that even if we were talking about the US, defamation (along with calls to violence & other speech not protected by 1A) is still illegal.


As I already discussed in my own thread, there have to be limits to people's anonymity online, because otherwise you are just allowing bad actors to control the flow of information, and thereby also shift opinions simply by the sheer volume of information they post. This is the classic behaviour of conspiracy theorists, e.g. the "evidence" presented in Pizzagate. It is basically a flood of non-evidence intended to overwhelm and drown meaningful facts and discussion.

Anonymous accounts should not be disallowed entirely, but they should be observed more actively for misbehaviour, including things such as the spreading of mis- and disinformation and manipulative content. Sometimes individual posts do not really spread misinformation, but when you look at the bulk of the content it becomes clear that the account is actually engaging in the active spreading of disinformation. This brings me to a very important point: anonymous accounts should be clearly marked as being anonymous. They should therefore not be allowed a profile picture.

Disinformation can also come in the form of suggestive or questioning material, e.g. sharing a piece of misinformation and writing "interesting?" or "I really hope this is not real?". If such behaviour is consistent, it is usually because that account is used to re-share disinformation; and if the account has nothing else of relevance, e.g. no authentic connections outside of this "conspiracy" network, then obviously it has no authentic purpose on social media.

So while anonymity is important to defend, we also need to identify the bad actors that abuse it. For this there are some behavioural patterns that are easy to identify, and this could, to some extent, probably be automated already.
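To make that concrete, a purely hypothetical sketch of such a heuristic (every field name and threshold here is invented for illustration, not any platform's real API or policy):

    # Hypothetical behavioural flagger -- fields and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class Account:
        is_anonymous: bool
        posts_total: int
        reshares_of_flagged_sources: int
        authentic_connections: int

    def looks_like_disinfo_amplifier(a: Account) -> bool:
        if a.posts_total == 0:
            return False
        reshare_ratio = a.reshares_of_flagged_sources / a.posts_total
        # Bulk pattern described above: mostly re-shared flagged material,
        # anonymous, and no authentic network around the account.
        return a.is_anonymous and reshare_ratio > 0.8 and a.authentic_connections < 3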


Yeah, sure. But in my honest opinion, even if you were to outlaw anonymity you would still have these problems. I would go as far as to say that things would actually be worse, because those bad actors would confuse and mis/disinform people even more.

Over the last 15-20 years the internet has become less and less anonymous, and yet those problems still exist and remain a central issue. While that's mostly a correlation and definitely not a causal factor (because internet adoption was non-existent back then compared to now, among other things), it still raises the (rhetorical) question of why there is such pressure against anonymity (see past and current abuses in this regard by governments/empires/etc.). I'm semi-jokingly talking about a conspiracy here, because I've used both anonymous and 'very verified' platforms, and most of the time the misinformation happens on the latter. This is especially true since the Facebook days, because the platform itself gives off a vibe of credibility (alongside the user/entity posting it).

Trying to combat misinformation this way is, and will remain, a cat & mouse game, because there will always be actual bad actors who try to impersonate/imitate the good ones. Put it like this: you have the same people walking down two streets. On the first, they hear Biden/Trump/Macron/etc. saying a fake thing, spreading misinformation; on the next, a random hobo saying the same thing. Which one has the worse impact? While I'm not sure such studies/experiments have been done, past "anecdata" tells me the influential person successfully fools a higher percentage of those people. You could say "but once exposed, he's recognized as a fraud", and that's entirely true; but then we return to my point about people faking credibility, or grifting the issue by saying unquantifiable things or things that just cannot be entirely fact-checked (without projection or speculation): those actors do more damage because they appear credible.

I fully agree, though, that certain things need a zero-tolerance policy (CP and similar) even when anonymous. And with regards to flagging anonymous users as such: it would be interesting if a social network ran the experiment of having semi- and fully-anonymous modes, because honestly that would be one of the few actual solutions for combating polarization on social media: encouraging more free & honest discussion (even if there's a 90% chance it becomes less civil).


I wrote a little 300-word essay on this, and I was prepared to post the comment until I realized it does not really matter. The situation (or at least my opinion/response) can be explained in a simple phrase: good, 10-20 years of failed policies get you this. Especially everything post-2015. I'm not 'happy' per se about this analysis/realization, but the fact is that Germany and Europe (or at least the big western European powers, wink) reaped what they sowed: detrimental policies NOT willed by the European citizens.

