That is debatable; Reddit has been conned a lot in the past.
But one error when trying to con the site seems to be giving too much information: many users apparently love snooping around and digging (and are pretty good at it), so giving more information up front makes claims more credible, but it also gives users more threads to pull on, risking unraveling the whole thing.
Furthermore, there's a high level of dependency on the subreddit in which things go "viral": some subreddits are far more credulous than others, and credulity depends on the subject matter as well (you can get both /r/atheism and /r/libertarian to lap up claims unchecked, but it's unlikely the same claim will work on both). This also means hoaxes are far more likely to unravel as they spread outside a given subreddit and get wider exposure in the community.
Also, every once in a while a hoax reaches the front page, usually some highly suspect family situation or someone finding a treasure chest in their basement and asking reddit what to do.
Speaking of Reddit, people on HN often take a jab at it when disciplining others for violating community norms, as in "This isn't Reddit. Please keep on topic."
Not once have I seen the Reddit jab be helpful; I've only seen it come across as superfluous and petty. I like people saying "Please keep on topic," but why jab another community? I don't even use Reddit.
I say this hoping it helps people simply remind others of this community's norms without needlessly jabbing others.
Well, one of HN's aims is to avoid the Eternal September that is very much affecting reddit at the moment. Given that, superfluous jabs at reddit might actually be effective in maintaining a barrier between the communities, or at least in strongly reminding people of the very specific conversational norms which apply here.
Although I started using HN a while ago, I would definitely count myself as someone who came here for interesting stories after going through digg and then reddit (and I'm sure there are a lot of us here). But I'm strongly aware of the particular rules here and do find myself making more of an effort to converse in good faith and be less snarky.
Your comment is beside the point. The issue isn't the quality of the comment that the "This isn't Reddit" comment is replying to; rather, it's the "This isn't Reddit" comment itself. Unless things have changed, you cannot upvote or downvote a post you reply to, or one that replies to you (at least, your vote doesn't count). Replying means you are doing nothing more than calling attention to the comment.
If the comment is not worthy of being on HN, then it's also reasonable to conclude that most comments it sparks will be of equal or lesser value. Read that part again.
Understand that if a comment sparks intelligent discussion, it's generally a good comment. And comments that spark intelligent discussion should be supported.
So, by posting "This isn't Reddit. Please keep on topic." comments, you are either part of the problem or simply wrong. The best solution? Vote accordingly and contribute.
I can't see that your argument engages with anything I've said. I've spelt out a plausible mechanism by which some comments would add value to the community (by preventing deterioration of standards of discourse). If you want to say otherwise, you'll have to take issue with that argument. As it stands, you've not done so.
Regarding this:
> If the comment is not worthy of being on HN, then it's also reasonable to conclude that most comments it sparks will be of equal or lesser value.
I don't see any reason to accept it, but even if it were true, we're talking about a specific subset of responses, and I've presented an argument for why that subset would add value. This is not inconsistent with the general claim, and so your general claim does not do much in the way of response - it is beside the point.
Finally, 'adding value' is more than just 'sparking valuable comments'. Comments can add value without sparking valuable comments by, e.g., reinforcing valuable community norms.
> If you want to say otherwise, you'll have to take issue with that argument.
Meta is death. Some meta discussion can be interesting and lead to useful discussion, but in general meta discussion sucks.
Imagine a content free meme-spew post.
Either it gets downvoted by the community (downvote ability appears after 500, maybe 750, maybe higher karma), and the poster knows what they did wrong.
Or it gets downvoted and the baffled new user asks why, and someone then posts "TINR; but here are our guidelines, so that's why you got the downvotes." I'd suggest that the TINR in this case is useless, and the rest of the post is of dubious value. If a person is unable to grok the style of HN from the FAQs and existing posts without someone having to handhold them, then maybe they need to lurk more. But at least it's welcoming and helps people become productive. Notice that this is a tiny subset of TINR posts.
Imagine the content-free post gets a TINR comment before it gets a downvote. It antagonises other communities, it adds no value to the thread, it makes people think that TINR posts are acceptable, it clutters threads, it encourages meta-discussion, etc.
> I can't see that your argument engages with anything I've said.
The GP remarked that comments that amounted to "This isn't Reddit. Please keep on topic." (TIR) are worthless. You disagreed, and tried to explain why by suggesting that these comments discourage people from commenting.
My assertion is that not only does this not work within the system of HN, but that the TIR comment itself is of equal or less quality to that of the comment it's in reply to.
> I've spelt out a plausible mechanism by which some comments would add value to the community (by preventing deterioration of standards of discourse).
Yes: discouragement through an off-topic comment that contributes nothing to the discussion. By replying, your vote does not count (whether you upvote or downvote), and you also highlight the discussion. In essence, rather than using the existing tools and protocol of HN (voting or flagging, ignoring trolls, and contributing posts that add to the discussion), you're simply adding a TIR post.
You've made this claim (though without specifying why it's superior to the existing methods), and yet ignored the simple fact that a TIR post is just as bad, if not worse, than the post it replies to. At the very least, the post it replies to is usually on topic (topicality is, of course, not the sole factor in whether a comment is worthwhile).
> If you want to say otherwise, you'll have to take issue with that argument. As it stands, you've not done so.
Your inability to understand my argument, or your attempt to simply dismiss it, does not negate it. The arguments I made are clear to any reasonable person. If you are interested in an actual discussion, make an attempt. Otherwise, you just come off as a troll. Judging by your profile (which does matter, considering the nature of this discussion), I wouldn't be surprised.
> I don't see any reason to accept it, but even if it were true, we're talking about a specific subset of responses, and I've presented an argument for why that subset would add value.
You've done so while ignoring the harm that a TIR reply causes. Maybe you aren't aware of the ability to upvote and downvote (I forget the karma threshold for those features), or of flagging. This is highlighted by your final comment:
> Finally, 'adding value' is more than just 'sparking valuable comments'. Comments can add value without sparking valuable comments by, e.g., reinforcing valuable community norms.
Yes. Adding value is more than just sparking valuable commentary or discussion. What you fail to grasp is that your support of TIR is in direct contrast to valued community norms. We have tools (upvotes, downvotes, and flagging) to handle these issues already. A TIR does not enforce these norms, and in fact, flies in the face of what the community wishes to be.
The two guidelines that are important to this discussion are these:
"Please don't submit comments complaining that a submission is inappropriate for the site. If you think something is spam or offtopic, flag it by going to its page and clicking on the "flag" link. (Not all users will see this; there is a karma threshold.) If you flag something, please don't also comment that you did."
And:
"If your account is less than a year old, please don't submit comments saying that HN is turning into Reddit. (It's a common semi-noob illusion.)"
TIR very much falls into both those categories. Submissions are not just news stories, but comments as well. If something is worthy of a TIR comment, that also means you should just flag it.
Yes, public shaming (what TIR amounts to) can be effective in other environments. HN chooses not to work that way. It's what makes the community so mature and effective.
>You've made this claim (though without specifying why it's superior to the existing methods), and yet ignored the simple fact that a TIR post is just as bad, if not worse, than the post it replies to. At the very least, the post it replies to is usually on topic (topicality is, of course, not the sole factor in whether a comment is worthwhile).
I'm not ignoring that fact - it's just that I haven't accepted it. I agree that TIR posts do not derive value from adding valuable on-topic content. But I've suggested a way for them to derive value from another source. Given that, I don't accept the claim that they are just as bad as the posts they reply to. I thought (and still do) that the argument you were offering takes that as a premise. That's the point I made in the last comment, and I took it to be sufficient to show why your argument doesn't go through. I'm very much happy to be corrected if that's not the argument you had in mind.
The point about using existing tools/protocols is a different argument, and I think a good one. However, it doesn't bear on whether TIR posts are in themselves valuable or not - it only bears on the comparative value they have. Evaluation of that argument would require more knowledge than I have of the efficacy of different ways online communities enforce norms. Re the other points - I think there are interesting things to say about them, but I suspect that would take us off the central disagreement. I am however familiar with the other points about comment policy and downvotes etc... - I've used HN far longer than I've had this account. Whilst we're on issues of civility, I would say that suggesting comments come off as trolling, claiming people 'don't grasp' (which can mean 'don't understand'), and judging them on two sentence profiles don’t seem to me very useful. I was slightly brusque in claiming your comment was 'beside the point' - but did so to echo your comment.
I think I should point out that I don't commit to the claim that TIR posts actually are valuable - I just think it's plausible that they are.
> I think I should point out that I don't commit to the claim that TIR posts actually are valuable - I just think it's plausible that they are.
Not in this community, which is, after all, the context we are discussing this in. Given the other tools available, and given what is valued here, TIR posts are no better than the comments they reply to. Something you seem to be ignoring is value: value in terms of respect and of time, both yours and others'.
A TIR post is not respectful. It's used as an insult, an open insult. Given that there are other, more effective means to communicate disapproval, and that by replying you are removing those means from your arsenal, it's clear that your intent is merely to insult.
A TIR is not respectful to others. By highlighting the post with further discussion, it wastes other people's time. Not only are you not working to remove the parent comment from the discussion, but you are asking people to read your reply and to judge the merits of what is now a discussion. If the TIR post is left as is, I must read the parent in order to understand the context; its presence also signals that the parent is a good comment, worthy of reading.
Do not confuse the intended goal of a TIR with its effect. I agree, the intent is good. But even if it accomplishes its goal of having the parent retract his statement, it does so in a manner that is immature and rude. Being able to bully people into a position is not valuable.
I guess we'll continue to disagree on its value. That being said, HN has its policies, and I tend to agree with them, and I will continue to downvote both TIR posts and the posts they reply to.
Well, we can probably agree on downvoting them, even if we don't agree on their value (I don't think I'm ignoring value - I just think there are more potential sources of it)! The FAQ does ask people to refrain from posting them, and I feel duty bound by the requests of the site owner.
It's not a jab. It's just different community norms. HN wouldn't be HN if people were allowed to open up 10-level nested comedy threads. If I want to read those - and sometimes I do - I know I can go to Reddit.
If your comment amounts to "This isn't Reddit", then you're part of the problem. There are several ways to show your displeasure without having to resort to posting pointless and wasteful comments.
In truth, I find the posts whining about such things worse than the posts they are in reply to.
There are no "community norms" for reddit, each subreddit has its own norms and things which work in one may get all your comments mod-deleted in an other.
To be specific, jokes are much more welcome in /r/programming than they are here, which invites Eternal September.
I think the pro-business mindset helps as well. People start complaining when the depth that they remember isn't present in the discussions of the internet community (sometimes the gap is only perception/nostalgia).
I am pretty much a poster child for someone who threatens HN culture. While I am deeply interested in the intersection of computers and human enterprise, I came over from Reddit, and I don't have any practical experience to speak of. There are many times here (or in /r/askscience) when someone leaves an opening for a joke and I feel like I just have to make it. I'm sure that originally the impulse was tied to the possibility that a lot of people would like it, but now it's merely a conditioned response.
At any rate, my tendency to temporarily believe that my youngish impulsiveness would be a positive thing for the community makes me an outsider here, but I don't want to change that.
I know that having a clearly defined purpose helps a community to be strong, and it isn't worth making HN (or /r/askscience) more like me if it jeopardizes their existence.
What's more, once in a great while, when it really is time to make a joke I've watched them go by unmolested.
Edit: I'm sorry about the double post. I was having trouble typing on my phone.
Edit2: In response to your comment below, I'd say that people who are pro-business are probably less introverted than I am. Given that assumption, I'd say they'd be less motivated to make those kneejerk comments because they're probably getting enough social validation elsewhere. Granted, those are assumptions layered on assumptions, but at least that was what I meant
By "jab" do you mean derogatory remark? Does that not depend on the poster's perception of Reddit? If one considers them equal but different, then the remark is merely one of contrast.
> each subreddit has its own norms
Granted...
> There are no "community norms" for reddit
From what I've observed, there most definitely are. Their forms of humor are often upvoted across subreddits (e.g., inside jokes, memes, animated GIFs in lieu of discourse).
This is how I interpret the reddit "jab" comparison: go there for funny, come here for info; neither is better than the other.
If you personally feel HN is superior to reddit, then I can see why you'd consider it a jab.
I don't see it as a jab, but a helpful reminder. At least, personally I often find myself starting to compose a funny reply that would be fine for reddit, before remembering that this is Hacker News with different standards. Like many others, I'm active both places, and don't want to dilute the difference.
I posted some comments below laying out the reasons why I don't like this. I got heavily downvoted and the majority opinion is clearly against me, saying essentially that he is a great teacher with the white hat for teaching everyone not to believe what they read on the internet. I guess I ought to gracefully admit defeat and bow out to oblivion.
Still, I cannot resist making one final farewell appeal. Consider carefully the near future, when prof. Kelly will have produced a flood of busy graduates, all experts at disinformation. What is the likely outcome of his program if it succeeds? It is not that hard to guess. You will get what you applaud: you will no longer be able to trust anything you read, ever. You seem to consider this to be of some great educational value but I must admit that the value escapes me.
Let me extrapolate one step further. What is the point of reading anything if it is all likely to be false? Exactly. Stop reading and sit on your dumb asses watching TV.
Snipping some insults in return and focusing on the issue here: "graduates, all experts at disinformation" — snort!?!?
The world is full of experts in disinformation, and they're not the graduates of a short University course on History that couldn't even fool reddit (no offense redditors!)
They are politicians, lobbyists, campaigners of all political stripes, bureaucrats, journalists, historians etc. Knowing how to sort the wheat from the chaff (and knowing it necessary to do so) is half the point of education, especially in History. And that's what the professor is helping achieve, not a systematic destruction of the written word through a single demonstration as part of a 3 credit University course. Experts at disinformation my hat!
>Still, I cannot resist making one final farewell appeal. Consider carefully the near future, when prof. Kelly will have produced a flood of busy graduates, all experts at disinformation.
As others have pointed out, masters of disinformation existed long before this class did. To take just one example, if you're left-leaning, Fox News (to liberals, Faux News). If you agree with Fox News, then you'll consider everything from the "Mainstream Media" (or "Lamestream Media") to be disinformation.
Furthermore, there is a grand tradition of explaining how the "dark arts" are done so you can defend against them. In order to devise defenses against nerve gas, you need to understand how to effectively deploy nerve gas. If you want to understand how politicians and other masters of disinformation use statistics, the book "How to Lie with Statistics" is an invaluable guide.
> What is the likely outcome of his program if it succeeds? It is not that hard to guess. You will get what you applaud: you will no longer be able to trust anything you read, ever.
Yes, this. You should never blindly trust anything that you read. That's what critical thinking is all about. The point of reading is to find alternate points of view and information that you might not have known before, but everything should be cross-checked to some degree. In some cases you can trust the source because you've previously found the source to be authoritative (say, a professor teaching a class).
But the point people are trying to make with respect to Wikipedia is that the fact that one Wikipedia article has proven to be a reliable secondary source of information, aggregating multiple primary sources into a handy reference article, is absolutely no guarantee that that particular article will continue to be reliable, or that any other Wikipedia article is reliable. Using Wikipedia to expose yourself to something that you didn't know before: good. Not cross-checking anything you read on Wikipedia against primary source material to learn more and to verify the facts you found in the article? Foolhardy.
>In some cases you can trust the source because you've previously found the source to be authoritative (say, a professor teaching a class)
Allow me to laugh. This is a professor teaching a class (how to lie)! I guess that makes him an authority on lying but can someone like that be said to be authoritative? Or all other professors, by association?
I must say that I accept many of the valid responses here, thank you. His classes no doubt are educational and one should, of course, check the sources.
However, he is taking practical assignments too far. You can teach students to beware of liars without them first having to lie themselves. It does not seem like the skill of a great teacher to make them actually lie to the public. It seems more like a desperate stunt from a teacher running out of ideas and morals.
Understanding is one thing, and I agree with all that, but in terms of your examples, he is demanding the equivalent of his students actually deploying the nerve gas against unsuspecting people. (Thank you for suggesting that hyperbole.)
> you will no longer be able to trust anything you read, ever.
Were you born yesterday? You already can't trust anything you read, at least not 100%. Even if you had actually shown (as opposed to simply declaring repeatedly) that he's making the problem worse, it would be a drop in the ocean.
I'm ashamed to admit in my youth I created a fake Wikipedia page in a couple hours (between 3 and 5AM) that lasted for months before a friend of mine accidentally exposed it as fake by being silly in the talk section.
It was flagged as dubious because the date on which the fictional piece by the fictional composer I created had supposedly been performed, the day of a new Pope's election, didn't match the date on that Pope's wikipedia article.
Turns out we had the right date and the Pope's wikipedia article the wrong one. So our hoax ended up resulting in another article being corrected. From then on, anyone who looked at the talk page saw that someone had already flagged the page as dubious but that it had been resolved in our favour. This gave the hoax quite a bit of credibility. Made me realize how easy it is to create convincing hoaxes.
Or the person who had an extra name added by a Wikipedia editor, which was then copied by mainstream media. The hoaxers then used that as a citation, creating "truth". Wow!
Well, wikipedia doesn't talk about truth, but about verifiability. That's a bit of a problem because most people (even most wp editors) don't get the difference. Certainly people visiting wp expect it to be both truthful and verifiable.
Well "verifiability" means the ability to show that it is true. Something verifiable is demonstrably true. WP verifiability means something different. 'being cited as true by a trusted source' seems close but you can't blame users for Wikipedia's misuse of a word. Of course that just shifts the focus to what is a trusted source ... presumably the answer is one that is true ...
I wonder if you also find this professor really really objectionable?
I certainly do. First off, any bona fide academic ought to have respect for the truth, not deliberately set out to build his career on lies. One has to wonder: does he also encourage lies in academic publications? Does the publishing outlet, i.e. a traditional paper journal versus the internet, justify that much of a difference in the principle of being honest in research? I think anyone this confused about the necessity of truth should not be an academic. If I were his colleague, I would support steps to have his tenure terminated forthwith.
Secondly, the techniques prof. Kelly teaches and employs are classical disinformation tools, as practiced by Hitler, the KGB, and other such illustrious preceptors of his. Presumably some of his college funding is intended 'for the public good'. I really question prof. Kelly's idea that expanding the skills base in the disinformation field is for the public good.
Thirdly, and most importantly, one has to ask: what is his real agenda? Clearly it is to disable/discredit/damage/destroy the free exchange of valid information on the internet. If there ever was a real 'information terrorist' it must be him.
I don't agree. I think the professor is just trying to teach people that you can't trust what you read to be true, and to test the rigour with which fact-checking actually takes place on the internet (rarely). He's not trying to say 'truth isn't necessary' or anything of the sort.
His cons are generally harmless, and he has revealed them, so where's the net loss? There is a long way between this and someone who actually subversively tries to push his views on others (e.g. https://en.wikipedia.org/wiki/Taxil_hoax).
I agree that this behaviour is quite objectionable, though for a subtly different reason.
In particular, it's not a case of malicious spreading of misinformation; it seems that where the hoax worked, the lies were later deleted.
Rather it's a case of abusing a public resource. It feels to me somewhat like training a group of school-children to play Rugby on a village bowling green. Sure the damage to the grass will be undone over time, and sure the chance of knocking over any real old ladies is very small; but it's still an irreverent gesture that shows a dangerous lack of respect.
In the real world we have checks to make sure this kind of behaviour doesn't get too out of hand; on the internet we've so far managed to avoid the need. I hope that continues.
That certainly makes sense. But hopefully, since he is in some sense a public figure, we can rely on him to make sure the good done exceeds any damage caused, since he would be censured if that were not the case. For example, if he had got students to break into museums and make actual changes to historical documents with bleach and ink, he'd make the cons more believable, but the harm would be too great.
I don't think he did cross that line though. Some solutions if he did: what if he escrowed (and set a time to release) the pranks in case they aren't caught? What about donating to wikipedia, since it's their green they are mainly playing rugby on?
Yes, I can relate to that, but I think the quaint imagery of a village green does not nearly capture the insidiousness of this. He encourages his students to do their best to make the hoax work. Deleting the lies later, while laudable, does not undo the real (intended?) damage to the reputation of the internet.
I really struggle to think of positive educational outcomes from teaching students to pull off successful disinformation exploits.
I suspect this professor is one of those people with an axe to grind against Wikipedia and other resources for not being trustworthy enough. The problem is, these kinds of tactics would apply to old-fashioned print publications just as well. In the end it all comes down to a matter of trust: the reader has to trust a publication, the publisher has to trust an author, &c. Teaching people to violate that trust can only do harm. His exercises are probably more exciting and sensational this way, but if you ask me, it would be more constructive & useful if he asked students to track down and correct surprising inaccuracies, for example.
You're setting up a false dichotomy. You seem to feel that either the Internet is a place that is 100% trust-worthy (so much so that you can take off your critical-thinking cap), or it's a place where everything is a lie. The point here is that people should learn that they can't blindly believe things just because they read it on the web, not that lies on the web will render it completely useless.
> There's also an interesting coda to this convoluted tale. The group researching Brown's Brewery discovered that the placard in front of the Star-Spangled Banner at the National Museum of American History lists an anachronistic name for the building in which it was sewn. They have written to the museum to correct the mistake.
Firstly, I think you are confusing different kinds of lies. Lying is bad for society when it is used to get an unfair advantage, i.e. to commit fraud. But lying in other contexts, for other reasons? If he tells his wife she looks great when, objectively, she's no Helen of Troy, should he also be stripped of tenure?
Secondly, teaching defense against the dark arts generally requires giving the students a working knowledge of the dark arts. Would you object to the publication of Robert Cialdini's Influence on the same basis?
Thirdly, by illustrating how easy it is to damage the free exchange of information, he helps individuals become more resistant to lies and helps the internet develop new ways to fight misinformation. Think of him as a vaccine to the actual 'information terrorists.'
You might be right that lying in research is not legally fraud. It just breaks the academic contract, and so people do get stripped of tenure for it alone, not for making money out of it, and rightly so. If you think that this kind of lying is costless, you will hardly find anyone who has ever been to a university agreeing with you.
This guy is not teaching defence against the dark arts. He is teaching the dark arts.
I do not object to the publication of anything at all, except of deliberately fabricated lies. Anywhere, with whatever artificially constructed post-justification.
And yes, it is easy. That is exactly why I am concerned and it should not be encouraged.
That's not what I was trying to say :) Generally lying in research isn't okay and I would consider it fraud, since the liar stands to gain by it. But he wasn't lying in order to publish a paper about 19th century serial killers, he was educating some students and some of the internet about not believing everything you read, albeit through deliberate deception.
I don't believe this is wrong, because the end justifies the means. I get the impression we'll therefore have to agree to disagree on most of these points, since our ethical systems are basically incompatible †.
† From your statement "I do not object to the publication of anything at all, except of deliberately fabricated lies. Anywhere, with whatever artificially constructed post-justification."‡ you make it clear that the action is what is important, whereas I would judge on both the motivation and the consequences.
‡ One clarification though, if I may: Do you agree that in this case there is a simple and logical pre-justification (or that one could argue there is one, even if you disagree with it)? Does that make any difference to you, since you specifically call out "artificial" and "post-"?
The end does not justify the means. The means determine the ends.
The professor was not "educating some...about not believing everything you read"; he could do that without creating new hoaxes. Certainly there are plenty to go around.
I don't know what his real motivation is. Who does? There are certainly powerful interests about that would love to see the internet, as a free information source, discredited and destroyed. Is he explicitly connected with them? I don't know. His actions certainly don't look to me like coming from someone who means well. All I know is that he is requiring his students to produce fraudulent publications, specifically on the internet.
Even if he means well, I believe that he is mistaken and so I reject your justification. So does society and courts who have always (rightly) condemned liars. Because lying is easy, dangerous and damaging. It does not strengthen its targets, it more likely destroys them. In this context, the target is the internet.
Look, let me bring up a simple analogy. Suppose you concoct some lies on eBay to get your 'marks' to send you money. Suppose you are eventually hauled up before a court of law and you come up with: 'Your honour, I was just teaching those shmucks not to believe everything they read' (your words).
It is just not going to wash, is it?
OK, he might not be getting paid by those whom he fools but he is getting paid by his college for teaching wrong history on the internet.
I hear what you are saying - there is a lot of rubbish out there on the internet and you should not believe it. However, most of it is produced in good faith by enthusiastic (and sometimes mistaken) individuals exercising their free speech. This guy is a professional setting up a university program of studies to generate lies professionally and deliberately. That is not the same thing.
I disagree. He's not teaching "wrong history", he's teaching students to think outside the box and identify the weak points in the fabric of the internet knowledge base. What they choose to do with these skills is up to them - perhaps they will invent a more powerful citation system for fact checking, or even enhance the security of vulnerable financial processes.
"most of it is produced in good faith by enthusiastic (and sometimes mistaken) individuals exercising their free speech" - citation needed. By definition, you will never know what the biggest lies out there are because they're never exposed, so your assumption that "all" lies are childish mistakes is wrong.
What about those who consult these sources (wikipedia, reddit, and wherever prof. Kelly is going to strike next) casually? Do you always go back to check that the information you quickly looked up has not been deleted later by some professor playing games with your credulity?
PS. I never said or assumed that 'all lies are childish mistakes'.
There will always be hackers, and there will always be security experts whose job is to keep the hackers out of the system. But you need to think like a hacker to do a good job.
You may wish to turn down the hyperbole, it's obscuring whatever point you have. He's not "building a career" here, he's teaching a class about historical hoaxes. What better way to teach than to demonstrate?
> does he also encourage lies in academic publications?
He's not encouraging lies at all in the first place. He's using carefully crafted lies as, like one of your repliers said, an intellectual vaccine against actual hoaxes.
He's helping inoculate his students and those aware of the hoax against gullibility in the future. How is that not a net benefit?
> I really question prof. Kelly's idea that expanding the skills base in the disinformation field is for the public good.
I think teaching people magic tricks enables them to deconstruct the magic show in front of them. Teaching them how hoaxes are constructed teaches them how to recognize and call them out in the future.
> Clearly it is to disable/discredit/damage/destroy the free exchange of valid information on the internet.
That's not clear at all.
It's interesting, you say "valid information" as if there's a clear way of recognizing information out there as valid. But there isn't. THAT's what his agenda is: how do we tell the real from the fake?
Teaching students to be careful about sources -- online or not -- is good, of course. But he should just have shared a paper about it or something like that.
I agree with Wales; deliberately putting untruths on Wikipedia is really anti-social behavior. Wikipedia, however imperfect, is a really great asset -- people using it for their own little experiments like this is really doing everyone a great disservice, and I can only imagine how many others he has inspired to do similar things.
I don't think he is objectionable. In fact, I think he's a great teacher. He manages to not only teach his students, but educate a good portion of internet communities and users.
He seems to me much akin to a whitehat security hacker.
No, he fucks with innocent readers' knowledge of history and with his students' moral values.
But maybe you are right. I am about to start writing erroneous and deeply misleading programming manuals to teach you schmucks not to believe what you read. Wheee, I am on my way to become a great teacher with a bloody big white hat!
The best way to teach someone to be a security expert is to teach them to crack a system; he is doing an analogous thing, I think.
Where this project goes astray (and I agree that it does) is that his academic and professional community don't seem to have the well-developed concept of "white hat" vs "black hat" that we have here. Translating this into an analogous situation in our primary domain, we're upset because he's teaching them security by leading them through a significant black-hat operation.
The larger problem is, although we can train white-hats by constructing isolated networks and systems to practice on, it's not really possible to construct an isolated system when your "system" is and has to be a social network of actual people.
So these students failed. A new batch will be back next year, and they'll not slip up this way - they could well be creating wikipedia aliases already in preparation for taking the course later, for example, or using other collaborative sources.
The key thing is that reddit hasn't saved us. It hasn't demonstrated that you cannot put out bias and falsehood on the Internet. It's just whacked a mole, because the students were, well, students.
What's more interesting is how the different communities responded to the attempted hoaxes. This can help us understand how different hoaxes, memes etc. are spread and how they are, or aren't, debunked.
I think directly linking to the print view is dishonest: telling the Atlantic you plan to print an article, and instead posting it to HN is essentially stealing their ad revenue.
PS: Even if you claim ads are distracting and this is actually a service to the community, I'd rather install ad blockers and make such decisions myself.
I often prefer print pages for BI-type "articles" where 300 words are split across 5 different pages, but yeah, in this case the article is originally a single page, and the site's own chrome isn't overbearing. There was no reason to link to the print version.
The top of the page says, "...helps reveal the shifting nature of the truth on the Internet". It makes it sound like the internet is a different reality. The internet is just a reflection of what is already going on in the world. What people do offline is what people do online.
It's not hard. A friend of mine nicknamed a very rare disease on Wikipedia. A few months to a year later a news organization used the Wikipedia page's nickname for the disease when writing an article for it. He then cited the use of that nickname on the Wikipedia page with the news article, and thus, the nickname had a validated citation, and will possibly never go away.