Good luck to them of course, but I'm not sure this will work. A couple of reasons come to mind:
1. How will I know that the news has been reviewed by qualified people? Technically that should be the news agency's/source's job.
2. By the time the score reflects a story's quality, the damage may already be done.
3. RT is based on people's opinions, but news should be fact-checked.
What I think it should do instead is score the publishers... some kind of publishing performance history. Although the most effective would be to have some kind of liability for spreading/posting fake news; this would make publishers do more fact-checking before rushing to publish.
Yep. Rating news is a terrible idea. News should ideally be as objective as possible (otherwise it's called "fake news"), and the most accurate stories don't always turn out to be the most popular or highest rated. Movies, on the other hand, are highly subjective.
Elon Musk had a similar idea where he wanted to rate journalists, but he later gave up.[1]
Ultimately, you as an individual have to decide: do you trust a startup / news rating site to tell you what to believe? My answer to that is absolutely not! I want to do my own research before believing anyone. (Yes, I understand some people will approach it differently.)
I would like to see journalists be more transparent about what in their reporting is fact vs. opinion, since lately I've seen pretty clear examples of opinion presented as fact on both sides of the aisle. But this sort of solution sounds terrible: what would prevent overseas bot farms from downvoting articles that hurt their agenda and upvoting divisive ones? And why would you, as a company, want to take on responsibility for policing this, knowing full well that it'll be on your heads if your efforts fail (or work too well and prevent outspoken parties from being able to boost their favorable news)?
This just seems like the absolute worst place to position yourself in the modern world.
PS: Also, aside from all the politics... think about the swarms of bots ad companies would hire to upvote their sponsored content...
I'm a life-long (but intermittent) NYT reader. Coverage of SpaceX (in NYT, WSJ, and WaPo) changed how I think of journalism.
Faced with the usual issue of good newspapers getting every topic you know about not-quite right, variously wrong, I used to explain it by an analogy. It was like when newspapers closed their foreign offices. Loss of ground truth, of sanity checking, of insight and nuance. An abundance of misinterpretation and missing the point. My analogy was that for science, and tech policy, and diplomacy, and engineering, and manufacturing, and so much else, the foreign offices were closed. Generalist journalists, not embedded in the various professional communities of expertise, writing from afar, and getting it variously wrong.
What struck me about SpaceX coverage was the process failure. Not the getting things wrong - that was unsurprising. But there was no sign of learning, of process improvement. For instance, there was a successful short-term disinformation PR campaign targeting the press (around Zuma satellite failure attribution). But afterwards, there was little sign of recognition, reflection, remediation. No sign that they wouldn't fail just as hard if the same thing were done again a month later. And the journalistic devices for pursuing quality seemed not just poorly executed, but inadequate to the need.
So my new analogy is with medicine. Like journalism, medicine has long had folk practices for pursuing quality. But medicine has had its "oh, shit" moment. The realization that medical error was a leading cause of mortality. That even easy and important standards of care were failing to be consistently executed. That there were decades of experience in other fields, on how to engineer process to systemically achieve quality and avoid failure, to which medicine had been oblivious. Journalism seems still oblivious.
So I suggest that journalism as a profession, and even the best of newspapers as institutions, have not yet realized just how badly they are failing. Or recognized the need for, or even the possibility of, systemic process improvement.
Perhaps the existence of the OP project might help with that.
Feel free to write your own content - publish your own knowledge by contributing to the media outlets - NYT has tons of contributing writers whose full-time job is not in journalism!
Nifty. This suggestion seems to nicely illustrate what journalism seems to be missing. I see it as analogous to...
Think you can operate our press without losing fingers? Great - we're often looking for replacement operators. Does the machine have guards? Well, we have both a day and a night watchman, why?
Think you can operate without ever leaving sponges and instruments behind in patients? Great, set yourself up as a barber surgeon and go for it. Do we count sponges? You'll have to talk to accounting, but they're not that expensive, why?
And so on.
Responding to operator errors with "well, just stop doing that". Responding to pervasive systemic failures with "well, maybe we need better operators". That's being oblivious to a half century of progress, to minimum standards and best practices, across multiple industries.
Some years back, someone said their first-tier newspaper didn't even have a process for tracking errors. Despite an institutional self-image of caring about quality. ISO 9000 has not hit journalism yet?
Certainly, Stack Overflow-type systems do have their benefits, but those benefits apply to a very narrow use case, and there's an awful flip side: the rhetoric and rules on Stack Overflow create ever-increasing perverse incentives that reward strange, persnickety behavior. It's great if you're missing some pieces of information here and there, but it's awful to use as a tutorial or as a way to understand subject matter in a holistic fashion.
The idea that you can have any kind of self-policing or crowd-sourced mediation of journalistic "facts" vs "baloney" is a pipe-dream.
IMHO, the problem is not whether or not a piece of information is "a fact" but rather what other facts are missing, and context of these facts as a whole, and the intention behind the entities brokering facts on your behalf. It's all very easy to point out horrendous bias, and outright lying (just watch "Fox and Friends" for a few minutes), but what about a set of facts which is incomplete? That's just as bad as no facts and it has a name: "disinformation".
What we need as consumers of journalism is not necessarily "just the facts" but intelligent and rational analysis that can function even with incomplete information, that draws on not just "available facts" but also a historical basis, that is brave enough to adjust itself as reality warrants. This is necessarily and intrinsically subjective and it will often NOT be recognized as true and be, at times, incredibly unpopular.
You're all almost never purely doing your own research, though. There isn't enough time for that. It's a sliding scale of deciding where to stop and how much to take for granted and who to take it from.
This is why the problem isn't "doing your own research" vs "trusting someone". It's always both; the magic is in choosing which at each particular step when finding things out.
> Although the most effective would be to have some kind of liability for spreading/posting fake news
The market cannot do this, at least as currently constituted. Right now, fake news which pleases the sensibility of whatever group gets rewarded by that group. The same group also fails to punish the fake news. The surprising fact, which shouldn't be surprising since Noam Chomsky and Edward S. Herman have been calling this out since the 80's, is that the mainstream does this too. The differences in 2019 are that pushing narratives and emotionally loaded language play a much larger role, and factual accuracy matters much less to the public now than it did then.
Rotten Tomatoes is extremely biased and subject to the so-called "hype train" effect, where popular movies have massively inflated scores regardless of content. (Don't mean to sound like a hater, but the last Star Wars films have been critically acclaimed despite most of the fans disliking them for various reasons.)
I can imagine this happening with news, but along party lines, with critics upvoting sites that fit their world view regardless of actual content.
IMO this obsession with "fact-checking" and "journalistic integrity" is just people still trying to deal with the fact that conservative populism has been surging lately; the opposing sides just can't get their heads around it, so they resort to saying "No, you are wrong and full of liars; see, this fact-check proves it!" But neither side really cares, since they are just trying to push their agendas.
I believe that the theory is that having "skin in the game" in the form of money would help optimize the news for something other than clickworthiness.
You’re assuming that a publisher would be willing to put enough money into a prediction market to make a profit from the market machinations rather than advertising or subscriptions.
This also assumes that prediction markets are actually useful in general, and useful in this specific case. Both are open to discussion, but even for this specific case, it's not clear what the prediction market would be predicting. The Politifact meter? What about all the stuff Politifact doesn't bother reporting on? Politifact (or any other org) isn't perfect, and can be played as well.
> You’re assuming that a publisher would be willing to put enough money into a prediction market to make a profit from the market machinations rather than advertising or subscriptions.
The publisher doesn't (need to) interact with the prediction market at all. The idea isn't that the publisher is directly financially punished for inaccurate reporting. That doesn't happen in the RT analogue, isn't what's proposed by the OP service, and can't really happen in any constitutional system (plus, subscribers may be looking for content that isn't factual but feeds their biases).
The conversation is about a third party for those who want to know about the factual accuracy of a publication.
That being said, I agree with you that prediction markets aren't quite a good fit. When the NYT publishes some clearly inaccurate nonsense and leans on its reputation as "the paper of record", the issue generally isn't predictive analysis which is later uncontroversially falsified. The issue is usually lying about statistics in ways that are clear with close attention but that escape the attention of the casual reader and thus shape the conversation. Prediction markets wouldn't really help much here: if the outcome the market was predicting was uncontroversial, then you wouldn't need a system for checking the article in the first place.
If the publisher isn’t interacting with the prediction market, then it doesn’t, as jerkstate said, “have skin in the game” beyond the traditional revenue and reputational metrics (eg Pulitzer Prizes).
Even if they were interacting with the market (even for just braggadocio reasons), then they’d just be matching the biases of those that self-select for participation in what is essentially a small value and meaningless game. I could easily see the equivalents of pump-and-dumps and other market manipulation just for political / tribal reasons.
These days, it seems like news media outlets suffer very little from publishing unsourced claims that turn out to be false. If media consumers would/could participate in these betting markets and put their money where their eyes and ears are, they would use more critical thinking when deciding what to believe and how strongly to believe it (or else they would be punished financially). If popularized, this would have a feedback effect to the media outlets and they might start using better journalistic practices.
It would provide a common prior for any empirical claims. Scott Alexander puts it very well here:
> A democracy provides a Schelling point, … an option which might or might not be the best, but which is not too bad and which everyone agrees on in order to stop fighting. … In the six hundred fifty years between the Norman Conquest and the neutering of the English monarchy, Wikipedia lists about twenty revolts and civil wars. … In the three hundred years since the neutering of the English monarchy and the switch to a more Parliamentary system, there have been exactly zero. … Democracy doesn’t always perform optimally, but it always performs fairly, … and that is enough to prevent people from starting civil wars.
> Academia is different. Its state resembles that of pre-democratic governments, when anyone could choose a side, claim it was legitimate, and then get into endless protracted fights with the partisans of other sides. If you believe ObamaCare will destroy the economy, you will have no trouble finding a prestigious academic who agrees with you. Then all you need to do is accuse the other academics of bias, or cherry-picking, or using the wrong statistical test, or any of the other ways to discredit scientists you don’t like. …
> A democratic vote among the scientific establishment is insufficient to settle these topics. The most important problem is that it gives massive power to the people who determine who gets to be part of “the scientific establishment”. … So not having any Schelling point – being hopelessly confused about the legitimacy of academic ideas – sucks. But a straight democratic vote of academics would also suck and be potentially unfair.
> Prediction markets avoid these problems. There is no question of who the experts are: anyone can invest in a prediction market. There’s no question of special interests taking it over; this just distributes free money to more honest investors. Not only do they escape real bias, but more importantly they escape perceived bias. It is breathtakingly beautiful how impossible it is to rail that a prediction market is the tool of the liberal media or whatever. …
> Nate Silver might do better than a prediction market, I don’t know. But Nate Silver is not a Schelling point. Nobody chose him as Official Statistics Guy via a fair process. And if someone objected to his beliefs, they could accuse him of bias and he would have no recourse until it was too late. If a prediction market is almost as good as Nate, and it is also unbiased and impossible to accuse of bias, we have our Schelling point. …
> Just as democracy made it harder to fight over leadership, prediction markets make it harder to fight over beliefs. We can still fight over values, of course – if you hate teenagers having sex, and I don’t care about it, we can debate that all day long. But if we want to know whether a certain law will raise the pregnancy rate, there will be only one correct answer, and it will only be a mouse-click away.
> I think this would have more positive effects than anyone anticipates. If people took it seriously, not only would the gun control debate be over in an hour, but it would end on the objectively right side, whichever side that was. If single-payer would be better than Obamacare, we could implement single-payer and anyone who tried to make up horror stories about how it would destroy health care would be laughed out of the room. And once these issues have gone away, maybe we can reach the point where half the country stops hating the other half because of disagreements which are largely over factual issues.
The problem is that there isn't a universally trusted arbiter for the final result. Prediction markets can be useful when what is being predicted can be objectively verified in the future but if something like "Will ObamaCare destroy the economy?" was the question in 2009, in 2019 half the country will tell you it did destroy the economy, the other half will say it didn't.
Some actors may also be willing to spend out-sized effort propping up a losing opinion to sway popular perception too.
Perhaps you're a Koch brother and decide to inject billions of dollars into such a question due to the financial payoff you could reap if it comes true. I think such a market suffers when the result of the prediction can affect your income - if a particular policy outcome will cost you 5 billion dollars, then you can, internally, play with that cost and the likelihood of being able to tip the market. If you can guarantee a positive outcome by investing 4.5 billion, then there is a rational motivation to do so.
Facts are outside the realm of humans, unreachable. Humans can only have observations, hearsay, and opinions about what the facts are. That's why we have scientific opinions and judicial opinions. A statistical analysis with a confidence interval or a p-value is a measurement of how strongly someone holds an opinion based on their observations.
These are important limitations, but in the end, if their scoring system helps readers find good, in-depth news articles to read (even if they're not the freshest) then they'll be worth visiting.
None of these is unsolvable. For example, RT gives a critics score in addition to a popular score, so a news version could do the same to address 1 and 3, and 2 is a non-starter because there's always more damage to be done/prevented.
Scoring publishers just puts an extra layer of abstraction on the problem without actually handling any of the issues you brought up.
The fact is, this is a good idea. So good that it already exists in a number of places with varying degrees of success.
I think your doubts are all extremely well-founded, but I don't think scoring publishers is a viable alternative.
There are already a bunch of attempts to list 'trustworthy' news sources, sometimes paired with attempts to offer a "balanced media diet" while avoiding inaccuracy. Unfortunately, publisher-level judgements are fundamentally too broad for story-level relevance. Accuracy varies wildly, and there's a significant difference between publisher-level quality (e.g. biased topic choice) and story-level quality (i.e. if I read it, is it factually correct?) Worse, publishers benefit from keeping it that way, by extending the aegis of a reputable name to content that's fast and cheap instead of diligent, or to inflammatory clickbait that monetizes better online.
Fox is infamous, but their online site actually has a lot of good investigative reporting. Unfortunately, we can't just split Fox.com from television Fox News, because Fox.com also hosts old Glenn Beck videos and other comparable rubbish.
The Huffington Post has hired some good reporters over the years, but their model of saving money with repackaged (or stolen) content effectively launders inaccurate stories from a host of other sites. The difference between their pedigreed staff writers and their news-roundup commentary is immense.
Buzzfeed News moved to its own domain last summer, but before that it was a subdomain of Buzzfeed that produced a whole bunch of award-winning investigative stories from top reporters. Unfortunately, Buzzfeed also does news, and has a history of things like blackholing stories at advertiser request or committing serial plagiarism.
The Washington Post website, like many other news sites, has a section for blog posts by non-regular contributors. A story in their PostEverything section doesn't warrant anything like the same faith their main content would, but it's under their 'news' heading and isn't demarcated any differently than staff blogs.
The list of similar practices is painfully long, to the point that I'm struggling to think of sources which don't have major quality discrepancies within their brand. The WSJ, NYT, and ProPublica come to mind; it's no accident that they all trade extremely heavily on their reputations. Television sources, with their numerous distinct shows, are of course disastrous for this - PBS is the only channel I can think of now that History and National Geographic have surrendered.
I don't have a good answer here, but it looks to me like there are strong incentives for publishers not to offer a stable level of quality in their work.
Kind of the problem with the ever-increasing editorializing of news, and the monetizing of news in general. It has drifted far from anything based on fact for a long while now.
> 3. RT is based on people's opinions, but news should be fact checked.
Except that RT doesn't like to hear people's opinions when they differ from the mainstream. Unfortunately it's easy to box that as "trolling" and sweep it under the rug.
If you want to go on a rant, is it too much to ask that you list examples and how they apply, rather than assume everyone here is as on top of Internet drama as you might be? Your examples might even make it clear why an RT for news would be even more difficult than the one that exists for movies.
I guess it might be better to repackage this as something that isn't a rant.
RT is a company that is currently owned by Warner Bros. (through a chain of ownership). That alone is enough for someone to raise the possibility that they may not be completely unbiased in how they treat each movie, even without any evidence of such behavior. Conflicts of interest, in my own experience, have never required evidence of any further problem beyond the conflict of interest itself. Since the end result is merely movie ratings, which is a fully subjective experience, it isn't worth the time to dig any further.
When it comes to a news rating site that will likely be deciding things of a more factual nature (whether news is true or not, or at least somewhat based on true events), I think a potential conflict of interest has larger implications. Whoever owns the site will have a conflict of interest with regard to at least some news, and the ability of that site to be used by other groups like non-profits, NGOs, or even governments will need to be limited wherever the conflict of interest occurs.
I don't think the difference lies in the difficulty of making an RT for news, but in the downstream impacts of the conflict of interest that exists between owner and purpose.
While that's true for the possibility of conflicts of interest, do you have any proof/data showing they've actually exploited that conflict for their own gain? Undermining trust in the authority of the aggregate values is a huge risk I'm sure they'd much rather avoid in order to extract as much value out of the platform as possible.
I know a bunch of folks recently complained that they were trying to manipulate the scores for Captain Marvel, but that was also under a huge organized attack to torpedo the audience scores even before it was released for viewing. So they had a legitimate reason to go in and try to "fix" the scores. Whether the new values are "accurate" is of course up for debate (for philosophical values of "accurate"), but most of the criticisms I've seen have had reasonable explanations behind them.
When avoiding a conflict of interest matters, normally there isn't a standard of requiring evidence of impropriety for behavior to be deemed inappropriate. Merely not avoiding the conflict of interest is enough. Granted, for something like a civil lawsuit or criminal charges this alone isn't enough and evidence is required, but for matters such as violating professional ethics, not disclosing a conflict of interest is enough.
Like I said, with movies reviews it does not matter (at least in my opinion). The issue I have is that I don't think such a carefree attitude can be taken when it comes to rating the truthfulness or applicability of news.
As for any examples of actions, I do not have access to the sort of data needed to determine if any such actions were taken, nor do I have the resources or the desire to even begin investigating. Also, especially in relation to recent events that touch on a more political scale, I'd rather not deal with the issue of evidence being trusted based on the extent to which it confirms existing views (a cognitive bias common to humanity that isn't particularly worse in this case, but which I have seen result in flame wars on other forums).
The conflict-of-ownership argument also doesn't work, since Captain Marvel's studio, Disney-Fox, is the biggest competitor to Rotten Tomatoes's AT&T/Warner and Comcast/NBC Universal owners.
Pretty unimportant but just clearing up ownership info. Warner Bros/AT&T are minority owners of Fandango which owns Rotten Tomatoes. Comcast/NBC Universal are majority owners.
Hah, I wasn't blasting you or anything! Sorry if I made it seem that way. I did try to say it was unimportant to the overall convo. If anything it reinforces your argument, since it's two of the big 5 movie studios.
That's a good question. I'm not sure what the other poster had in mind, and I suspect it's not the same, but RT has a bunch of rating problems that would probably become far more egregious for news than they are for movies. (Leaving aside the tricky question of how to translate the formats: is a movie like a news story, or a news source? Does news really have an existing aggregation-worthy source like movie reviews?) RT provides percentage scores for both audience and critic reviews. Those percentages are not averaged scores, but percentage of positive reviews. There's good reason for this in both cases, but it causes a lot of weird side effects.
- For audience reviews, reviewers rate out of 5 stars, and the score is the percentage of reviews at least 3.5 stars.
The goal is to insulate scores from extreme opinions and odd voting patterns. On IMDB, which doesn't do this, a few low scores can destroy a movie's rating, and movies that invite careful analysis are effectively penalized. The Dark Knight ends up rated higher than Schindler's List, but the latter has lots of slightly-imperfect scores with incredibly positive review text.
The basic side effect is the loss of all meaningful score data. That punishes daring movies: 100 4-star reviews equals 100% while 50 3-star and 50 5-star reviews equals 50%, but I would much rather see the second movie. (There's a small numeric sketch of how much that binarization throws away at the end of this comment.) It also hides deep flaws: The Boondock Saints outscores 2001, but a look at the reviews shows that its "rotten" ratings are often 1 and 2-star reviews calling it garbage, while 2001 is full of 3-star reviews like "slow and confusing, but fascinating". Another problem here is that naive binary scores suffer from reviewer skew and vote-bombing campaigns. It's less bad than IMDB because extreme scores are 'softened', but still worse than systems which relate individual scores to the reviewer's other opinions. BeerAdvocate, of all things, takes on this problem with interesting meta-info like "reviewer's average distance from consensus".
- For critical reviews, each review is reduced to a binary opinion and the score is a weighted percentage of positive reviews.
The goal there is to condense varied or absent rating schemes into an aggregation-friendly score and avoid reviewer skew. Metacritic, which doesn't do this, ends up systematically distorted by which outlets review things. Unscored reviews like Ars Technica's (on video games) are skipped, while sites with extremely skewed scores are included by some unspecified weighting system. As a result, Metacritic lists game reviews averaging 75-89 as "generally favorable", but there's a widespread understanding that any major-release product below 85 is terrible. (GameSpot once infamously fired a reviewer for scoring a game 6/10.) It's largely the same problem as Uber and Amazon ratings: does a perfect score mean "exceptional" or "nothing wrong"? RT tries to dodge the problem by saying "good" or "bad", with a weighting to accommodate a reviewer's 'signal strength'.
The side effect here is, again, the total loss of secondary data. RT sometimes gets reviews actively wrong; something like "despite lovely action sequences, this movie is a meandering waste of time" will get parsed as "lovely action sequences". That's probably an automation error, but lots of other reviews can't be clearly declared positive or negative even by humans. And within a categorization, there's a great deal more lost data. Ebert produces 'fresh' for The Avengers because "It provides its fans with exactly what they desire," but he produces 'fresh' for There Will Be Blood by declaring it "A force beyond categories."
- The reason all of this works alright for Rotten Tomatoes is that it's fundamentally answering one question: "will I enjoy watching this?" The Avengers isn't on par with There Will Be Blood, but if you like the description and it's rated fresh, you'll enjoy either one. If you see that critics hate The Boondock Saints and audiences don't, well, you probably know whether you're somebody who likes that kind of cult classic.
But what of news? People already shred Snopes whenever it tries to deal with assessments like "factually correct but misleading" or "incorrect statistic but the real number also supports the same thesis". The entire point of news reviews is to not rely on "will I enjoy this story?", so all the things RT quietly glosses over will become critical problems.
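To make that loss concrete, here's a minimal sketch (my own toy numbers, not RT's actual code or weighting) of how the percentage-of-positive score described above diverges from a plain average for the 100×4-star vs. 50×3-star/50×5-star case:

```python
# Toy comparison of the two aggregation styles discussed above.
# Hypothetical data; Rotten Tomatoes' real pipeline is not public.

def rt_style_score(star_ratings, threshold=3.5):
    """Percentage of reviews at or above the 'positive' threshold."""
    positive = sum(1 for r in star_ratings if r >= threshold)
    return 100 * positive / len(star_ratings)

def plain_average(star_ratings):
    """Simple mean of the star ratings (roughly the IMDB approach)."""
    return sum(star_ratings) / len(star_ratings)

safe_movie = [4] * 100              # 100 four-star reviews
daring_movie = [3] * 50 + [5] * 50  # polarizing: half 3-star, half 5-star

print(rt_style_score(safe_movie), plain_average(safe_movie))      # 100.0 4.0
print(rt_style_score(daring_movie), plain_average(daring_movie))  # 50.0 4.0
```

Both movies average exactly 4 stars, but the thresholded score calls one of them perfect and the other mediocre - which is the information loss being described.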
Presenting crowdsourced ratings as mere averages -- or really any collapse onto a single scalar -- is a big step up from no info at all, but it's hardly the best one can do even with well-known statistical techniques.
Yep, I really wish sites would either give me a histogram of raw scores, provide more thoughtful interpretations, or both.
Histograms are a screamingly obvious way of distinguishing "mediocre" from "some good some bad", which is one of the most common needs with things like Amazon products. But beyond that, there's so much more to be done. You can weight or shift scores by reviewer's average, reviewer's average distance from consensus, or a dozen other things. A one-star review from someone who uses Yelp exclusively to call out bad experiences is relevant, but a one-star review from someone who often gives 4-5 is far more interesting.
Maybe the weirdest thing is that a lot of this is done to catch fake/paid reviewers, but it's not extended to providing clearer info overall. Even the fight against fake reviews would be much more achievable if it were shifted from a binary "take down or don't" to a more flexible approach to maximizing review value.
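As a toy illustration of the reviewer-relative weighting mentioned above (my own sketch, not any site's published formula), you can center each rating on the reviewer's personal mean, so habitual one-star grouches and habitual five-star enthusiasts stop dominating the aggregate:

```python
from collections import defaultdict

def reviewer_centered_scores(reviews):
    """reviews: list of (reviewer, item, score). Returns item -> mean offset
    from each reviewer's own average, instead of the raw mean score."""
    by_reviewer = defaultdict(list)
    for reviewer, _, score in reviews:
        by_reviewer[reviewer].append(score)
    reviewer_mean = {r: sum(s) / len(s) for r, s in by_reviewer.items()}

    offsets = defaultdict(list)
    for reviewer, item, score in reviews:
        offsets[item].append(score - reviewer_mean[reviewer])
    return {item: sum(o) / len(o) for item, o in offsets.items()}

reviews = [
    ("grouch", "place_a", 1), ("grouch", "place_b", 1),        # always rates 1
    ("enthusiast", "place_a", 5), ("enthusiast", "place_b", 3),
]
print(reviewer_centered_scores(reviews))
# {'place_a': 0.5, 'place_b': -0.5}: place_a rises and place_b falls
# once each reviewer's personal baseline is subtracted out.
```

The same idea extends to the "distance from consensus" weighting: a one-star review from someone whose scores usually track the crowd carries more signal than one from a serial contrarian.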
This is why my hackles are raised the instant somebody starts a "people should be exposed to various viewpoints" discussion. I want to give the good-faith interpretation, which would be, maybe, a healthy exposure to different economic system philosophies or something. But nowadays I find that the people making this argument, or "just promoting the right to free speech," are really exactly the kind of people that casually toss around "sjw" as an accusation.
Like, downvoted because HN is a hive of sjws? As if. Progressive, sure, but a hive of SJWs, as you undoubtedly mean the word, is a laughable accusation.
Nowadays it seems that when someone wants "multiple viewpoints" to be presented, what they really want is the right to continue to bully people. This would have been an extremely presumptuous thing to say, but, crististm, you've gone and made it obvious for me by trotting out the old sjw accusation.
Some young woman was on her own on a flight and was getting creeped on by an older man. The journalist stepped in, flight attendants stepped in, the man was berated and eventually forced to move.
Now those words - the man was berated and forced to move. If a group of schoolkids did that to some random nerdy kid on the playground, and we used those words to describe the situation, that's bullying. In the case of the flight, it obviously isn't.
So, I don't think it's so much about whether I agree with the crowd's opinion, but a greater context.
At an LGBT parade there's a fire and brimstone pastor shouting about how everyone's going to hell. The crowd is shouting back. Is that bullying? I don't really think so. I think it's the pastor that came to do the bullying.
Just to be clear, I should have said swarm. This is not about the whole of HN, and I've been here long enough to see how it adjusted its direction from the top.
I should probably know better than to voice my opinion to a group of strangers, but no, I will not apologize for what I think.
I voiced my opinion, and the part of the HN crowd that heard me decided I should shut up. Such is life.
You're getting downvoted and told to "shut up" primarily because of your use of the word "sjw", because it's a dumb meaningless slur used to drag anybody with socially progressive values. It's not a criticism, it's just a petty insult.
If you have a genuine problem with the social direction of HN, there are ways to raise those concerns without regurgitating offensive alt-right memes.
This, and many other concepts based on "vetting" news assumes that the audience cares. A significant portion of the audience does not care, and is instead looking for news that validates their worldview, and furthers their (and their chosen political party's) goals. So basically these services will cannibalize the part of the audience that does care about truth, turning people against each other over marginally skewed articles. Meanwhile, the other site is happy to share falsehoods, because it helps them ... win.
For most people, news is just a social object. Something to talk about with family, co-workers, etc. Whether or not any particular story is true doesn't matter - what matters is that everyone in their circle is aware of the described event (whether they read the same story, or a different one).
Humans are pretty good at fact-checking and critical thinking when getting it wrong means some immediate loss. But most news stories are irrelevant to almost everyone's lives, and those few that are, don't have a direct and immediate impact.
That's what I find so disgusting about this tendency of many people. If I don't care enough about an issue to be rigorous about it, then by definition I don't care enough about it to 1) vote based off of it or 2) spread my uninformed opinion about it (when I do talk about it, I'm clear that I have only a tentative, uninformed model of the situation).
People have often defended the average voter with "people have busy lives and don't have time to think about this stuff" (and as you point out, don't really have the will to), but I've never understood how it's defensible to treat policies that affect other people's lives as idle fun.
IMHO what it comes down to is that "Democracy" has become our national religion. Not in the general sense of rule by the people, but the specific implementation wherein everybody picks and votes their yes/no option. Poor results can never be attributed to the abstract ideal itself, but rather can only be due to a lack of adherence to the process. So it is paramount that everybody adopts an opinion, no matter how ill-informed.
Refraining from taking part in that (or, even worse, being a spoilsport) is met with the same disdain as naysaying at any celebration. It's the difference between someone saying they root for a different team and saying they don't care much for baseball - the latter is going to sour the conversation pretty quick. At least with sports it's clear that the outcome is essentially frivolous.
I agree. Another thing that I failed to understand from a pretty young age was this frenzy to turn out every single vote, and the treating of turnout as a per se good thing. As far as I can tell, an uninformed vote, determined entirely by fashion and social pressure, is _worse_ than a missing vote.
An individual may not be that well informed when voting, but the collective of all votes is supposed to average out to something more reasonable. There's a story about a bunch of people guessing the weight of an ox at a fair (https://en.wikipedia.org/wiki/Francis_Galton#Variance_and_st...) where the individuals were generally not very well informed, but the average guess was pretty accurate.
There are a lot of reasons why democracy as it stands today doesn't work as well as it should, but I don't think ill-informed voters are one of them.
Sure, and that's the core value proposition of democracy, but it doesn't address my complaint. This over-simplified model isn't quite relevant in the face of the huge incentives to screw with voters' epistemology that voting creates. There was no small-ox constituency trying to push voters towards estimating lower, because it didn't matter, while modern culture is suffused with politics, and not symmetrically with respect to every issue. There's a reason that my original comment was disgusted by the tendency I'm describing, instead of modeling it as stupidity: a voter would have to be unbelievably dumb to think that their gut feeling is even remotely useful when it comes to 1) complex topics 2) that they haven't bothered to inform themselves about, 3) for which there are a huge amount of resources dedicated to skewing their views on the issue.
> There are a lot of reasons why democracy as it stands today doesn't work as well as it should, but I don't think ill-informed voters are one of them.
I agree with you in that I don't think it's a "voters these days" problems as much as a "most people's moral reasoning is horrifically stunted, in any population and time period". This is a baseline flaw of democracy, and one of the things that made Churchill call it "the worst form of government, except for all others that have been tried". But the inevitability of it across a population doesn't mean it can't be criticized at the individual level.
> But most news stories are irrelevant to almost everyone's lives, and those few that are, don't have a direct and immediate impact.
Seemingly yes.
A murder in the nearest large city/town has NO IMPACT on my life (unless I know the victim), and yet tax or health policy changes at the national level do in fact have a direct impact. Guess which one gets run on the evening news? Which one does your family talk about at the dinner table?
Nightly network news should be focusing (and condensing, unlike "cable news") on national/state and local laws, policies, etc., not fire/murder/theft.
Agreed. Anybody who has read Neil Postman and Aldous Huxley will probably agree, too.
In fact, I suspect that even Hacker News and this very story itself (now I'm getting meta) are mostly the same: interesting, but often doesn't have any immediate impact and not actionable. I read biology/physics articles on HN because they're cool, but I don't work in those domains. We can hold some discussion about it, but there isn't anything that I can really do about it.
Lame joke but it highlights how these topics are condensed and discussed at the dinner table.
---
How about this crazy cold weather?
So much for global warming huh! har har har
That's not how it works Dad!
I know I know but still...
---
Harmless maybe, but it doesn't stop Dad from sharing a climate change denial piece or voting for a candidate who doesn't believe (or pretends to not believe) the science.
The past few years have certainly provided mountains of coal to shovel into my cynicism train's engine, but I am not quite so far gone to believe this entirely.
Your point is true of many people, but I think even more do want good information and there is the possibility to improve journalism going forward. Snopes is a good example that fact checking is useful, desired, and can help drive consensus. I can't count the number of stupid Facebook threads I've seen get resolved by a Snopes link.
Your concern about the first-order effects of this is valid. If we're very lucky, the second-order effects may prevail. Yes, shady news sites won't want to be validated by Credder at all, leaving only better news to be critically examined and thus potentially undermined. But, if the system works, the absence of something like Credder on an article may itself become a signal of disreputability.
The odds of this panning out are slim, but I think it's important to keep trying until something works. Democracy cannot function without citizens having accurate, relevant information in their heads.
I have been kicking around this idea of a news subreddit with paid contributors who are vetted. Essentially a news lobste.rs. My main thought is that changing the news industry seems hard given all the broken incentives, but changing the way we discuss news to be less hive-mind-y should be easier. Outside accounts would be unable to upvote/downvote/submit, only read. I was initially imagining that grad students could use the platform to make money while in school. Patreon would be used to collect donations, and the money would be distributed equally. Accounts below a certain level of monthly karma would be audited. The accounts on the subreddit would be badged with their specialization (international politics, US domestic politics, economics, etc.), similar to Quora.
This could even be made off reddit using lobste.rs source code as it’s open source, thoughts?
You're not wrong. But for those of us who want facts and insight instead of entertainment and opinions it would be nice for it to be more obvious where to find that.
I find it pretty easy to find reputable news sources, I don't really need a tool but I guess it could help. It's everyone else I'm worried about. Solving this problem for only people who want the truth narrows the scope dramatically and may in fact make the situation worse, by attacking and discrediting legitimate news sources for a perceived bias.
"I find it pretty easy to find reputable news sources,"
Were your "reputable news sources" promising you up, down, left, right, backwards and forwards that the Mueller report was going to end Trump and conclusively prove that his Presidency was the result of Russian collusion?
Maybe not for you as an individual. Mine were a great deal more mixed, for instance. But I suspect for a lot of people, the reason they find it easy to find "reputable news sources" is that they never hold their news sources to account for being wrong.
I use this example merely as the most temporally-handy example of when a lot of "reputable sources" got it very, very wrong. There are many.
> Were your "reputable news sources" promising you
No? News shouldn't "promise you" anything. Maybe you're reading too many opinion pieces? 50-75% of the news I see shared online seems to be opinion pieces attempting to "synthesize" the current news of the day. If you focus on journalists (including following them directly), and just pure news you won't be promised anything.
One of the most important aspects of the scientific mindset is to attempt to disprove one's own beliefs... I'd argue that that alone is enough to derive the most important aspects of scientific practice from.
Your challenge for the next couple of months is to read your own sources and actively seek out the ways they are trying to lead you toward certain conclusions.
In particular, the question to be focusing on is "What conclusions am I expected to be drawing from this story?"
If you don't find them, I'd like to hear about your sources. They must be something exotic and small, because every name brand I've ever examined is pretty busy telling me what to think, not neutrally reporting news.
> This, and many other concepts based on "vetting" news assumes that the audience cares.
Exactly. It's not as if there haven't been any high-quality news sources around; quite the contrary. But quality has steadily been losing to whatever the consumers want to hear.
Democracy is not the best system for getting the best result. People are fickle. People tend to vote for what they like instead of what's best. That's why there hasn't been a bald president since the advent of TV, even though most white males above 50 are balding.
And, as you said, most people don't care. It's why cat videos or toy review videos get more views than educational or philosophical videos.
Expert curation is better than a democratized system. The problem is how the curators are curated.
To a point, you are correct, but most people don't "not care". Most people are unaware and only assume a knowledgeable, experienced editor vets and edits the news. Today, most news has a marketing angle applied to it to attract more viewers and visitors.
The number of people who shared and continue to share "fake news" is a sign (to me) that most don't care about truth and are in fact complicit. These people are selfish, narcissistic, naive, or maybe all of the above.
> Most people are unaware
I just don't buy this based on the content that I see shared.
When corruption (for example) is exposed it should be a slam dunk for both sides of the political debate. However, this is no longer happening, with each side digging in deeper.
Pessimistic and jaded I will admit to, but how will this help with intentional misinformation campaigns? How will this help with naive/malicious people who knowingly share false information because it aligns with their world views? How will this help stop mainstream media, from taking a "rumor" (aka fabricated / militarized news) and reporting on it therefore making it seemingly legitimate?
I think there are people who care. But the real problem is the growing cynical world view that we cannot trust any authorities, and therefore have no way of assessing the truth or reliability of anything we read or hear.
The first step is to educate people this is not inevitable. No, don't trust authorities, but trust your ability to research and verify and spot faulty or unsupported arguments and assertions.
I'm pretty sure the reason this is being prominently discussed on Hacker News today is because of the criticism of "the media" around the Mueller report. But an equal emphasis needs to be placed on praising Mueller for the thorough job he performed, and the credibility and integrity which he invested into this task as an act of service to the nation. I think it will be very difficult now for anyone to argue there was not a massive effort from the Russians to help elect Trump, that there was a disregard and contempt for the law from many people in the Trump administration who have been found guilty of crimes or await trial, but also that there is insufficient evidence to prove Trump personally committed a crime with regard to the Russian interference.
The investigation is a shining example it is still possible to establish facts and evidence with some level of objectivity, if one is willing to be patient and wait for the process to play out before coming to conclusions.
> I think it will be very difficult now for anyone to argue there was not a massive effort from the Russians to help elect Trump
Statements like this are exactly why people are cynical of the mainstream news. There is no evidence of "massive" efforts from Russia to get Trump elected, yet you state it as a fact and that we should trust mainstream news.
The tighter people cling to false narratives pushed by the media, the more you'll see trust drop through the floor. Not only is the media lying to people, but there are large groups of people that will believe anything that fits their political view, regardless of information that comes to light that contradicts it.
"yet you state it as a fact and that we should trust mainstream news."
This is the opposite of what I said, I said we should trust the process Mueller used to reach his conclusions, NOT the media, who got a lot wrong.
Please refute the indictments Mueller issued for Russian nationals working to influence the outcome of the election, to back your claim Russia did not make an effort to influence the election.
Unless you are hung up on what counts as "massive", which is a pointless and stupid debate.
The media clearly got a lot wrong, but rejecting the Mueller report's conclusions is a much higher bar, and I will want to know how you have more information about the topic or more credibility than Mueller and his team and specifics about which parts of the indictments were incorrect.
Taking a look at the Rotten Tomatoes “Top 100 Films of All Time” list [1], I’m not exactly sure why it’s a model to be replicated. It’s extremely obvious that “top” means “popular and/or most advertised.”
It is important to understand what RT is actually rating here. It is not an aggregate of reviewers' selections of the best movies ever. Neither is it some aggregate of how much reviewers liked a certain movie.
RT is more like a list of the movies which a lot of critics rated but the fewest critics disliked. So you get popular and uncontroversial. These are good choices for movies to show on airplanes.
It is actually an interesting rating, as long as you understand what is being rated.
I agree with you, but I don't know if that's the right way to describe them. Looking in the top 10 I can see, to name a few, Moonlight, BlacKkKlansman and Get Out, all items that SHOULD be discussed. I think these also include movies that have a social message that critics agree need to be discussed.
By "uncontroversial" I mean there is broad agreement among critics that this movie is worth watching. This does not necessarily mean the subject matter of the movie is uncontroversial.
Which is a line that changes over time, I might add! It's interesting to think about how each movie on that list got there in a way that's relative to the era it was produced in.
You could choose a cutoff, sort by date, and have a pretty neat timeline of what's considered "safe".
> It’s extremely obvious that “top” means “popular and/or most advertised.”
In RT's case, "top" means films that no one disliked and everybody thought were at the very least OK. The "problem" with genuinely great films is that they can often be divisive and challenging and generate a lot of strong opinions in both directions, and thus they'll never rate as well on RT.
I think that's exactly the case the OP has: there's no lack of divisive news at the moment. It's calm, balanced, quality journalism that we're all missing.
I don't think there's any lack of calm, balanced, quality journalism. Rather, there's a lot of politicized rejection of calm, balanced, quality news. You don't have to look too far to find people who think AP (much less BBC or Al Jazeera) is part of some conspiracy or another.
As Colbert (I think) said, "Facts have a known liberal bias".
Honestly? Yeah, I think so. I'm not saying RT's rankings necessarily reflect this, but I think modern movies in general are a lot less daring than what Hollywood used to produce. Probably after 2008 or so, but near enough to 2010, it seems they stopped taking risks on movies and really started to search out the lowest common denominator to have the greatest shot at mass market appeal, even when that meant gutting the plot and degrading the sophistication of the dialogue. SFX budgets have gone through the roof, movies are filled with explosions and vivid colors, but little that stimulates the mind. Even the exceptions that most would hold up as poignant movies are generally quite mundane, almost always appealing to the most popular public sentiments.
Consider Mad Max: Fury Road (2015) at number 7 on that list, and Mad Max (1979). 2015 has no shortage of explosions and vivid colors to keep anybody glued to their seat and eating popcorn; the movie watcher doesn't have to put forth any amount of effort to enjoy it, they could walk in half way through the movie, look at their smartphone 90% of the movie, and still enjoy the small slivers of the movie they did see because the movie was made to be as widely accessible as possible, even to philistines who don't pay attention to movies they pay to watch.
Mad Max (1979) on the other hand doesn't make it onto this Top 100 list. RT gives it 90% from critics and 70% from audiences, which hints at why. 1979 is not as easy for the casual modern tech-induced attention deficit disorder afflicted movie consumer to enjoy. It's very far from non-stop action; it asks the viewer to pay attention even when things aren't exploding on screen. It's excellent, but the industry won't make another Mad Max like that again in the foreseeable future. They don't take risks like that anymore; not when littering the movie with 500% more explosions is so cheap and is so effective at shoring up the profit potential of a movie.
If you look at the percentages you will notice the older movies have higher approval scores. But the algorithm also weighs the number of reviews, and newer movies have more.
RT's model prioritizes consensus in quality instead of brilliance. Using RT for a best of list will give you good safe movies instead of outstanding ones.
> RT's model prioritizes consensus in quality instead of brilliance. Using RT for a best of list will give you good safe movies instead of outstanding ones.
> I'd say it's a good model for news.
It's not, but then the whole concept isn't good for news. Particularly, a model with the effects you describe would favor news that, irrespective of the ground facts, avoids reaching conclusions that are going to be unwelcome to any significant group.
If "news" involves reaching conclusions, isn't that really "editorial" content?
For example, it's one thing to report what AG Barr's letter on the Mueller investigation says, and maybe also what it doesn't say. It's also "news" to report what various players say about the report.
It's another thing for reporters to speculate what it means in terms of future investigations, or whether the report is "trustworthy," or to reach similar conclusions.
I suspect most people either can't or don't care to tell the difference.
Fox News, for instance is widely regarded to be biased to the right. And their opinion or editorial content absolutely is, often to the point of absurdity. But their actual news content is very well done so far as I can see. Do their viewers distinguish the two? I doubt it. Do their detractors distinguish the two? I doubt that also.
There is a significant amount of respect for Shep Smith and Chris Wallace in predominantly liberal news discussions (like r/politics subreddit) for being willing to criticize Trump and Republicans.
Was there similar respect for these two men before 2015, when they were presumably being no less sincere but circumstances were such that the targets of their ire were not targets shared by predominantly liberal news discussion forums?
That might sound like a rhetorical question, but it's not. I don't know the answer.
If that's the yardstick, one could just as well open the NYT, the WP, the WSJ, or [your favorite reputable newspaper] and call it a day -- no RT clone needed.
A large percentage of the population doesn't consider NYT, WP, WSJ reputable. They are all extremely biased by their politics.
I can't even name a single news source that I'd call reputable and that I trust. That's why I like reading multiple sources on the subject plus comments on Reddit/HN.
Actually, if you just read the news stories, and not the opinion articles from all three of those, you are probably doing pretty well. They can get something wrong once in a while, sure, but it's mostly good.
Television news is going to probably be not quite as good, just because they can't cover as much as a newspaper in only 30-60 minutes. Again, just the news, not the opinion stuff.
If you just take all of it in uncritically, and especially if you don't pay attention to what is labeled news vs opinion, you're going to end up believing a lot of things which probably aren't true.
If you believe this, then Rotten Tomatoes is absolutely the wrong model to follow, because all you're getting is a score that tallies up what the NYT, WP, and WSJ are saying (or the average of the average...).
I think you're pointing to the wrong "they". The "biased by their politics" problem isn't with the mainstream news sources so much as the people who assume that anything that doesn't fit their worldview is a Deep State Lie.
Actually, most of your news consumption should be of the 'good and safe' variety. That is an absolutely necessary requirement for you to properly evaluate 'outstanding and unsafe' news.
If I'm not mistaken, the parent meant major investigative journalism efforts and high-profile scoops, like the Snowden revelations, the Panama Papers, etc.
Note that this list is based on the tomato score, which doesn't directly refer to the movie's quality; it simply says whether the movie is worth seeing.
There is also a rating which is usually hidden by default (click the more info link on a film's page). If you check that then you see, for example, that the number 1 Black Panther's rating is 8.26, while number 2 Lady Bird has a rating of 8.74, so it's a better rated movie.
There should be a top list based on ratings too. Maybe there is somewhere.
A group of my friends who are real film enthusiasts bashed IMDB top 250 in a similar way so they decided to create their own top 250. They had a big spreadsheet, argued and discussed for a long time. The result? Pretty much exactly the same as IMDB top 250.
Really? Their results ended up being close to IMDB? I’ve done similar lists with enthusiast friends. Ours was different enough. There’s no Shawshank Redemption anywhere close to the top, etc.
I’m trying to remember which service it was on. On second thought, looking over IMDB’s top movies, the list probably is sort of similar. It really is hard to get a consensus for top movies. Sure, my own personal list as a media geek will be different. But no one else is really going to agree with my esoteric specific films.
IMDB does have some films that stick out in the top of their list though. Star Wars being so high, LOTR being so high, Inception, Seven, Green Mile, Gladiator.
No matter though you’re right that consensus is hard to get. And most of the films that I listed as out of place are part of fanboy beloved franchises or headed by fanboy beloved directors.
RT is optimized to filter out bad movies (hence the name). The system doesn't work well for "best movies", but does great at telling me what movies shouldn't be watched on a date.
This is because films with more ratings are rated higher. Older films have far fewer ratings by critics, same with smaller films. Add on the fact that each critic's review is distilled to a positive/negative boolean value, and it's clear to see why RT's best films is really just a list of films they are confident you'll like.
"How to Train Your Dragon" (2010), 99% with 206 reviews does not make the list, but many older movies, with lower scores and lower number of reviews do. E.g., "The Wizard of Oz" (1939), 98% with 111 reviews, and "It Happened One Night" (1934), 98% with 56 reviews, and "Casablanca" (1942), 97% with 79 reviews, and "Alien" (1979), 97% with 115 reviews.
It looks like the older a film is, the fewer reviews it needs backing its score to make the list.
It's also possible that the age of the review is taken into account, so that a modern review looking back on a film counts more than an older review, because that would probably be more relevant to someone today deciding what older movies to watch.
For example, take "The Good, the Bad, and the Ugly" (1966). It wasn't very well received by critics in 1966. Now, though, it's at 97% on RT, and is regular included in lists of top films (it used to be on the RT top 100, but looks like it has been pushed off).
Above all else, RT needs their ranking to be perceived as legitimate. The general public knows there are a handful of old movies that should certainly be in any legitimate Top-100 list. Casablanca, Citizen Kane, The Wizard of Oz, etc. These are movies recognized by the general public as movies that are considered masterpieces. If those movies were not in the list, then the general public would not consider the list legitimate. I'm talking about people who've never seen any of those movies, but are aware of their reputations and will dismiss any Top-100 list that doesn't include those movies.
So the consequence of that is clear. If you are tasked with creating a Top-100 movie list, you better tune whatever algorithm you use to include those movies. You'll butcher your beautiful algorithm to ensure those handful of classic movies always rank high. You might go so far as to put those movies in a special list and inject them arbitrarily into your Top-100 list, circumventing the normal ranking algorithm, because having those particular movies in the Top-100 is a design requirement. It's more important that those movies be in the list than it is for your ranking algorithm be fair and unbiased.
Fully half that list is movies from the 20th century, going back as far as 1920. I think it's a flawed list in many ways (especially the lack of non-English language films, missing masterpieces like Tarkovsky's Stalker), but it doesn't seem to be driven by mere popularity or advertisement.
Popularity and marketing are a factor to the extent that lesser-known movies will not get enough reviews to count. Note that some movies get ranked lower despite having a higher percentage. E.g., Citizen Kane has 100% but is ranked several steps below Black Panther, which has 97%. This is because Panther has more reviews, so the confidence is higher.
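RT has never published its ranking formula, but one standard way to rank by a fresh/rotten proportion while accounting for review count is the lower bound of the Wilson score interval. A minimal sketch, with made-up review counts purely for illustration:

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the 95% Wilson score interval for a proportion."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Hypothetical counts: 100% fresh from 40 reviews vs. 97% fresh from 500 reviews.
print(wilson_lower_bound(40, 40))    # ~0.91
print(wilson_lower_bound(485, 500))  # ~0.95 -- the larger sample wins despite the lower percentage
```

Under this kind of scheme, a perfect score backed by few reviews can legitimately rank below a slightly lower score backed by many.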
Well, we tend to review what's new, not something from decades ago. Different process. It's unsurprising that there are more software-countable reviews of contemporary movies.
Black Panther, to use the example some are protesting here, is a perfect storm of recent (max accessible reviews), critically acclaimed, and wildly successful commercially.
I think it's more telling that recent Star Wars movies like The Last Jedi are on the list, but A New Hope and The Empire Strikes Back (which are more important imho) are not.
This is one reason I like the standard set by the Cocaine and Rhinestones country music podcast. He won't talk about 21st century music because we don't have enough distance to tell what's truly great from what's merely popular at this point. A couple of decades of reflection really separates the great from the trendy.
Yes, Alien being missing is an odd sign, especially as it has 99%. (IIRC it used to be the only 100% film, until some critic posted a new review saying it hadn’t aged well)
I would have to agree. I have never taken RT evaluations seriously which is one of the reasons I was so disappointed to hear they would be involved in the project.
RT's value prop has never been "a good score indicates a technically/artistically good film". It's just "how'd the critics we track like the movie?"
If it's over 80%, I can reasonably assume I'll enjoy it. If it's below 30%, I should probably figure out if it fits my particular "critically panned but I'd love it" quirks before shelling out $15 for a theater outing.
I wrote something back in January that tried to do this, called desational[1]. What I did was very basic, but the process went as follows (a rough sketch follows below).
1. Break up news articles into sentences
2. Pay Amazon MTurkers to tag sentences as `news` or `not-news`
3. Train a text classifier based on the results from MTurk
4. Process new articles using the trained model to identify and strip out parts of the article that are sensational.
5. Give the article a percent score and a letter grade about how much "news" is actually in the article based on the percentage of sentences that were identified as news.
I ran it for a few weeks on tech news, but then I figured that the news industry is not a business I really want to be in. The source code is all just sitting up on GitLab.
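For what it's worth, here is a minimal sketch of steps 3-5 using scikit-learn. This is not the actual desational code; the function names and letter-grade cutoffs are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_sentence_model(labeled):
    """labeled: list of (sentence, label) pairs from MTurk, label in {"news", "not-news"}."""
    sentences, labels = zip(*labeled)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(sentences, labels)
    return model

def grade_article(model, sentences):
    """Strip non-news sentences and grade the article by its share of news sentences."""
    preds = model.predict(sentences)
    pct = sum(p == "news" for p in preds) / len(preds)
    grade = "A" if pct >= 0.9 else "B" if pct >= 0.8 else "C" if pct >= 0.7 else "F"
    kept = [s for s, p in zip(sentences, preds) if p == "news"]
    return pct, grade, kept
```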
Very interesting idea. However, having dabbled with Bayesian classifiers myself, I suspect that your decision to break the article into sentences may be a fatal flaw...both the MTurkers and the classifier lose out on valuable context that way. I know because I took a similar approach in one of my own projects, and the classifier could never gain enough 'confidence' to classify things, even after thousands of trainings.
The approach might work pretty well if you broke it down to paragraphs, where some context is preserved, but the more context you provide, the better the classifier will perform, often for surprising reasons. For example, I built one classifier that was performing poorly until I re-trained it without stripping HTML and headers; after that it performed splendidly. Upon examination I found that tokens such as domains and IP addresses in headers and links were hugely influential (and rightly so) in the classification.
Just something to consider. :) Apologies if I incorrectly assumed that your classifier is Bayesian.
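If it helps, here is a rough sketch of how one might test that suggestion empirically: label the same corpus at both granularities and compare cross-validated accuracy of the same simple classifier. The variable names are placeholders, not anyone's real code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def cv_accuracy(texts, labels):
    """Mean 5-fold cross-validated accuracy for a simple bag-of-words Naive Bayes model."""
    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    return cross_val_score(clf, texts, labels, cv=5).mean()

# Assuming the same articles labeled at two granularities:
# print(cv_accuracy(sentence_texts, sentence_labels))
# print(cv_accuracy(paragraph_texts, paragraph_labels))
```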
No need to apologize. I'm actually not using a generative classifier like Bayes. I'm using Conditional Random Fields which is a discriminative classifier. I found that based on my training set, CRF produced higher accuracy.
That being said, I've also thought of way more ways to make the processing of the articles a lot better. I had an idea using an LSTM and another idea which would use an RNN to map the sensational article to a non-sensational article. The only problem with some of these approaches is that they would require someone to read an article and write a non-biased form of the article.
I do like your approach, do you have any results from what you've found tinkering around?
I want to bring a note of positivity here -- while most comments seem to be poking holes in the concept, all also seem to acknowledge the need for such a service to be successful.
This is a domain that I've also put a lot of thought into over the years, so I hope something succeeds. And some things do. Keep in mind that Wikipedia caught a lot of flak for being unreliable or impossible at first, but nowadays they have a robust editorial process for contentious subjects that I'd consider the gold standard for any such practical applications on the internet.
>while most comments seem to be poking holes in the concept, all also seem to acknowledge the need for such a service to be successful.
Speaking plainly: anyone who believes you can get better news through some kind of external meta-review process is a fool. Probably a fool who also thinks the review process should validate journalists that share their political biases, because that's what it's all about these days.
You cannot test quality into the product. News sources are supposed to cross-validate each other in the first place. Clearly, that stopped working some time ago.
The only thing that can save news now is an overhaul of its financing and discovery models.
>Wikipedia caught a lot of flak for being unreliable or impossible at first, but nowadays they have a robust editorial process for contentious subjects
What will happen here is what happens with all sites which attempt to curate news or check facts: people who disagree with the site's conclusions will write it off as propaganda. Using "professional" journalists or "domain experts" at all will be a red flag for people who believe mainstream media and science only serves political and corporate interests.
I have not yet seen one that isn't just a news site that grades whether or not the news is in alignment with some ideology, with the occasional fig leaf tossed out for some particularly egregious error to try to appear even-handed.
The closest I've seen are some news grading sites that are fair for a couple of months, but then become pure ideology sites.
As near as I've been able to tell, discarding all such sites (not just the ones that disagree with your ideology, but "all sites" is a superset of that) is the rational move.
> In order to be vetted as an individual critic, you should meet the following guidelines:
> Written
> • Consistent output for a minimum of two years.
> • Demonstrated film/TV coverage at a publication outside of a self-published website. If you are a self-publishing critic, your reviews should reflect our key values.
Having journalists rate other journalists won't serve as a good filter on the actual quality of reporting. They may pick up on technical errors, but it doesn't address the larger problem with modern reporting, which is inaccuracies or misstatements on technical topics. I'd rather have a review system of SMEs specific to the topic discussed.
For instance, an article written on some AI breakthrough would be critiqued by researchers in that field.
I understand that would be more difficult, as it would greatly increase the number of reviewers required, but on the flip side a single reviewer would likely be more representative of her industry. I frankly don't care what other journalists think about the work of their peers, since they all have the same biases.
Yes, funding is the biggest one. "If it bleeds it leads" is a terrible way to give your readers or viewers a real picture of what is going on in the world.
The interesting thing about rotten tomatoes is that it generates a very different signal than critical review on average.
Your tastes (or worldview) may not match that of movie critics (or journalists) but you might find a surprising amount in common with the median of the set.
With Metacritic, it's been clear that the professionals do a much worse job than the regular users. Even the Oscars have poor correlation with movie quality, in my opinion. A lot of it ends up too pretentious.
What if Daily Mail suddenly has a great article like they occasionally do? Many professional critics still wouldn't give it a good review because their reputation and careers are at stake.
Metacritic exposes the corruption of the video games press as well as the utter lack of qualification (as in, no proper authority compared to any other gamer out there) to write about anything.
What value does Rotten Tomatoes provide? Is review aggregation really the model for truth we want to use? "A lot of people find this reliable/unreliable" tells you exactly that and nothing more. This is an attempt to cram the huge, complex enterprise of discourse into some kind of boolean true/false reliable/unreliable binary it cannot fit neatly into. Disinformation and tribalism are social problems that aren't waved away with technological solutions.
At the end of the day, I'm not interested in startups as truth gatekeepers. It's empty when it comes to movies--it's downright disingenuous when it comes to the news. It provides the veneer of objectivity by being a technological solution, but the enterprise's foundation is still ideological. I trust them as little as any other outlet that can't honestly assess and describe their own viewpoint.
At best, you're adding another layer that will itself become ideologically polarized. If this or similar platforms succeed, we'll see a proliferation of them by people who feel like it is too <ideology> or not enough <ideology>.
Someone’s always negative in these threads, so this time it might as well be me.
First of all: that headline really has an apples and oranges feel to it.
Basically, Rotten Tomatoes worked because it merely curated and aggregated something which already existed in abundance: movie reviews, deriving a consensus from them.
Doing the same for news will depend on something which has yet to exist at any meaningful scale: news reviews (and what would those even be?).
So not only must it prove its value (and neutrality) as an aggregator, but it will also have to encourage the creation of something entirely new (news reviews) at scale, because otherwise it won’t have anything to curate.
And if you have questions about the credibility of Credder, you can just check the Credder ratings on Verrit, which is itself verified by Truther and Accurit, which are backed up by Credder. Solved!
News trustworthiness is not a technical problem and cannot be solved by technical solutions. News: something you don't know. Verifying news: asking people to opine about the trustworthiness of something they don't know.
> "Here's an opaque cardboard box. In it is a banana. Now rate my news story on its accuracy. NO, you are not allowed to open or inspect the box in any way. Also, a lot of you have strong political opinions about bananas. Ready, GO!"
Garbage in, garbage out.
Far too gameable, a fundamentally insoluble problem to begin with.
A crowd-news watchdog which basically works like a Wikipedia + archive.org + IMDB for news reporting. There you can have profiles for journalists or any content writers. You can basically see what a journalist has written about in her/his entire life, with political inclinations deduced from those writings by the crowd.
The articles can be critiqued on the platform using a pre-defined set of rules. If the reporter is found employing straw men, other fallacies, or incorrect reporting based on botched data or statistical mumbo-jumbo, commentary can be added by anyone and verified by other community members.
I think it can keep rampant fake news and/or politically-charged posts masquerading as neutral pieces in check.
This would very quickly devolve into either a hardcore bullying platform, or a banfest of everyone who criticizes the favorite people of the site's owner, his friends, and moderators.
If an argument put forward by an article doesn't abide by journalistic principles and critiques explain why it is bad with coherent counter-arguments, I believe there can be a middle ground somewhere.
It would be much more valuable to supply chains of evidence in news stories. So often, articles are treated as isolated islands with unresolved references and only loose links to previous events. News could be so much more credible if it had extensive underpinning of every assertion.
For example, in Europe, news stories almost always include a map of the local area. But one could do much better with timelines showing how each article is a step along an unfolding story, empowering readers. In theory, readers can go do all the research themselves, but the cost is quite high. That's almost exactly what the WWW was designed for! It could also include links to wiki-like pages for every individual related to the story.
But obviously the big question is the market. Especially these days, people don't want evidence, only emotional crack. People for whom the information is actually financially valuable pay for accurate information in their niche. So the real challenge is to motivate people to focus on reality instead of their outrage addiction. That's a cultural goal and more likely to be a non-profit enterprise. :(
Maybe this project will help the general public start reading reality based media.
But we do have this already. Done by experts. It's called the Pulitzer.
The people in a democracy must have access to information. Once a public is persuaded to reject reality based media, democracy becomes irrelevant. Thus the campaign to denigrate it in the dozen nascent autocracies. Again, I hope this project helps reverse that.
People in this thread do not seem to appreciate the real threat to society that censorship and propaganda pose.
A widely used news rating site will be used to suppress political dissent and alternate views.
Maybe take a look at history and the world. Do you think that Chinese authorities would like a tool installed on everyone's computer that could rate the trustworthiness of a news article or site? Of course they would love it because they could just downgrade any reporting that was critical of the party.
Based on the number of young people who seem to be begging for authorities to control their information, it appears that the China model is coming to the United States.
I get the methodology but, what is the demand for this type of product? RT makes sense because there are too many movies coming out at any one time, and you need something to build a shortlist from.
But news? Are users so spoiled for choice that they need a journalist's opinion on what's worth reading? This doesn't even seem like an anti fake-news play, since the people that consume and share that stuff aren't actually interested in news, just whatever can be used as a stick to beat the political opposition with.
presumably it's marketed as anti fake news but the reality is it will be politically biased towards one faction. That's usually how these things play out.
The thing is, even if it's not biased, as soon as something is published which is inconvenient for one side or the other all the partisans are going to claim it's biased. As they already do.
That's where providing the evidence comes in - you'll never convince your opponents but for skeptics and critical thinkers they can determine for themselves if the evidence supports the perspective.
People who want falsified narratives about reality don't care about these ratings. They will even go beyond and create their own "alternative facts" seal and go with that.
I like the idea but I don't know if I agree with the proposed approach to reviews. Personally, at a minimum, I would like to see the following from every reviewer:
A structured summary: background, objectives, data sources, synthesis methods used, results, limitations, conclusions, and the implications of important findings drawn by the article.
In more detail:
An introduction that presents the problem and the issues dealt with. An assessment of the methods, which describes the research and the evaluation process, specifies the number of studies referenced, and gives a brief evaluation of their validity. A description of conclusions/results and their quality based on identifiable and preconceived academic/professional metrics. A strong summary of conclusions, the limitations of those conclusions, and the outcomes of the procedure used for the research.
I would really love to see reviews of reviews, as well as a reviewer rating system (individual reviewers should have their own ratings), but given the limited number of reviewers that may be asking too much.
Perhaps they should have used a better example than rotten tomatoes.
RT is owned by NBCUniversal and is terribly unreliable as a movie ratings aggregator. At one point it was a very useful site, but now it is useless. Might as well ask disney, nbc or studio execs on which movies to watch. We know that RT has lied and fudged ratings for financial or other purposes. It has the same problem as NewsGuard. It is controlled by industry insiders to benefit themselves and the industry. Would anyone trust an oil company ratings company controlled by oil companies?
Rather than a "rotten tomatoes" for news, I'd rather have a "meta" news or wiki site which has a running list of all the fake news that news companies have pushed out. Make it part curated and part user driven. Let users provide, comment and even contribute to it.
Make it listable by news companies and journalists. So we can see which news companies and which journalists have pushed the most fake news to the american public.
IMHO, journalism is slowly dying. These days you find more "opinion-based" or "make-it-viral" news pieces than proper investigative journalism. Services like this can help but at the same time they can create further segregation of true news vs poor news if most of the critics start growing to be biased.
I agree, I feel many times you can get more solid news from sources like Twitter. Especially how Western media covered Syria annoyed me to no end. To me the news coverage in my country seemed mainly aligned with Western political interests.
But I was happy to be able to follow at least 2 journalists [0][1] that were "on the ground" in Syria for some good solid investigative reporting.
Rather than create an RT for news, just create a better news outlet please. A new form of news would be the more revolutionary startup imo. I think any RT style news review site will end up being regarded much as Politifact is today -- valued only by those who agree with its ratings.
Well, right now in news we get stories with the different points of view, and just the points of view talking past each other. I want news that deals in raw, hard-to-get data, and that monitors that data continually on a day-to-day basis. E.g., for disaster relief in a country, report neighbourhood by neighbourhood how much is being received, by whom, and in what quantity. Basically, it would require a ton of work, and it would be much easier to achieve that level of coverage in a single city at first -- but the goal is to try to be an omniscient source of knowledge, and to identify where knowledge is being blocked or obfuscated.
This is just going to descend into review-bombing chaos (look at what happens to games on Steam, and then think about how much more biased people are about news sources).
A much better idea would be a curated news source registry that does objective fact-checking and scores news sources based on various criteria, with the relevant research/scoring mechanisms shown openly so people can verify the results themselves. Like Snopes meets Charity Navigator (https://www.charitynavigator.org/).
In my opinion, this doesn't really offer any value, as the only way to get a good understanding of the news is to read from lots of different places and come to a view. Using this strategy, consuming a truly "random" site is better than consuming a pre-reviewed site, as there is no bias in selection or outcome.
It's not like films, where each one is independent (except Marvel and DC, obviously :)
A RT for news isn't going to fix the broken medium that is online journalism. Basic research, fact checking, editing/proofreading, journalistic integrity -- all things we take for granted with the news have been discarded in favor of more and more clicks/profits.
If anything, we need a new model for how news is manufactured/distributed in the age of the Internet.
I place no value in review aggregators. They don't tell you anything useful. It's much better to find your own individual sources that you can trust over time, and they can earn or lose reputation with you based on the quality of their reporting. This isn't limited to news.
I definitely don't trust a consensus of journalists to tell me a study is misinterpreted.
That's a step in the good direction, I think curating the news is going to be a very precious service. I'm looking forward to decentralised ways of achieving this though, such as with https://civil.co/
If you like this, then you might also like spidr[0]
"SPIDR is an AI-powered news aggregator that uses text analysis to group similar news together. The aim is to provide fast access to relevant news from different viewpoints."
Thanks for posting this. The news aggregator looks really good. I would love to know how you built it. Which text analysis AI did you use to group the similar news together?
Like academic publications, every news article should be published along with an acknowledgement of funding, conflicts of interest, and maybe a data availability statement. It is a lot easier to judge a news article when you know who owns the publication and such.
How would this pick up on the problem where stories such as the Mueller investigation are hyped and mischaracterized for a long period of time, and in the end Breitbart ends up being more correct on the story than most other news outlets?
We're effectively creating the Rotten Tomatoes for science at scite.ai. We rely upon a deep learning model to classify citation statements as supporting, contradicting, or mentioning.
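Not to speak for scite.ai's actual model, but as a toy illustration of the task itself, an off-the-shelf zero-shot classifier can assign those three labels to a citation statement:

```python
from transformers import pipeline

# Generic zero-shot classifier; illustrative only, not scite.ai's model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

statement = "Our replication failed to reproduce the effect reported in the cited study."
result = classifier(statement, candidate_labels=["supporting", "contradicting", "mentioning"])
print(result["labels"][0])  # most likely label, e.g. "contradicting"
```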
This seems like a broader, more ambitious version of Snopes. If successful, I wonder would it replace it? Or could this be something Snopes could expand its existing community to handle?
Snopes is a great model for why this will fail, though. They'll be dismissed as "fake news" by one side or another on anything controversial.
Hell, I've seen folks met with offended "did I ask for criticism?!" responses when they drop a "no, Bill Gates isn't going to give you a million dollars for forwarding that chain email" Snopes article in someone's Facebook comments.
Dear Credder.com: thank you for using a penultimate "e" before the final "r". Such a breath of fresh air! (Was it because Creddr.com was already taken? ;)
Isn't this the same crap Elon Musk was talking about when the media was roasting him for unsafe conditions at Tesla and calling rescue divers pedophiles?
> That even happened on RT for the Captain Marvel movie because of the female heroine, many "snowflakes" got offended and gave a movie that wasn't even out yet bad ratings.
Alternatively, they were using the "Want to See" feature on Rotten Tomatoes exactly as it was intended to be used. People didn't like aspects of the marketing campaign for Captain Marvel and made their opinions known using a tool Rotten Tomatoes provided specifically for that purpose.
There are many female-led films listed on Rotten Tomatoes that have very high approval fan ratings. By dismissing the public reaction to Captain Marvel merely as trolling by snowflakes, you're actually proving the point many people on this thread are making about partisanship and tribalism in news reporting.
It's strange though... there are so many movies coming out all the time. Way more than you could ever see, even if you wanted to. What drives a person to go to RT to explicitly say "I don't want to see this movie". They don't do that with all the other movies they don't want to see. It's all about politics, and has nothing to do with the quality of the movie.
I haven't seen it, and probably won't, but I have weird taste in movies anyway.
Is politics not a reason somebody might decide to not see a movie? If Mel Gibson produces a great new movie, then during the promotion gets drunk again and makes another racially charged rant to the police, wouldn't that dissuade people from wanting to see the movie? I know plenty of people who never forgave him for the first rant!
Why do movies use famous actors anyway? Not all movies do of course, but many do. Why? Because they're cashing in on the public liking and recognizing that person. Well what's going to happen if the public recognizes your celebrity actor, and dislikes them, for any reason? Isn't that just the other side of the same coin? It has nothing to do with the quality of the movie, but it's the nature of the industry nevertheless. Actors who are well liked for any reason are better for box office performance than actors who are disliked for any reason.
Putting Chris Pratt into your movie does not improve the artistic qualities of your movie, but it almost certainly will make the movie more successful in every measurable way.
Absolutely people take politics into account when deciding to see a movie. I don't think there is anything wrong with that at all.
I suspect that RT was surprised when that feature was used the way it was. Politics was probably not what they were thinking about. They were just trying to provide a review aggregator. I doubt they really thought through the implications of what "I want to see this movie" might mean.
How could they fail to realize that a measure of whether people wanted to see a movie is not the same thing as reviews of the movie? I think the possibility they failed to consider is that they might find themselves in a political disagreement with the users of their site. Still, it's a wonder how they could have failed to anticipate that after the debacle surrounding the recent Ghostbusters movie.
You present this idea whilst using a racist (and sexist) perspective in your own posit. Can you not see how that will actually hurt your position, not help it?
Your diatribe aside, brigading should be a primary concern for the OC's idea.
I think I posted this somewhere else but here it goes.
I want a system that rates journalists not news. I want to see my previous ratings of this journalist and be able to base it on that.
I want the ability to rate a piece's objectivity.
I want to see ratings from users who have been proven to know how to rate objectivity.
I want a system that uses Machine Learning to break me out of my information silos by promoting the opposite opinion, and saying as much: "Because you like Capitalism, here's an argument for Socialism."
I want a system that makes ratings less of a popularity contest by limiting them in a meaningful way (this is pie in the sky; I'll take the first two).
I don't think it's possible to create the things I want, but that's what I want. Not Reddit for news. I've seen how that works.
This seems far too complicated. The UX of Dissenter (boo, hiss, I know, control yourself) is simple by comparison, doesn’t involve another destination site, and doesn’t involve a committee deciding whose reviews are more worthy of consideration than others. What I’d prefer is a ranking system of journalist merit: does this story cite anonymous sources? Is this story original reporting? Does this story simply repackage someone else’s original reporting with an editorial headline? Is it news, editorial, or opinion? What corporations advertise with the parent company? These are objective measures of newsworthiness, and Credder would track none of them by default.
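A hypothetical sketch of how those per-article signals could be tracked; the field names and weights are invented for illustration and are not anything Credder or Dissenter actually does:

```python
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    anonymous_sources: bool       # does the story rely on unnamed sources?
    original_reporting: bool      # or does it repackage someone else's work?
    editorialized_headline: bool  # headline editorializes beyond the reporting
    label: str                    # "news", "editorial", or "opinion"

def merit_score(a: ArticleSignals) -> int:
    """Crude additive score; the weights are arbitrary placeholders."""
    score = 0
    score += 2 if a.original_reporting else 0
    score -= 1 if a.anonymous_sources else 0
    score -= 2 if a.editorialized_headline else 0
    score += 1 if a.label == "news" else 0
    return score

print(merit_score(ArticleSignals(False, True, False, "news")))  # 3
```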