
It’s shameful that one of the world's best journalistic sources didn't even bother to reach out to Signal for comment on a story they ran about them

I feel like a lot of today’s mistrust of news stems from publications not verifying sources, or checking evidence, or at least scrutinizing what others are saying.

Wish we could fix that




Related: the Gell-Mann effect. You read a newspaper story on a topic about which you are knowledgeable, and get mad at how wrong they've got everything. Then you turn the page and read the next story, on which you are not an expert, and take it at face value.


In my circles this gets me into arguments all the time. Everyone reads a book, everyone but me likes it. I point out how one section I know a lot about is deeply wrong, everyone else says versions of "well other than that part it's a great book!"

I wish LW-style rationalist circles didn't attract such obnoxious people, because I don't know of any other collection of people who recognize and try to adjust for things like this.


Rationalist circles make just as many horrific logical errors. If you want to dig into this stuff, philosophy has dug deep, and frankly all they came up with is that all sources should be treated as suspect. That is, even if a source got the stuff you know about right, you should still be skeptical. And yes, that includes purely logical reasoning about math.

It’s not a satisfactory answer, but ¯\_(ツ)_/¯


> Rationalist circles make just as many horrific logical errors. If you want to dig into this stuff philosophy has dug deep, and frankly all they came up with is all sources should be treated as suspect.

Sharing some anecdata, because I enjoy when others do.

I've got some very dear friends born in a highly prescriptive culture where credentialism runs deep, being an intellectual is extremely valued (people are often denied leases due to their non-academic status), and authority figures and strangers get the "Mrs." and "Mr." treatment for years after you've met them. Early on in our friendship we would discuss and rant about everything and anything. We would often talk about my attitude of trusting no one and how I believe there are, truly, no experts in the colloquial sense of the word. We've often had late-night discussions about my "stubborn and deeply misguided attitude", about how it's wrong that the only experts I trust are those who deeply distrust their own abilities and instincts and continuously attempt to disprove their own claims.

Shockingly, they defended the aforementioned views until many of their country's top virologists, and they themselves, often and loudly, shared their opinion on covid-19 around March, April and May: covid-19 was nothing special, national mortality trends were unaffected, their country was rich and had a high number of ICUs, it was basically just like the flu and nothing to worry about. Then, as we all know, shit hit the fan and we saw cooling trucks being turned into mortuary vehicles. I simply told them what I've always told them, and for the first time since we've been friends they nodded with some sadness in their demeanour: trust no one, acquire evidence, make your own judgments, try to find hidden risks, and the only so-called experts worth trusting are the ones who, in their own words, publicly and candidly express their skepticism towards their very own claims and try to help you reproduce their methods and conclusions.


If it's not too personal a question, I'm curious which culture you're talking about? My guess is French?


From a friend of mine who says she never had a problem renting a flat as soon as she says she's a "Dr." This could be Germany :-)


I think people who are too in love with their own rationality leave themselves vulnerable to some pretty ridiculous beliefs that common sense would normally guard them against. A suspension of common sense is certainly important sometimes, because common sense can lead you astray, but at other times it will save you a lot of time and mental energy. A classic example from antiquity is Zeno's paradoxes of motion. It was not immediately clear to people trying to rationalize about these 'paradoxes' how they could be resolved. But somebody who is slightly less in love with their own rationality, who trusts their own common sense, would not waste their time wondering about a supposed paradox that can be refuted by standing up and walking around the room. Failing to balance rationalism with common sense can get you stuck in solipsist traps or worse.


I think the point with Zeno's paradox is less "motion can't happen" and more "obviously this is wrong, so where is my mistake?"

Turns out the answer is math ;) More specifically, stuff which didn't exist back then.


I'll rephrase my point then; I think Less Wrong people spend a lot of time trying to figure out how many angels can dance on the head of a pin. Or: They should stop sniffing their own farts, pull their heads out of the clouds, try not to get lost up their own asses, etc.

Anyway, according to Wikipedia: "Some mathematicians and historians, such as Carl Boyer, hold that Zeno's paradoxes are simply mathematical problems, for which modern calculus provides a mathematical solution.[6] Some philosophers, however, say that Zeno's paradoxes and their variations (see Thomson's lamp) remain relevant metaphysical problems.[7][8][9]"

So it seems like a few people are still wasting their time with this.


I have bad news about philosophy circles; I agree that some philosophers have dug deep, and most people in philosophy circles (even those who have read widely) are much worse at this than the LW folks.

(I found LW/rationalism through philosophy circles, incidentally)


There’s got to be a joke about the discovery that nobody has a clue failing to generate experts who realize they don't have a clue.

Sadly, I am not a comedian.


It can be perfectly rational to conclude a book is great despite a flaw in some part.

You can't simply extrapolate from finding one mistake in a book to declaring the whole book wrong. Likewise, you can't extrapolate from finding a few bad publications to "everything is crap". I'm not trying to claim journalists are all good or even consistently good. But believing they are always bad is the same logical fallacy as believing they are always good.


It's not "believing they are always bad". It's "being unable to determine whether they are good". If you simply don't know enough to evaluate a source, it is correct as a matter of epistemology to view its contents with a large dose of suspicion. Any historian will tell you this; a pretty sizeable aspect of the study of history is the weighing of sources, working out why they said what they did, what they're not telling you, and what they're wrong about - because every source is incomplete, biased, and contains simple factual errors. I might sit back from a book and go "wow, that was good", and it's possible for a book to be "great but wrong" - but I can only reasonably conclude that a source's contents are correct if I have got more evidence for its thesis than just that one source.

So the situation is bad enough when I am encountering a new source. But if I'm already familiar with a source, and it's consistently wrong about the things I know, then I can only be even more suspicious of everything else they say; and sadly, an awful lot of publications do fall into that category. Finding a mistake in a book does make it more likely that it contains other mistakes, and it does make any given fact in the book more likely to be wrong.


I don't think it's rational unless and until you, or people who are knowledgeable in all the other topics covered, can assert that the other parts all check out.

If you can only check one thing and it's wrong, then it is entirely and clearly irrational to assume that everything else is correct.

It's not fully defensible to assume that it's 100% wrong either. Just by plain statistics you may assume almost anything must have some correct portion.

What's rational is to make as few assumptions as possible, and where guessing or informed guessing is necessary, use only the information you actually have. That means, back to the beginning: if you can only evaluate one part, and it has many errors, then that is the only thing you can use to make any assumptions about the rest, unless and until you get actual credible assertions about the rest from someone else who is themselves credible in that domain.


They're not talking about extrapolating from one mistake though. They're talking about drawing one observation from a population with an unknown distribution. At that point that single observation is the only estimate of the quality of the entire book. If you decide not to sample further (that is, find other chapters in the book in which you are an expert) then the conclusion that the book is trash is the only rational one.

(If you want to read more about this then google "single observation unbiased estimator")
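A minimal simulation of that idea (the numbers are illustrative, not from the thread): a single observation is an unbiased estimator of the underlying mean, even though any one draw is very noisy.

```python
import random

random.seed(0)

def single_sample_estimate(mu, noise=1.0):
    """One noisy observation of the book's true quality mu."""
    return random.gauss(mu, noise)

mu = 0.3  # hypothetical true quality of the book (unknown to the reader)

# Average the single-sample estimate over many hypothetical "reads":
# unbiasedness means this average converges to mu...
estimates = [single_sample_estimate(mu) for _ in range(100_000)]
mean_estimate = sum(estimates) / len(estimates)
print(round(mean_estimate, 2))  # close to mu

# ...but any individual estimate has standard deviation `noise`,
# which is exactly why a one-chapter verdict is so uncertain.
```

Unbiased does not mean precise: the estimator is correct on average, but the spread of a single draw is large, which is the tension the replies below pick at.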


> If you decide to not sample further (that is find other chapters in the book of which you are an expert) then the conclusion that the book is trash is the only rational one.

That's exactly the wrong kind of extrapolation I'm pointing out. If you found a flaw in one chapter, the book can be trash, and the chance of it being trash is definitely higher than without any other data, but whether the absolute chance of it being trash is sufficiently high can only be determined based on the nature of the flaw, and even then, the confidence in that conclusion can't really be that high.


The claim isn't really that the book as a whole is necessarily good or bad. It may be expressed as such, but Crichton was talking about the habits of readers.

Determining whether a source is trustworthy is an ongoing process: you're seeing one portion where you have some expertise or evidence and then another portion where you don't. You have to extrapolate from what you can validate to whether what you can't is valid.

The complaint behind Gell-Mann amnesia is that we're too quick to dismiss clear evidence that a source is untrustworthy, that we have a bias towards trust.

To put it in perspective, let's imagine the opposite, call it Crank Awareness. If you see a document and it's laid out poorly, uses weird boldface, all caps, colors and blinking text, you'll, at the very least, get the impression it's written by a crank before you even start reading.

> But believing they are always bad is the same logical fallacy as believing they are always good.

Again, this comes back to the problem of strict logical reasoning vs. treating trust as a larger process. We have limited resources to evaluate sources, so we're stuck making fallacious generalizations when we need to make a decision based on our sources. What we want in the long run is to incentivize authors to exercise care and diligence.

That would indicate we should punish known bad information by deprecating the authors. So another way of reading Gell-Mann amnesia is that readers aren't doing this. They're seeing stuff that they know is wrong and continuing to patronize the publications regardless, thus authors can be untrustworthy and still collect a paycheck.


I'm not suggesting that one declare the whole book wrong. I'm suggesting the book should be treated very skeptically.


> I'm suggesting the book should be treated very skeptically.

Well then why did you write about liking the book and whether it's good as the cause of disagreement? That's a very very different thing from whether the book is an accurate source!

Even a specific section you prove to be wrong can still be good.


Perhaps we should reframe how we approach new knowledge?

"That was a super interesting book, it brought up lots of ideas and explanations, lets discuss what parts if any we think are true." !

Folks call out information and facts as fake news, applying not healthy skepticism but a wholesale rejection of knowledge, while at the same time falling for hoaxes.

The whole country needs to take a gap year and learn the scientific method.


They are actually extremely rigorous about this in lower Swedish education. Source criticism is part of the curriculum. It becomes evident how much of a difference it makes when comparing the behavior of the elderly population at large (where only those who went to university were taught it) to those who have had this from the start.


This seems like something different to me. Most books, TV shows, movies, etc that do anything with technology above the most basic level usually get at least one thing incredibly wrong. It's not necessarily a deal-breaker though - almost all stories are written to present and advance an interesting plotline, and almost all of them gloss over various inconvenient realities for the sake of a better story. Consumers can usually suspend their disbelief and accept the story for its own sake.

Problems only really come in when ignorant people read too many stories and start to think that how they present things is actually real. And some people who know a particular area very well may find whatever the story does too patently absurd to suspend disbelief.


> I point out how one section I know a lot about is deeply wrong, everyone else says versions of "well other than that part it's a great book!"

Sometimes books about magical properties of crystals get their geology right. Sometimes they get it wrong.

If I apply your superficial filter I essentially give up my ability to convey the difference to your friends.

If I ignore your filter and pay attention to the entirety of each book, it's trivial for me to help them separate wheat from chaff. (Or at least chaffy-wheat from pure chaff.)


Considering that Matticus there knows about LW and Rationalism, it seems obvious he isn't saying to zero out your coefficients.

Imagine I give you a dictionary purporting to contain descriptions of the referents of the following words: (I assume you know what a sprint is, but not a bilparyoti or a zambungar)

* sprint - to run at a rapid pace for a short distance

* bilparyoti - a kind of bright blue butterfly, found in Congo-Brazzaville

* zambungar - a muddy colour, specifically that created when a landslide enters a clear river

Now, based on knowing that 'sprint' is 'correct', what is your probability, posterior to being supplied the dictionary, that you know what a 'bilparyoti' references?

Now, imagine that I tell you that the 'bilparyoti' is wrong and you are able to be convinced that the 'bilparyoti' is wrong. Is your prior for the accuracy of the 'zambungar' reference the same as the posterior after being supplied the information about the 'bilparyoti'?

Matticus, there, presumably laments the fact that the zambungar accuracy probability has not moved. Rationally it should move towards zero. By varying amounts, certainly, but towards zero. With his friends, P(zambungar_correct | bilparyoti_wrong) = P(zambungar_correct|bilparyoti_unknown), truly a situation worthy of wailing and gnashing of teeth.
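The update described above can be sketched numerically. This is a hedged toy model, assuming the dictionary is one of two hypothetical types ("careful" or "sloppy") with made-up accuracy rates; the point is only the direction of the shift.

```python
p_careful = 0.5                          # prior that the dictionary is careful
acc = {"careful": 0.95, "sloppy": 0.5}   # P(an entry is correct | type), assumed

# P(zambungar correct) before any evidence about bilparyoti:
prior_correct = p_careful * acc["careful"] + (1 - p_careful) * acc["sloppy"]

# Bayes' rule: update P(careful) after observing that bilparyoti is wrong.
p_wrong = p_careful * (1 - acc["careful"]) + (1 - p_careful) * (1 - acc["sloppy"])
p_careful_post = p_careful * (1 - acc["careful"]) / p_wrong

posterior_correct = (p_careful_post * acc["careful"]
                     + (1 - p_careful_post) * acc["sloppy"])

print(round(prior_correct, 3))      # 0.725
print(round(posterior_correct, 3))  # 0.541 -- lower, as it should be
```

Whatever numbers you plug in, P(zambungar_correct | bilparyoti_wrong) lands below P(zambungar_correct | bilparyoti_unknown); a reader whose estimate doesn't move at all is the failure mode being lamented.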


> With his friends, P(zambungar_correct | bilparyoti_wrong) = P(zambungar_correct|bilparyoti_unknown), truly a situation worthy of wailing and gnashing of teeth.

Granted. But OP seems to be at the other extreme which is equally problematic:

> Everyone reads a book, everyone but me likes it.

That strongly implies a boolean value judgment to me.


Nah, he's right. I often just end up with "I found parts of this persuasive and appealing, but I can't trust it because those parts are things I don't know much about, and in the parts I do know a lot about, there were serious issues." It's not that I think I know the rest is bad, too; it's that I know I don't know.


I think that's just a shortcoming of terse communication. I would wager he's not zeroing out the coefficients considering the context. Maybe $10 at 5:1 if there were a way to ensure fairness on the bet.


Thanks for putting waaay more effort into that response than I would have haha


Most definitely a great deal of akrasia involved there on my part.


LW seems to stand for Less Wrong: LessWrong (also written Less Wrong) is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.

https://en.wikipedia.org/wiki/LessWrong


What is an "LW-style rationalist circle"?


I assume LW here stands for Less Wrong as in https://www.lesswrong.com/


"LW"?


"Gell-Mann effect" I had never heard of it. Looked it up. Thank you. I especially love the way it is named...


Thank you for this! This happens to me all the time, e.g. when they label planes in pictures wrong (e.g. mixing up F-16s and F-18s). And every time I wonder what they get wrong where I'm not knowledgeable.


I like this meme, it is fun and so on, but I have to admit, it is not really a thing: I am a professional physicist and journalists at respected outlets are pretty good. NPR, PBS, NYT all do a pretty great job at science journalism. More often than not the rare complaints from professional scientists are more self-aggrandizement lacking awareness of the pedagogical constraints of popular press.


You lost me at "NPR does a pretty great job at science reporting". In fact they are a punchline to the joke about how incredibly bad science reporting can be.

https://statmodeling.stat.columbia.edu/?s=Npr


https://statmodeling.stat.columbia.edu/2016/05/05/npr-bites/

> In all seriousness . . .

> I have no problem with NPR. NPR is great. That’s why I’m bummed when it falls for junk science.


Right yeah. NPR great. NPR science reporting. Terrible. It's a shame. Sadly it's a strong pattern not a one off, hence the NPR search link on that site so you can satisfy yourself that "as heard on NPR" is anything but a strong signal of quality research.

And if we keep repeating that they'll likely fix it. It's absolutely fixable. NPR are not a total write off.


That website has logged 10 complaints in 13 years, most of them in 2016. This is a bit of a selection bias: yes, if you select the terrible examples, 100% of them will be terrible.

On the other hand, I listen daily to their flagship science show "Shortwave" and science-adjacent shows like "The Indicator" and their general news updates, and they are pretty great both in terms of being pedagogical and in not being misleading when they simplify something.


>This is a bit of a selection bias

Same is true of criminal courts. "If you select only the examples of fraud of course my client looks like a fraudster."

Seriously if NPR had good science reporters why on earth aren't they tapping their colleagues on the shoulder and saying "Don't" or maybe "You need to correct that, it's NPR's reputation, not just yours." I mean, on the standards of popular journalism they're pretty bad. And NPR is the punchline when someone publishes research by media release fitting some model to noise and booking their TED talk[1]. NPR are always there.

[1] Some people think TED is good science too. Maybe it's not all garbage? I don't know, my search is not exhaustive once the pattern is established.


Or the opposite happens.


Nitpick: It's Gell-Mann amnesia, Michael Crichton's name for Murray Gell-Mann's amnesiac behavior, not an effect discovered or promoted by Gell-Mann.


It's not a nitpick. The "Gell-Mann" in the name elevates a novelist and pundit's ideas to those of a Nobel physics laureate. It's worth pointing out!

It's also a frustrating argument. What does it actually say? "Journalists are often wrong". No shit! That's why it's called "the first draft of history". Meanwhile, everybody is often wrong. But we don't have a "Djikstra amnesia" to describe all the times we fall short of the ideals of our own discipline, but forget about that when holding other people to our notional ideals of their disciplines.


It's not that journalists are often wrong, it's that journalists often say stuff that is obviously wrong to anyone who knows anything about the subject. And that indicates sloppy investigation, like not contacting Signal before reporting this story.


The question is are they more wrong than any other segment of the population that writes for consumption?

Journalists, unlike say bloggers or marketers or think tank authors or pundits, have a fairly robust ethics/rules system about how to publish. Does it fail them at times? Of course, but do they fail at a higher % than other outlets?

I’ve never seen any actual evidence to suggest that. That there is a pithy quote from a Nobel prize winner isn’t interesting.


The point is that journalists tend to be a lot less reliable than the reading public seems to think they are. Most people would have enough sense to take information with a grain of salt if the source is "I heard it from a guy who heard it from a guy." Even if the guy is reliable, who is to say that the other guy is reliable?


Journalists aren't random people who have "heard it from a guy who heard it from a guy". They do reporting: they call people, develop sources, and work with fact checkers. That doesn't make them right all the time, or mean you should read them uncritically. But, of course, the bullshit meme is that you shouldn't take them seriously at all. It's premised on holding journalists to a standard we don't hold doctors to, let alone software developers.


Depends on the subject. In software security they're probably more wrong than the average security blogger.


> there is a pithy quote from a Nobel prize winner

There isn't even that! The Nobel laureate is just part of the schtick.


> But we don't have a "Djikstra amnesia" to describe all the times we fall short of the ideals of our own discipline, but forget about that when holding other people to our notional ideals of their disciplines.

Perhaps we should?

(And then, of course, someone can post:

> Nitpick: Its Djikstra amnesia, tptacek's name for Edsger Djikstra's amnesiatic behavior, not an effect discovered or promoted by Djikstra.

when it eventually gets rounded to "Djikstra effect".)


I will commit to correcting anyone who attributes a "Djikstra amnesia effect" to Djikstra.


Is it Djikstra’s Amnesia forgetting his name is Dijkstra?


Embarrassingly, I double-checked the spelling on "Edsger", but still didn't catch that.


That too!


The strongest interpretation of the idea expressed by Gell-Mann Amnesia is "You set your posteriors to be different from your priors based on journalism more than you should be and you do not alter this difference upon seeing evidence that should indicate imperfection in journalism". i.e. it warns you that you are likely over-weighting journalism.

Fortunately, most journalism is useless for information transmission and likely rarely alters behaviour - the latter having been chosen first with the journalism being used as justification. To that degree, the fact that most journalists are usually low quality information sources is not particularly dangerous since you never use them to do anything different from what you'd do.


Journalism-as-confirmation is only used by people who have a strong conviction.

There are lots and lots of people undecided about an issue, reading some journalistic drivel. They are most easily influenced on that undecided issue.


I propose to refer to facile references to "Gell-Mann Amnesia Effect" here on HN as the "Gell-Mann Amnesia Affectation"


That’s because of the Baader-Meinhof phenomenon.


Crichton expressly noted that he picked the name ironically because people would implicitly trust it if he named it after Murray Gell-Mann.


I’m just as concerned about:

> According to one cyber-security expert, the claims sounded "believable".

One anonymous source at the top of the article bolsters the claim, while all the experts who were willing to attach their names to their words temper the article's claims towards the end of the article.


A post on the 2600 group on fb said "seems legit".


They couldn't even get a single named expert to comment on record?


BBC article states "BBC has contacted Cellebrite and Signal for comment"

https://www.bbc.com/news/technology-55412230


The Signal blog post says from the beginning that:

> Since we weren’t actually given the opportunity to comment in that story

So it may just mean they were not given enough time to respond before publication, or even that they were contacted post-publication. In the race to front page "breaking news" the responses are expected to be published as updates to the story.


Yeah, all that is needed is to send a quick email to a company, then publish. There are no standards for how long one has to wait.

I imagine this is quite common behavior with journalists when they want "breaking news"


I've been on the receiving end of these calls. "This is so and so reporter from X News and we're running a story about X. Please call us back before 2pm so we can get a response."

It's 1:30pm.


And the original blog post by Cellebrite does claim to have 'cracked' the encryption, doesn't it? So the BBC's headline isn't exactly inaccurate.


So what? There's plenty of dubious blogs claiming all manner of things. The actual accomplishment isn't noteworthy (they wrote a scraper). If it were encryption-breaking, it would indeed be noteworthy, and thus worthy of fact-checking or at least waiting for Signal's rebuttal of "lolwut, no, that's nonsense".

"Accuracy" is moot here: BBC's headline is misinformation.


No, it didn't. It specifically stated that access to the Android keystore was needed to get at the stored Signal data.


There's a blog post claiming the Earth is flat. Would the BBC publish an article on it outside of satire?


How and when they contacted cellebrite/signal is important, but even when you see "refused to comment" there really isn't a timestamp for when contact was attempted/initiated. Is there a reason for this?


Additionally shameful:

- they haven't printed a retraction yet
- the technology reporter in question doesn't understand the tech well enough to recognize the error, even when somebody states it explicitly (https://mobile.twitter.com/janewakefield/status/134141965721...)


Hey, that's my tweet :)


It isn’t shameful. It is yet another indicator that the journalism industry is creating the intellectual equivalent of Animal Crossing.

It’s a time waster that entertains- not a reflection of truth. How could any business be considered “the best” in its field and create such a shitty product? Simplest explanation: they are not trustworthy and never were.


This is not due to incompetence. This is done with the objective to influence the public opinion of cryptographic tools so that people will stop using them. The system has no way of actually breaking encryption, that's why it is focusing on the other ways to circumvent it - one of them being making most people (non-experts) believe that it doesn't work anyway so they will stop using it.

This is a focused campaign; this is not just a random occurrence of incompetence.


> This is a focused campaign

What's the evidence for this?


There is a thin line between modern journalism and click-bait ad farms.


I used to work at a traffic signal company. We got bad press whenever somebody "hacked" our infrastructure. It was always a super sophisticated "default password attack".

I tried to get the default password changed to unique-per-unit randomly chosen uuid, just to be so obnoxious as to convince the customer to set their own password. Encountered resistance of the "but then they'd need to be retrained" sort.

I wish I could say we didn't deserve the bad name, but we kinda did.
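A minimal sketch of the change described above (the function name and label workflow are hypothetical, not from the original system): generate a cryptographically random default password per unit instead of one shared factory default, so a leaked manual can't unlock every deployment.

```python
import secrets

def per_unit_default_password(n_bytes: int = 16) -> str:
    """Generate a cryptographically random per-unit default password,
    printed on the unit's label at manufacturing time."""
    return secrets.token_urlsafe(n_bytes)

# Long and random enough to be annoying to type, which is the point:
# it nudges the customer to set their own password on first login.
pw = per_unit_default_password()
print(pw)
```

`secrets` (rather than `random`) matters here: it draws from the OS CSPRNG, so a password from one unit tells an attacker nothing about any other unit's.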


i have more faith in the onion than bbc, they are from the best newspaper


The BBC isn't a newspaper.


man i dont care about your technicals


The onion is


> I feel like a lot of today’s mistrust of news stems from publications not verifying sources

First, a nitpick: that is a thought not a feeling: you didn't state how it made you feel, you stated an idea.

Moving on... That's not why people mistrust the media. They mistrust the media because they are told to by politicians seeking to discredit journalism and control the narrative.


Double nitpick: People often colloquially use "I feel like" in place of "It is my opinion that" and it doesn't even strike me as literally wrong to describe an opinion as a feeling...

And do you think this episode is evidence of trustworthiness on the part of the BBC?

I agree with you that politician sow distrust, but poorly researched pieces are the fault of no one but the journalists.


I don't think this impacts the BBC's trustworthiness one lick. They reported a story objectively, stating each side, without their own opinion. Even though one side was acting in bad faith. Is the BBC out of their element here? Absolutely. But are they engaging in lies and deceit, like Fox or OANN? Hardly.

> poorly researched pieces are the fault of no one but the journalists

Agreed. They should have consulted with HN, look at all the experts here.

> literally wrong to describe an opinion as a feeling.

I don't know the gender of the OP, but one area where therapists and psychologists struggle with male patients is that they often say "I feel" instead of "I think", and when asked what the feeling was, they struggle.

You can't feel an opinion. You can have one, and it can make you feel suspicious, or doubtful, or confused, because those are feelings.

This is a good example of how words used incorrectly are symptomatic of a deeper issue: male avoidance of feelings beyond hungry/angry/horny. It's not exclusive to men, but it is very common.


I don't think it's reasonable to conclude that there is no influence directing the overall thrust of stories, choosing which stories or which versions of stories, or which writers get published, any more than to conclude that every single story was scripted by the Illuminati or the Rothschilds.

We HAVE seen enough evidence to know that much just by tabulating stats and things like that John Oliver bit showing all the tv news stations using the exact same supposedly off the cuff remarks.


I approve of your nitpick.

I am a counterexample to your main point. I distrust most media sources because I've not once seen one present rigorous, transparent, verifiable research about a current event of interest to me.

I think I've seen every media source I've followed get significant facts wrong about things I know well.

I try to fight back against Gell-Mann amnesia in my own head.


I think I put this too strongly, in retrospect.

By "distrust", I just mean that I do not default to trusting any news source I know of. I try to take what they say as plausible but not likely the complete picture. I also keep an eye out for what their overarching narratives tend to be and try to compensate for that in my own interpretation of events.


> I distrust most media sources because I've not once seen one present rigorous, transparent, verifiable research about a current event of interest to me.

"Because I've never seen it, clearly it does not exist."


"I have no reason to believe this, but it clearly must exist"

That's called faith and religion.

You cannot rationally make decisions based on anything other than what you know and have seen or can reasonably project from there.

I.e., I can't see an atom with my eyes, and I can't duplicate all the research of history myself, but I can see some things with my eyes, and I can duplicate some research myself, and I can follow a reasonable, logical, defensible chain of trust from what I can directly prove to myself, to proofs I can accept indirectly, and distinguish those from fairy tales.


> "I have no reason to believe this, but it clearly must exist"

Except I never said that, whereas the previous post had a congruent summary of OP.

Strike one.

> You cannot rationally make decisions based on anything other than what you know and have seen or can reasonably project from there.

True. But that isn't what we are talking about, you ventured into strawman territory all on your own. The person was claiming they've never seen a truthful source, this is called the No True Scotsman fallacy. We're not talking about god, we're talking about the fact that some journalism is unbiased. There is much of it out there, but OP seems to think there isn't, or has convinced him/herself due to their own desire to not be wrong.

Strike two.

> can follow a reasonable, logical, defensible chain of trust from what I can directly prove to myself, to proofs I can accept indirectly, and distinguish those from fairy tales.

You've demonstrated that you aren't very good at following a "logical chain of trust" because you've reasoned to yourself, poorly, something that wasn't even being discussed.

Strike three.

First time playing with logic?

It never ceases to astonish me how people who throw around the terms "logic" and "rational thought" really, reeeeeally don't know how to deploy either.


I did not even imply it does not exist, let alone say it. That would be a truly massive claim.

However, if I've yet to see it, I don't know why I'd trust that it's happening regularly just outside my view.

Until I see evidence that it exists and is in fact the default state of journalistic reporting, I am not going to trust that it is.



