> Is it really that bad for Levitt to take published research at face value, trusting that gatekeepers upriver have done their due diligence?
Yes, it is. There is a reason the reproducibility/replication crisis, especially in the social sciences, is such a hot topic. The podcast doesn't need to "meet the high standard of peer review", but there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up before essentially saying "Wow, your results are so unexpected! OK I'm pretty sold!"
>there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up
Is that expected of Freakonomics? I don't know how much rigor they apply to their interview subjects, nor how much subject-matter expertise they have when it comes to pushing back.
They like to entertain crazy theories, but there’s a cost, as has been observed multiple times in the past. I do still like to listen to Steven.
I think the whole problem is that he presents the podcast as very factual, data-driven and scientific, while on the other end he just lacks rigour in some cases - like this one.
Basic research has become rare in journalism, but they should either stop pretending to be data driven or do their homework.
The Freakonomics brand leans more into the info side of infotainment. Having listened to the show, they also lean into their academic backgrounds, so yes. This isn't WTF with Marc Maron, but even he famously excused himself to do some research when he found out he was interviewing the "other" Kevin McDonald.
Umm, of course? Shouldn't that be expected of any interviewer? I mean, they invited a guest onto their show specifically because they keep coming up with unexpected results - shouldn't they have done at least a little bit of their homework to see why a gaggle of people are condemning their results as non-reproducible?
No? Imagine how ridiculous that would become if interviewers actually followed that logic. "Great gameplay out there, <insert professional sports star>, but never mind the sport we are all watching, my research identified that you erroneously wrote 1+1=3 in Kindergarten. What was your thought process?"
The podcast in question is known as "People I (Mostly) Admire" from the Freakonomics podcast network. The name should tell you that it is going to be about the people, not diving deep into their work. Perhaps there is room for a podcast that scrutinizes the work of scientists, but one that literally tells you right in its name that it is going to be about people is probably not it.
Your example completely and ridiculously mischaracterizes my point.
A better example, to piggyback off your sports analogy: Suppose a podcast titled "People I (Mostly) Admire" decided to interview Barry Bonds, and the interviewer asked "Wow, how did you get to be so good in the second half of your career?" and Bonds responded "Just a lot of hard work!" Yeah, I would totally expect the interviewer to push back at that point and say "So, your steroid use didn't have anything to do with it?"
Point being, I'm not asking the interviewer to be knowledgeable about the subject's kindergarten grades. I do think they should do some basic, cursory research about the specific topic and subject they brought the guest on to talk about in the first place.
> I would totally expect the interviewer to push back
Are you confusing expectation with desire? I can understand why you might prefer to listen to a podcast like that – and nothing says you can't – but that isn't necessarily on brand with the specific product in question.
In the same vein, you might prefer fine dining, but you wouldn't expect McDonalds to offer you fine dining. It is quite clearly not the product they sell.
So, I guess the question is: What is it about "People I (Mostly) Admire" that has given you the impression that it is normally the metaphorical fine dining restaurant and not the McDonalds it turned out to be here?
Are you like the king of awful, straw-man analogies or something? I'll just say I think your attempt to redefine this podcast and the Freakonomics brand as mere "light, fluffy entertainment" is BS. These other comments put it better:
> Are you like the king of awful, straw-man analogies or something?
Yes...? That comes with not understanding the subject very well. Logically, if I were an expert I wouldn't be here wasting my time talking about what I already know, would I? That would be pointless. Obviously, if I am going to talk about something, I am going to struggle with it in an effort to learn.
> These other comments put it better:
These other comments don't even try to answer the question...? Wrong links? Perhaps I didn't explain myself well enough? I can try again: What is it about this particular podcast that has given you the impression that it normally asks the hard hitting questions? Be specific.
The type of journalism that involves saying "This person makes an incredible claim" and then goes on to allow the person to present said claims uncritically is called "tabloid journalism[1]." Yes, I would expect a podcast hosted by a NYT Journalist and University of Chicago Economist to have higher standards, particularly in the field of academic research.
That's a fun tangent, but it doesn't answer the question. What in particular about this podcast has indicated that it is not "tabloid journalism"? You clearly recognize that tabloid journalism exists, so you know that this podcast could, in theory, be exactly that. But what, specifically, has indicated that it normally isn't?
The background of the people involved is irrelevant to the nature of the product. Someone who works on developing a cure for cancer by day can very well go home and build a fart app at night. There is no reason why you have to constrain yourself to just one thing.