It's deliberate. The journalists know they're making a wild claim, and by asserting in the story that it's based on a "study" by "experts" at the University of Whatever, but not actually linking to the original paper, the story benefits from ambient trust in the scientific system regardless of whether that trust is justified in this particular instance. If the underlying study were easily findable, then annoying comment posters might actually read it and discover that the claim which is getting so many juicy clicks is nonsense, misleading, etc.
This is one of the key problems facing journalism: if journalists started taking measures to improve the reliability of their output, they'd take a short-term hit to profitability that they cannot afford.
I recently did a bit of investigative journalism myself on this very topic:
https://blog.plan99.net/did-russian-bots-impact-brexit-ad66f...
Briefly: mainstream media (The Times of London, the NY Times, The Economist, etc.) started claiming that there were hundreds of thousands of Russian-controlled bots posting pro-Brexit tweets in the run-up to the UK/EU referendum. It was outright stated that the vote was manipulated, and this claim was picked up and repeated by the British Prime Minister.
The story was based on an academic study that was not linked to, or even named, anywhere. However, the academics in question did interviews for the media in which they talked up their results. It took some digging, but I was able to find an early draft of their paper online (the only copy anywhere), and it immediately raised huge red flags: the paper appeared to be deliberately obfuscated, it made claims that weren't backed by its own data, and the claims it did make were patently absurd. For example, they defined a "bot" to include anyone who sometimes posted after midnight. The low quality of the paper would have been detected by comment posters immediately if it had been more easily discoverable. Indeed, it became clear that the story's foundations were weak to the point of collapse.
Interestingly, after I published this, one of the journalists at The Economist reached out to ask me some questions about it. He'd written a story based on that paper which came across as very biased, and I told him so. He agreed with me, at least to some extent, and told me his story had originally been much more equivocal, but the equivocation had been taken out in editing. Still, we had a decent conversation about it. I also sent my piece to the author of the relevant article at The Times of London and got a response indicating he'd read it, but of course, nothing ever happened there.
There are of course many other examples, from pop psychology, miracle cures and so on, where primary sources are either abused or are actually weak or bogus to begin with, and where demanding a higher standard of accuracy would mean the stories never got published at all.