Quote: "New evidence has put into doubt the long-standing belief that a deficiency in serotonin - a chemical messenger in the brain - plays a central role in depression."
Let's summarize this issue:
1. Big Pharma has sold billions of dollars' worth of SSRI-based drugs, drugs whose mode of action is to regulate serotonin to control depression. Tl;dr: big influential companies, big sales, a gullible public who think psychiatry is an evidence-based medical field.
2. Studies show that the above drugs do not work for the majority of patients. In a meta-analysis conducted by the FDA that combined the results of published and unpublished studies, "antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance." Tl;dr: these drugs do not work.
3. The linked study shows that serotonin has no clear correlation with depression in an animal model. Tl;dr: because serotonin levels and depression aren't correlated, SSRI drugs could not possibly work.
Guess what effect these studies, this science, is having on the sale of antidepressant drugs? None whatever. The reason? Psychiatry and psychology aren't sciences, and (unlike in real medicine) ongoing drug therapies don't hinge on evidence for effectiveness.
This is why the NIMH recently ruled that the DSM, psychiatry and psychology's "bible", may no longer be used as the basis for scientific research proposals -- not surprising, it has no scientific content.
I appreciate skepticism about the state of pharmacological treatment of depression. Point 2 is important: it is in fact the case that several meta-analyses of the effects of anti-depressant drugs show only modest benefit in moderate depression. However, the drugs do work, they just don't work very well in many cases. The study you cite does indeed find a clinically significant effect in the most severe cases, and the drugs have a smaller but statistically significant effect in less severe cases.
Additionally, the brain is complicated, and just because mice genetically depleted of serotonin do not show depression-like symptoms does not mean that toying with serotonin levels in a normal brain cannot alleviate depression. Even if depression isn't brought on by lack of serotonin, it can still be the case that potentiating the effects of serotonin can reverse depressive symptoms. In fact, antidepressant treatment does alleviate depression in mice, and my understanding is that the effects of antidepressants in mouse models are actually far stronger than in humans in a clinical setting.
There has been some work to track down how antidepressants work in animal models at a systems level, e.g. http://www.sciencedirect.com/science/article/pii/S0896627309... suggests that administration of antidepressants to chronic stress model mice reverses the suppressive effects of chronic stress on growth of new brain cells in the hippocampus, and this may be related to their therapeutic action.
"my understanding is that the effects of antidepressants in mouse models are actually far stronger than in humans in a clinical setting."
Interesting. I wonder if this is a case of "you make what you measure" - since measuring effects on rats is far, far easier than measuring them on humans (and is typically a prerequisite).
Here's what you say: "However, the drugs do work, they just don't work very well in many cases."
Here's what the science says: "... antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance."
Guess which source I plan to rely on until better evidence is uncovered?
One theory has it that the positive statistic among the severely depressed resulted from a dosage effect -- because of the required dosage, those specific test subjects couldn't be blinded to the fact that they were taking the experimental drug, not a placebo, therefore the experimental protocol was compromised.
> Additionally, the brain is complicated, and just because mice genetically depleted of serotonin do not show depression-like symptoms does not mean that toying with serotonin levels in a normal brain cannot alleviate depression. [emphasis added]
First, you're overlooking the fact that the studies that got SSRIs approved in the first place relied on animal studies using the same animals.
Second, imagine a scientist saying "... just because mice genetically depleted of serotonin do not show depression-like symptoms does not mean that toying with serotonin levels in a normal brain cannot alleviate depression." Surely you see the logical error you're making here -- the argument essentially says, "if it hasn't been disproven, then it might be true." It equates an absence of evidence with evidence.
To a scientist, the guiding precept is that an idea is assumed to be false until supporting evidence appears. This is commonly known as the "null hypothesis".
To a pseudoscientist, the guiding precept is that an idea is assumed to be true until contradicting evidence appears. You have just aligned yourself with a pseudoscientist's outlook.
Quote: "While a pseudo-science is set up to look for evidence that supports its claims, Popper says, a science is set up to challenge its claims and look for evidence that might prove it false. In other words, pseudo-science seeks confirmations and science seeks falsifications."
This passage is particularly relevant to our exchange.
> In fact, antidepressant treatment does alleviate depression in mice ...
Not according to the FDA's meta-analysis quoted above. Shall I say it again? "... antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance."
> ... and this may be related to their therapeutic action.
Apart from a well-documented Placebo effect, which therapeutic action are you alluding to? Don't you understand that the present literature, generated by investigators subject to critical review by many interested parties, argues that there is no clear clinical role for SSRIs? And that the most recent study calls into question the most basic assumption of this class of drug therapy, that depression and serotonin are correlated?
We have pharmaceutical companies advocating the use of drugs to alleviate depression on slim-to-no reliable evidence, we have studies that fail to show a correlation between depression and serotonin in the same animal population used in the original drug studies, and most important of all, we have a large population of depressed people who want to believe that psychiatry has an answer to their difficulties.
As it happens, there is no present reliable answer, but there are hopeful signs that lie in other directions:
Quote: "As it turned out, 8 of the 12 patients he operated on, including Deanna, felt their depressions lift while suffering minimal side effects — an incredible rate of effectiveness in patients so immovably depressed. Nor did they just vaguely recover. Their scores on the Hamilton depression scale, a standard used to measure the severity of depression, fell from the soul-deadening high 20's to the single digits — essentially normal. They've re-engaged their families, resumed jobs and friendships, started businesses, taken up hobbies old and new, replanted dying gardens. They've regained the resilience that distinguishes the healthy from the depressed."
Is DBS a panacea? No, not at all. First, it's a lab experiment, not a therapy. It's risky and expensive. But it shows what might result from an effort to actually understand what depression is, something that psychiatrists and psychologists seem uninterested in.
> Here's what the science says: "... antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance."
With all due respect, read the paper you linked to! Here's what it says in the places you didn't quote:
"Meta-analyses of antidepressant efficacy based on data from published trials reveal benefits that are statistically significant, but of marginal clinical significance [1]."
"Although the difference between these means easily attained statistical significance (Table 2, Model 3a), it does not meet the three-point drug–placebo criterion for clinical significance used by NICE."
"Although drug type and duration of treatment were unrelated to improvement, the drug versus placebo difference remained significant, and amount of improvement was a function of baseline severity (Table 2, Model 1a)."
"The difference between drug and placebo exceeded NICE's 0.50 standardized mean difference criterion at comparisons exceeding 28 in baseline severity."
> First, you're overlooking the fact that the studies that got SSRIs approved in the first place relied on animal studies using the same animals.
SSRIs were approved several decades ago, but studies in the last decade still show the same effects, e.g. the study I linked to above, conducted in 2009 using fluoxetine (approved in 1987). The lifespan of a mouse is around a year, so there is zero chance that any of the animals are the same. Figure 1 of that study reproduces the result that antidepressants reduce depressive behaviors in mouse models on a variety of tests. This is not an isolated result; basically all of the rodent studies show similar findings.
> Second, imagine a scientist saying "... just because mice genetically depleted of serotonin do not show depression-like symptoms does not mean that toying with serotonin levels in a normal brain cannot alleviate depression." Surely you see the logical error you're making here -- the argument essentially says, "if it hasn't been disproven, then it might be true." It equates an absence of evidence with evidence.
This is in response to your claim that "the drugs could not possibly work." The drugs can work even if serotonin-depleted animals are not depressed. And as I've stated, there is an abundance of evidence that the drugs do work in mice, even though serotonin-depleted mice are not depressed.
> > In fact, antidepressant treatment does alleviate depression in mice ...
> Not according to the FDA's meta-analysis quoted above.
That meta-analysis is in humans. Mice are not humans! That is a big reason why it's so hard to develop good antidepressants in the first place.
> Apart from a well-documented Placebo effect, which therapeutic action are you alluding to?
The therapeutic action in mice, which is replicated in the study I linked to and many others.
As far as DBS goes, one of the problems with determining if DBS actually works for depression is that most studies have not been placebo controlled. Also, the largest study that I'm aware of was recently shut down by the FDA for lack of efficacy: http://neurocritic.blogspot.com/2014/01/broaden-trial-of-dbs...
> The therapeutic action in mice, which is replicated in the study I linked to and many others.
Would that be the same therapeutic effect that was falsified in the most recent study, the one that showed no correlation between serotonin and depression? Neither study represents anything like a conclusive outcome, but both must be read with an open mind -- who funded the studies, how strong is the evidence, what are the p-values, and so forth.
But let's step back. Do you know what's missing from this exchange, and do you know why such exchanges tend to be so wordy and inconclusive? Both of us are discussing symptoms, descriptions, not causes. This issue will never move forward until people move beyond psychiatric descriptions and locate a biological cause for depression.
Science requires a focus on causes, explanations -- descriptions aren't enough. And this is not science, because psychiatry is not science.
For those who doubt the central role of explanations in science, I have an anecdote -- Doctor Dubious invents a new treatment for the common cold. His treatment is to shake a dried gourd over the cold sufferer until the patient gets better. Sometimes the treatment takes a week, but it always works — the cold sufferer always recovers. So, why doesn't Doctor Dubious get a Nobel Prize for his breakthrough?
The answer is that the procedure is only a description — shake the gourd, patient recovers — without an explanation, without a basis for actually learning anything or being truthful about the connection between cause and effect. It's the same with psychiatry.
"For those who doubt the central role of explanations in science, I have an anecdote -- Doctor Dubious invents a new treatment for the common cold. His treatment is to shake a dried gourd over the cold sufferer until the patient gets better. Sometimes the treatment takes a week, but it always works — the cold sufferer always recovers. So, why doesn't Doctor Dubious get a Nobel Prize for his breakthrough?"
You always go to this example, but it does not convince anyone of anything because you are not only foregoing explanation but also a control group. If you do have a randomly assigned group of people who get gourd-shook and a randomly assigned group of people who do not, and those who do recover significantly faster than those who do not, and you have no idea how gourd-shaking might cause that... maybe it's still incorrect to say "the study showed gourd-shaking effective in treating the cold" but that's not intuitively obvious - explain why.
> You always go to this example, but it does not convince anyone of anything because you are not only foregoing explanation but also a control group.
Yes, because my example is meant to caricature psychological "science" -- that's its purpose. Therefore it must have the same logical pitfalls as the class of study it caricatures.
Is there a suggestion that psychiatrists or psychologists have control groups that work? Does the expression "no-treatment control" sound familiar? It's an often-seen expression in psychiatry and psychology studies. It results from an awareness that real scientists have control groups, but because of the nature of psychological studies, that's often not practical (imagine a study comparing therapeutic methods -- how would one design a realistic faux therapy for control purposes?).
But these people want the appearance of science, so they call those denied treatment a "no-treatment control". "Send him home, we're not treating him. Oh, and add him to the 'no-treatment control' group."
Quote: "Although no-treatment controls have an appealing simplicity, they also have a number of potential disadvantages."
I always laugh when I read that.
> ... maybe it's still incorrect to say "the study showed gourd-shaking effective in treating the cold" ...
Remember that I was mimicking a psychiatrist or psychologist, the sort of person willing to publish a paper containing this kind of reasoning in a theoretical vacuum.
Anyway, the absence of a central corpus of falsifiable theory in psychiatry speaks for itself. All real sciences have such a corpus of theory; it's falsifiable and it informs all work in the field. Psychiatry doesn't have one. Psychiatrists are free to go in any direction they please, even contradict each other on matters of substance, and without a basis in theory. It's not science.
'Yes, because my example is meant to caricature psychological "science" -- that's its purpose. Therefore it must have the same logical pitfalls as the class of study it caricatures.'
You explicitly stated that the example was "for those who doubt the central role of explanations in science". That you can throw together an example with a bunch of flaws that produces a flawed result is both unsurprising and not especially damning of any specific one of the flaws - one flaw is enough to be wrong. Therefore, your example is not very persuasive on the issue you claimed it directed at.
If you're just trying to get your jollies lampooning everything that sometimes goes wrong with psych research, that's fine - but present it that way. If you're trying to help people reason better - and possibly find flaws in your own reasoning - isolating particular problems is a much better approach.
> You explicitly stated that the example was "for those who doubt the central role of explanations in science".
Yes, and my example proves the point I am making -- one must have tentative, falsifiable explanations in science. Choose your own example if you don't like mine, but make no mistake about it -- scientists explain things. If the explanations fail, then another explanation is tried. But description is not science, that's stamp collecting.
This goes as far back as Francis Bacon, who first articulated it -- reading Bacon's philosophical works, the role of explanation, of theory, is clearly set out as a requirement.
More recently, falsifiability has been made a requirement for science, especially in cases where science is defined in legal actions. And descriptions cannot be falsified -- if I say that I saw many tiny points of light in the night sky, that can hardly be falsified. But if I make the claim that those points of light are actually thermonuclear furnaces at a great distance, that explanation is open to examination and falsification, and I have crossed the threshold of science.
This is why the dried gourd example is perfect for my purpose -- it aptly caricatures real psychological work. And once one tries to take a step toward science, to explain the result, it falls apart, like so many psychological studies do.
It's like you know what you want to rant about, and it doesn't matter what you're actually responding to. I'd hoped to learn something, but I think I've got to give up on your posts.
> It's like you know what you want to rant about ...
You mean by quoting authoritative sources like the director of the NIMH, whose views resemble my own?
> ... but I think I've got to give up on your posts.
Feel free, but if you've managed to miss the historical change taking place in mental health right now, then nothing I might say will help. The tl;dr is that neuroscience is taking over.
Given that your quoting of Insel wasn't in response to me, wasn't related to anything I said, and given that I've not been disputing anything Insel said... You demonstrate an eagerness to rant about issues that you're sure you're right about (and often are) regardless of whether they actually address the thing you are replying to, and keeping up is too much work getting past the noise.
> "... antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance."
vs.
> "Meta-analyses of antidepressant efficacy based on data from published trials reveal benefits that are statistically significant, but of marginal clinical significance [1]."
Are you sensing some kind of contradiction here? Both are quotes, both are truthful, one includes unpublished studies and one does not. Are you honestly arguing that we shouldn't include the unpublished studies (which tend more toward negative results than the published ones -- big surprise)?
It's pretty dirty to accuse the person you're having a discussion with of not reading the paper, especially when you never explained the problem in what lutusp claimed but just relied on innuendo to carry you through.
There is a difference between statistical and clinical significance. Statistical significance indicates that there is a real effect that is not due to chance. Clinical significance is achieved only if the effect size exceeds a certain threshold. The clinical significance criterion used by the study lutusp linked to is "a three-point difference in Hamilton Rating Scale of Depression (HRSD) scores or a standardized mean difference (d) of 0.5." That study shows antidepressant treatment does not have clinically significant effects beyond placebo in all but the most severe cases of depression; the effect size is quite small. However, the effects are statistically significant (in previous meta-analyses of published studies and this meta-analysis of both published and unpublished studies) and thus likely to be real, and they are clinically significant in the most severe cases of depression, even when unpublished studies are included.
To summarize, antidepressants do work: their effect is statistically significant. But they don't work very well: their effect is not clinically significant in any but the most severe cases.
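To make the distinction concrete, here is a minimal sketch in Python. The improvement scores, SD, and sample sizes are invented purely for illustration (they are not numbers from the paper); only the two cutoffs, the three-point HRSD difference and d = 0.5, come from the criteria quoted above:

    import math

    def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
        # Standardized mean difference using a pooled standard deviation.
        pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
        return (mean_a - mean_b) / pooled_sd

    # Hypothetical HRSD improvements (points); NOT data from the paper.
    drug_mean, placebo_mean = 9.6, 7.8   # mean improvement per arm
    sd, n = 8.0, 200                     # assumed common SD and per-arm sample size

    diff_points = drug_mean - placebo_mean
    d = cohens_d(drug_mean, placebo_mean, sd, sd, n, n)

    # With samples this large the 1.8-point difference is easily statistically
    # significant, yet it fails both clinical-significance criteria above.
    print(f"drug-placebo difference: {diff_points:.1f} HRSD points (criterion: 3.0)")
    print(f"standardized mean difference d = {d:.2f} (criterion: 0.50)")

With a large enough sample, almost any nonzero drug-placebo difference clears the statistical bar; the clinical bar asks whether the difference is big enough to matter.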
> To summarize, antidepressants do work: their effect is statistically significant.
This is deliberately misleading. You just explained that statistical significance is not clinical significance, but saying "antidepressants do work" strongly implies that they subjectively improve a person's state. These are words you chose, words that contradict the study's conclusion with respect to human subjects in all but the most severely depressed.
> But they don't work very well: their effect is not clinically significant in any but the most severe cases.
This is like saying, "the skydiver was perfectly all right until he hit the ground." It's misleading. If he used language like this, a clinician could be accused of unethically misleading his patients.
The reason for the wide gap between statistical and clinical significance is to guard against the distorting effect of self-reporting, always a risk in a study like this. I want to repeat what you said above, but broken down:
(a) "To summarize, antidepressants do work:"
(b) "their effect is statistically significant."
But according to your own analysis, item (a) does not -- cannot -- acquire its authority from item (b), for the reason that statistical significance doesn't lead to the claim that these drugs "work" in the commonly accepted sense, i.e. subjective improvement in a human subject.
I refer again to the difference between science and pseudoscience I quoted earlier -- pseudoscientists seek confirmation, scientists seek falsification. The outcome for the layman? If a scientist cannot falsify something, it acquires a small bit of temporary credibility, but if a pseudoscientist says something works, well, read the study behind the words.
I put the parens in the wrong place. What I meant was that the effects are statistically significant in both previous meta-analyses and this one. I've adjusted my comment above to make it a bit clearer.
Your tl;dr for #2 isn't accurate. Yes, these medications "have reported only modest benefits over placebo treatment", but the measured benefit of placebo treatment in this case is quite large and the medications work BETTER than placebo. So even if the difference between placebo and treatment is small, the difference between treatment and no-treatment is quite large and is clinically significant.
As for the claim that the difference falls below accepted criteria for clinical significance, the "accepted criteria" used to reach that conclusion are pretty arbitrary and it only just BARELY falls below the arbitrary level picked (by some), depending on which studies are included.
> Your tl;dr for #2 isn't accurate. Yes, these medications "have reported only modest benefits over placebo treatment", but the measured benefit of placebo treatment in this case is quite large and the medications work BETTER than placebo.
Yes, unless unpublished studies are included in the analysis, after which the effect falls below that required to make a claim of clinical effectiveness. That distinction is perfectly clear in the post to which you replied.
> As for the claim that the difference falls below accepted criteria for clinical significance, the "accepted criteria" used to reach that conclusion are pretty arbitrary and it only just BARELY falls below the arbitrary level picked (by some), depending on which studies are included.
Weasel words, words that a scientist wouldn't dream of including in a study meant to generate light, not heat. Which explains why this kind of argument is absent from the FDA study.
In any case, a more recent study quoted above finds no correlation between serotonin and depression. If this study bears up under scrutiny, it undermines the basis for SSRIs as a remedy for depression, for the simple reason that serotonin and depression have no relationship. It would explain why the statistics for SSRIs are so marginal -- what's being measured is a placebo response.
> Yes, unless unpublished studies are included in the analysis, after which the effect falls below that required to make a claim of clinical effectiveness. That distinction is perfectly clear in the post to which you replied.
I think you are not reading carefully. Even WITH the unpublished studies included, the placebo effect far exceeds the required effect size to be considered clinically effective which means that the combination (placebo effect+treatment effect) also far exceeds the required effect size.
And the claims you make are pretty clearly rebutted in the first link I provided, which gives precise numbers to the different "clinical effectiveness" strengths.
(Regarding the correlation between serotonin and depression, if it bears up it will likely establish that there is some OTHER reason that Paxil is a moderately effective antidepressant.)
Here's a relevant quote from that link I gave before:
( http://slatestarcodex.com/2014/07/07/ssris-much-more-than-yo... )
"They also note that Kirsch’s study lumps all antidepressants together. This isn’t necessarily wrong. But it isn’t necessarily right, either. For example, his study used both Serzone (believed to be a weak antidepressant, rarely used) and Paxil (believed to be a stronger antidepressant, commonly used). And in fact, by his study, Paxil showed an effect size of 0.47, compared to Serzone’s 0.21. But since the difference was not statistically significant, he averaged them together and said that “antidepressants are ineffective”. In fact, his study showed that Paxil was effective, but when you average it together with a very ineffective drug, the effect disappears. He can get away with this because of the arcana of statistical significance, but by the same arcana I can get away with not doing that.
So right now we have three different effect sizes. 1.2 for placebo + drug, 0.5 for drug alone if we’re being statistically merciful, 0.3 for drug alone if we’re being harsh and letting the harshest critic of antidepressants pull out all his statistical tricks."
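For what it's worth, the averaging point is easy to see numerically. Here is a trivial sketch using the per-drug effect sizes given in that quote; the equal weighting is my own simplification (a real meta-analysis weights studies by precision), so treat it as illustration only:

    # Per-drug effect sizes as given in the quote; equal weighting is an
    # assumption made here for illustration, not what the meta-analysis did.
    NICE_CUTOFF = 0.5

    effect_sizes = {"Paxil": 0.47, "Serzone": 0.21}
    pooled = sum(effect_sizes.values()) / len(effect_sizes)

    for drug, d in effect_sizes.items():
        print(f"{drug:8s} d = {d:.2f}  ({d - NICE_CUTOFF:+.2f} vs. the 0.5 cutoff)")
    print(f"{'pooled':8s} d = {pooled:.2f}  ({pooled - NICE_CUTOFF:+.2f} vs. the 0.5 cutoff)")

Paxil alone sits just under the cutoff; average it with a weak drug and the pooled estimate lands well below it, which is exactly the dilution the quote is complaining about.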
> Even WITH the unpublished studies included, the placebo effect far exceeds the required effect size to be considered clinically effective which means that the combination (placebo effect+treatment effect) also far exceeds the required effect size.
This is astonishing. The paper in question flatly contradicts you. The paper concludes, "antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance."
The above conclusion is not open to the interpretation you've given it.
And, in case you missed the train leaving the station, none of this will matter until some science is done -- science that uncovers the cause of depression and ends this guessing game and debating society.
That quote does not "flatly contradict" what I said. Read the link I gave. Did I perhaps need to italicize placebo?
The paper you quote is talking about the measured benefits of SSRIs relative to a placebo; the reference I gave is talking about the measured benefits of SSRIs relative to non-treatment. (which includes the placebo effect)
(And as a separate issue, the link I gave also - I suspect correctly - calls into question the standard used in your paper for accepted criteria for clinical significance. And points out that grouping antidepressant medications into one large category can hide the effectiveness of individual medications when your average includes some medications that are known to be not very effective.)
Since you mention the downvotes, I guess you understand it's not so simple.
First, practitioners already know that the effects are far from guaranteed, and usually after the prescription there is a follow-up period during which the drug's effects are checked. If something shows no results, you try something else.
Then, there will sometimes be results, but it might be a form of placebo. But will you stop something that seems to help the patient, just because in most cases it shouldn't really work? Usually no; you go on, because placebo or not, the results are positive.
All in all, the results of this study are important, but I don't think they will have any immediate impact on how current drugs are used.
> All in all, the results of this study are important, but I don't think they will have any immediate impact on how current drugs are used.
Yes, I understand. These things take time. But imagine that a cancer drug is proven to be ineffective -- guess how long before clinicians would be required to stop prescribing it?
Science is certainly more difficult to apply to psychiatry than to other branches of medicine - due to the relatively subjective endpoints, lack of useful animal models, and conflict with profit-based motives within the pharmaceutical industry - but your assertion that there is no scientific component to it at all really is quite an exaggeration.
You need to understand that psychiatry was never a science and is not one now. Don't take my word for it -- ask a psychiatrist. Ask Thomas Insel, psychiatrist, director of the NIMH, and the person behind the recent ruling described above. Or ask Freud, who eventually acknowledged that psychiatry cannot be a science.
The article you linked, and Insel's related pronouncements (as detailed in his NIMH blog) do not support your position.
His quite reasonable criticisms of the DSM as a framework for categorising and funding mental health research (and suggestion of what may be a more suitable replacement) does not imply that the entire field of psychiatry is unscientific.
> The article you linked, and Insel's related pronouncements (as detailed in his NIMH blog) do not support your position.
Here's my position: "The goal of this new manual, as with all previous editions, is to provide a common language for describing psychopathology. While DSM has been described as a “Bible” for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity."
"Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever. Indeed, symptom-based diagnosis, once common in other areas of medicine, has been largely replaced in the past half century as we have understood that symptoms alone rarely indicate the best choice of treatment. Patients with mental disorders deserve better."
> ... does not imply that the entire field of psychiatry is unscientific.
But I never made that claim anywhere, and certainly not with respect to Insel's views -- I don't need to. It's sufficient to note that there's no science in psychiatry. It's sufficient to say that the DSM, the sum of psychiatric knowledge, is being discarded by the NIMH for the reason that it has no scientific content.
It's sufficient to ask for the identity and location of the unifying corpus of theory in psychiatry, a theory that forges a consensus between psychiatrists, that demonstrates its scientific standing, a theory such as exists in every legitimate science without exception.
If you will name a science, I will tell you the theory that defines it and legitimizes it as a science. Even though all scientific theories are subject to falsification and many are in fact falsified, at any given time a central corpus of theory guides research and practice in each and every scientific field.
But for the sake of argument, let's say the above isn't a requirement for science, see where this takes us. Let's say that we don't need a falsifiable explanation for what we're studying, that descriptions -- the descriptions that Insel complains about with respect to the DSM above -- are all we have. Is it still science?
This example may help you over this intellectual hurdle. An imaginary Doctor Dubious invents a new treatment for the common cold. His treatment is to shake a dried gourd over the cold sufferer until the patient gets better. Sometimes the treatment takes a week, but it always works — the cold sufferer always recovers. So, why doesn't Doctor Dubious get a Nobel Prize for his breakthrough?
The answer is that the procedure is only a description — shake the gourd, patient recovers — without an explanation, without a basis for actually learning anything or being truthful about the connection between cause and effect. It's the same with psychiatry.
Are you sure he's deliberately trolling, perhaps he genuinely believes this? Distrust of psychiatry is not uncommon, and there certainly are some really awful facets of psychiatry to selectively draw an extremist position on, especially if one looks into its abuses in recent history.
Distrust of SSRIs is also mainstream science, although it's still mainstream practice. The meta-analysis that showed that studies don't show a significant effect on depression from SSRIs was not published in a crank journal, and has very few serious disputes or alternative ways to view the numbers.
People who cite that paper and others (and this one in the future) get treated like cranks still, though. There are a lot of people that feel like they owe their lives to SSRIs, and that they will die if they stop taking them, sent back to the same mental state as they were in during a period that they will all identify as the worst in their lives. There are also a lot of people who make an enormous amount of money in encouraging that belief.
I'm not against the treatment of depression, and even possible chemical treatments of depression. I just would prefer them to work better than placebo.
> The meta-analysis that showed that studies don't show a significant effect on depression from SSRIs ... has very few serious disputes or alternative ways to view the numbers.
"Effect size is a hard statistic to work with (albeit extremely fun). The guy who invented effect size suggested that 0.2 be called “small”, 0.5 be called “medium”, and 0.8 be called “large”. NICE, a UK health research group, somewhat randomly declared that effect sizes greater than 0.5 be called “clinically significant” and effect sizes less than 0.5 be called “not clinically significant”, but their reasoning was basically that 0.5 was a nice round number, and a few years later they changed their mind and admitted they had no reason behind their decision.
Despite these somewhat haphazard standards, some people have decided that antidepressants’ effect size of 0.3 means they are “clinically insignificant”."
...and...
"They also note that Kirsch’s study lumps all antidepressants together. This isn’t necessarily wrong. But it isn’t necessarily right, either. For example, his study used both Serzone (believed to be a weak antidepressant, rarely used) and Paxil (believed to be a stronger antidepressant, commonly used). And in fact, by his study, Paxil showed an effect size of 0.47, compared to Serzone’s 0.21. But since the difference was not statistically significant, he averaged them together and said that “antidepressants are ineffective”. In fact, his study showed that Paxil was effective, but when you average it together with a very ineffective drug, the effect disappears. He can get away with this because of the arcana of statistical significance, but by the same arcana I can get away with not doing that.
So right now we have three different effect sizes. 1.2 for placebo + drug, 0.5 for drug alone if we’re being statistically merciful, 0.3 for drug alone if we’re being harsh and letting the harshest critic of antidepressants pull out all his statistical tricks."
Here's a thread about cosmology where he drops in his cut'n'pasted screed about psychiatry. Derailing a thread to rant about some unrelated topic is trolling or kook-like.
Frustratingly he has useful stuff to say about other things.
Trolling does not have to be deliberate to be effective.
Lutusp makes very good points about the awful state of science reporting; about the very poor quality of a lot of research; about the lack of scientific rigour in some areas.
But the rest of it is kook-like noise that refuses to address the actual thing being discussed.
"Why did the substance become known as serotonin rather than its first name, enteramine? The most likely explanation is that it was first synthesized and made available for research by the American drug company, Upjohn Pharmaceutical, who chose the name 'serotonin.'"
Quote: "As it turned out, 8 of the 12 patients he operated on, including Deanna, felt their depressions lift while suffering minimal side effects — an incredible rate of effectiveness in patients so immovably depressed. Nor did they just vaguely recover. Their scores on the Hamilton depression scale, a standard used to measure the severity of depression, fell from the soul-deadening high 20's to the single digits — essentially normal. They've re-engaged their families, resumed jobs and friendships, started businesses, taken up hobbies old and new, replanted dying gardens. They've regained the resilience that distinguishes the healthy from the depressed."
I hasten to add that DBS is nowhere near ready for the clinic -- it's experimental and risky. But it shows what can happen when people are willing to consider biological explanations instead of psychological ones.
Because the Hamilton Depression Scale only measures symptoms and doesn't address the issue of causes (theories), no, it isn't. And the linked study will have to be repeated very carefully, with more patients, before it will be accepted as a treatment.
And finally, regardless of its value as a treatment, that study can only be a steppingstone toward actually understanding what depression is, in a biological, scientific sense.
There have been some promising studies showing rapid-onset relief for major depression following the administration of ketamine. Glutamate seems to be getting more attention these days, in terms of its potential role in mental illness.
I'm as skeptical of big Pharma as the next guy, but to say with such grandiose broad-stroked generalizing that psychiatry (or, perhaps you mean instead/also neurological pharmacology) is not science is simply untrue.
Anyone with Google at their fingertips can find a dozen peer-reviewed articles about serotonin's link to mood and behavior.
So, because some critics of psychiatry are Scientologists, therefore all critics of psychiatry are Scientologists? I can only recommend a crash course in logic.
> to say with such grandiose broad-stroked generalizing that psychiatry (or, perhaps you mean instead/also neurological pharmacology) is not science is simply untrue.
The burden is not on critics to prove that psychiatry isn't a science; the burden is on psychiatry to prove that it is (for the history-illiterate, it has never been a science and many of its staunchest advocates freely admit this, including Freud). The recent NIMH ruling, to which I alluded in my post above, suggests that the granting agencies aren't going to wait for that burden of evidence to be met -- psychiatry is not a science, is not an evidence-based practice, and can't masquerade as such without evidence.
> Anyone with Google at their fingertips can find a dozen peer-reviewed articles about serotonin's link to mood and behavior.
Yes, and there's a name for that: confirmation bias. Have you bothered to ask yourself how the FDA could come out and say that SSRIs don't actually work, and how that meta-analysis could coexist with all those other studies that claim otherwise? And how could this most recent study, which shows no correlation between serotonin and depression, survive the critical eyes of editors and reviewers to find its way into print?
The FDA meta-analysis, which for the first time included studies that the drug companies funded but then chose not to publish, and which showed no clinically significant effect from SSRIs, is not by itself conclusive, but the silence that followed it certainly is. There are too many interested parties involved for that study to go unchallenged ... if a challenge were possible.
This most recent study simply shows why SSRIs don't work -- because serotonin and depression aren't correlated, therefore SSRIs cannot possibly work, in principle.
They can also find a dozen peer-reviewed articles thoroughly skewering the misuse of statistics, intentional massaging and selective cherry-picking of results, weak significance testing, and general lack of replicability that are all incredibly rife in biomedical research, and particularly in psychiatric research.
It's hard to know what to trust in that area at all anymore.
Still, it's what demarcates the boundary between "blog about your idea, show it to some smart folk" and "proper science". And we like clean demarcation lines, they allow us to categorize without actually investigating (which is infeasible if you want to get broad knowledge).
I'd say avoid "peer reviewed" papers like the plague if you want to find out what's actually "worth investigating" and "get a broad knowledge".
Instead, wait for 2-5 years to see what still floats from all the crap that has been published.
Better to read slightly behind-the-times but solid university guidebooks and published books that stood out, than to read the hot but crappy stream of published research.
Is waiting 5 years always an option? Is 5 years always enough to distinguish good from bad science? Then, you still use the "peer reviewed" filtering, just add "test of time" to the pipeline. Which is fine, as long as the topic is of level of importance to you at which being 5 years behind the trend is ok.
Unless you really mean that being peer reviewed is bad. I don't want to delve into that option...
If your goal, as stated above, is to get a "broad knowledge", then yes.
If you want to know recent research trends, or are doing research yourself, then no, go read current papers.
> Is 5 years always enough to distinguish good from bad science?
No, sometimes you have to wait even more. I just gave it as a delay period to counter the "read the peer reviewed papers" notion.
> Then, you still use the "peer reviewed" filtering, just add "test of time" to the pipeline.
No, I'm saying forget "peer reviewed" in itself; go for items that have not only stood the "test of time", but have also become successful and well-regarded books and/or university guides in their domain.
In essence, I'm saying that a journal's tiny "peer review" team is BS; the majority of the scientific community agreeing on matured material is better.
> Which is fine, as long as the topic is of level of importance to you at which being 5 years behind the trend is ok.
It's not a matter of "importance to you"; it's what you want to use it for.
A subject could be extremely important to you as a study subject, and you could still avoid losing time with the current, unfiltered papers as they come in.
It's only when you want to take advantage of recent research (e.g. because you are a researcher yourself, or an implementer who needs a new solution, etc.) that you have to have the latest research -- which I think is different from "importance". Let's call it "business importance" if you wish...
> Is waiting 5 years always an option? Is 5 years always enough to distinguish good from bad science?
In the case of Relativity theory, it took 55 years for full validation of all its aspects. In principle, to assure solid science, one might say, "as long as it takes."
My qualm isn't with the pharmaceuticals companies, oddly enough. I have issues with the psychiatrists mostly. They rely far too much on literature and don't listen to their patients. Also, the patients are overly supine and continue treatments that aren't effective (or are partially effective with significant side effects). This combination of behaviors is toxic and leads to reduced quality of life for the patient.
There's no downvoting here. Your summary of the "science of Psychiatry" in general is spot on. It's time people open their eyes and expose the absolute fraud that has been thrust upon patients and consumers everywhere.
We were all lied to regarding the efficacy of these SSRI drugs -- even the psychiatrists were lied to. I heard anecdotally that Prozac showed so little promise that it was "shelved" for years -- until they started to cherry-pick studies.
I am now questioning most drugs! When I was in college, a peer-reviewed, double-blind study was holy. I had no idea anyone (on a huge scale) would screw with clinical studies.
The whole psychiatry profession needs a thorough vetting. I recently heard someone describe it as "still in the Dark Ages"; it was so fitting.
A word to the high-priced psychiatry specialists -- your patients see right through the facade. I understand the reasons why a doctor would prescribe a lousy drug -- because they want to invoke the placebo effect, and the response from the placebo effect is huge. But I don't condone charging huge sums of money to the non-responders. Yes -- you went to med school. Yes -- you picked one of the easier specialties. No -- you don't have the moral right to take advantage of society's most fragile. It's like kicking a homeless person.
(That said, I know how frustrating your profession has become. I cringe whenever I hear about something like a suicide, or a missing psychiatrist (missing in Point Reyes now). Your patients do care. It's time we all band together and fight these drug companies.)
Yes -- I went on a rant. I went on a rant because the person above me said what I have felt for a long time.
Does anyone have experience with atypical depression? That's what I have, and SSRIs seem to do almost nothing for it. Prozac made me feel spacey, and it didn't fix any of my problems (except OCD), so I quit it. I've since read a lot of articles in psychology/psychiatry journals and it appears that MAOIs (nardil, parnate), are much more effective for this condition, although they have strong side effects and dietary restrictions. MAOIs were the original antidepressants used in the 1950s and 1960s. A lot of articles indicate nardil is particularly effective for social anxiety, over SSRIs.
Atypical depression is characterized by excessive sleeping, extreme rejection sensitivity, leaden paralysis (heavy limbs/fatigue), and the ability to feel happy in response to a positive event. This last characteristic is a distinguishing factor from regular (endogenous) depression, where even positive events won't make someone feel happier.
You probably need something targeting dopamine, the heavy limbs/fatigue, excessive sleeping etc. are typical symptoms of dopaminergic systems not working properly.
Try an NRI like reboxetine. Exercise to make it work better. Go outside for 15 minutes in the morning, if you don't do that already. A study showed that light exposure of 30 minutes per day make a 1/3 reduction in medication possible. And stop smoking cannabis, if you do, it has catastrophic effects on dopaminergic systems.
My depression didn't respond well to five SSRIs, one SNRI, two tricyclics and one tetracyclic. It turns out I didn't have major depressive disorder, but bipolar 2. The only medication that worked to alleviate my depression was Lamictal (lamotrigine). It also caused no side effects at all.
There are a few other medications that can be safely added to SSRIs, if you're not getting enough benefit from them. Lamictal is one of them. Buspar (buspirone) can also be added to help with anxiety. It's neither an SSRI (doesn't seem to work for you) nor a benzodiazepine (addiction potential), so perhaps it's worth a try.
But talk with your doctor, because I don't know your particular disorder or medication history.
You know what? I'm neither a doctor nor a psychiatrist, but what you've described sounds like chronic fatigue syndrome, not depression. Or (second vote) hypothyroidism, which also can create some of the symptom set you describe.
Both depression and CFS have no known causes, which makes their diagnosis problematic and their treatment even more problematic. But overall I would suggest that you look at CFS -- IMHO it seems to be a better fit to your symptoms.
Again, I am not a doctor and this is just an off-the-cuff opinion.
In the spirit of pharmacology as the subject; I might suggest a Norepinephrine-dopamine reuptake inhibitor (NDRI) like Methylphenidate (Concerta). It seems to have far less abuse potential. ( Of course this is after exercise and sunlight and therapy have exhausted their potential for results ) Also, don't take with MAOI's. Don't take my advice at all obviously, but I think you ought to research it and speak to doc.
Even with all our advanced modern knowledge we still barely understand how the brain works much less what happens when it doesn't "work" and even that depends on how you define "doesn't work". I suffered from a major depressive time about 12 years ago and taking a drug helped me get out of it and it hasn't returned. But there is no way to know what went wrong much less prove that the drug had any actual clinical effect; at least it appeared to make life possible again. Some day we might actually understand enough brain chemistry to know the exact mechanisms. Today it is still a lot of guesswork.
It makes me wonder if we'll even have much more understanding of the brain in 100 years than we do now. At least, understanding at a level that lets us effectively stop the root cause of many complex mental disorders without deleterious side effects.
<disclaimer>This is a general statement; I do not intend it to be 'in response' to your comment in particular, your comment just provides a logical space to precede mine.</disclaimer> I don't understand this negative, defeatist attitude of people with regard to our 'current' understanding of anything. We certainly know a lot more about everything in this world, including the brain, than we did in 1914. Why then is there a doubt that 100 years hence we'd understand less? Is there really a proven 'law of diminishing returns' in the field of scientific research? Also, in reference to the parent comment, sure, there isn't possibly a way to be 100% certain that the drugs did indeed help, but the reasoning behind manufacturing them was presumably sound, scientifically speaking, based on the knowledge available at the time of their creation and the experiments conducted then. Development occurs not by discarding prior knowledge but by refining it or, in the worst case, eliminating the factors that make it obsolete. So, in summary, I think we are in the 'discovery phase' of medicine these days, similar to what the 1600-1900s were for physics.
It's important to note that they qualify the claim with the word "major". It might not be major, but I think the word important would be justified. Saying there might be other chemicals or components involved in depression seems obvious given our limited understanding of the topic. We just have a currently useful tool to help, SSRIs. Maybe with these type of studies we'll see more accurate medicines.
> We just have a currently useful tool to help, SSRIs.
Yes, but the quoted study shows that serotonin and depression aren't correlated. If that is so, if the study bears up under scrutiny, then SSRIs cannot possibly work. The prior FDA meta-analysis shows that SSRIs don't work for the majority of patients; this new study explains why.
> Maybe with these type of studies we'll see more accurate medicines.
Let me suggest an alternative -- instead of searching for a better description of depression, we should seek an explanation, like in science. Armed with an explanation, we could treat depression's causes, rather than its symptoms.
But we can't do this with psychiatry -- psychiatrists aren't scientists and have no respect for evidence.
Let's summarize this issue:
1. Big Pharma has sold billions of dollars' worth of SSRI-based drugs, drugs whose mode of action is to regulate serotonin to control depression. Tl;dr: big influential companies, big sales, a gullible public who think psychiatry is an evidence-based medical field.
2. Studies show that the above drugs do not work for the majority of patients. In a meta-analysis conducted by the FDA that combined the results of published and unpublished studies, "antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance." Tl;dr: these drugs do not work.
http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fj...
3. The linked study shows that serotonin has no clear correlation with depression in an animal model. Tl;dr: because serotonin levels and depression aren't correlated, SSRI drugs could not possibly work.
Guess what effect these studies, this science, is having on the sale of antidepressant drugs? None whatever. The reason? Psychiatry and psychology aren't sciences, and (unlike in real medicine) ongoing drug therapies don't hinge on evidence for effectiveness.
This is why the NIMH recently ruled that the DSM, psychiatry and psychology's "bible", may no longer be used as the basis for scientific research proposals -- not surprising, it has no scientific content.
http://www.newyorker.com/tech/elements/the-rats-of-n-i-m-h
Let the downvotes begin.