Their actual clinical trial had 62 subjects and it had a control arm. The paper has only 6 subjects and no controls.
If I run an experiment with 60 people, I can find evidence for anything (precognition, telekinesis, etc.) if I'm allowed to cherry pick 6 data points and drop any controls.
The authors should be ashamed for polluting the scientific record with this junk.
It sounds like you are assuming they collected more data than they reported; is there evidence of that? They preregistered to collect 60, but they could've simply been unable to collect that many (e.g., COVID).
If it was preregistered, perhaps the journal was obligated to publish the results no matter the outcome. Some journals do this, ostensibly to combat publication bias.
I didn’t read the whole thing: are the authors actually claiming the human data are informative? If so, that is wrong. But if they are just saying “hey, these are the data!” I wouldn’t be so harsh.
Edit: Just read the methods. Data collection was 2016 to 2020, so COVID probably wasn't the issue. Subjects were selected for poor memory yet normal cognition in other domains, so it is possible they couldn't recruit the number of subjects they proposed who met these criteria and were also willing and able to complete the treatment.
I’m now quite confident the memory result is regression to the mean, and therefore nothing to do with the treatment.
I also read the portion of the discussion related to cognition. They are claiming to have found something meaningful (in the human data) when they definitely haven’t.
I would absolutely be harsh on this. There might well be reasons why they didn't include the other 56: perhaps these were the only 6 who maintained sufficient cognitive function to continue receiving the treatment, or were even still alive!
This is why we analyse on an intention-to-treat basis rather than by who actually received the treatment. The publication makes no attempt to explain this potential source of bias.
I'm not being harsh. This is basic due diligence that the editors and reviewers at Aging must do, by their own rules.
Aging agreed to follow the ICMJE (International Committee of Medical Journal Editors) rules (for many reasons, including pressure from funding agencies). Those rules are fairly extensive. Among them: you register your trial, and once it's done you must complete the registration record and update it. The ICMJE rules literally say "The purpose of clinical trial registration is to prevent selective publication and selective reporting of research outcomes"
The 62 patients aren't theoretical. They actually existed and were enrolled in the trial. That's why the preregistration says
> Actual Enrollment : 62 participants
Before the authors completed the trial it would have said "Anticipated Enrollment". They had to update this field to be able to publish according to the ICMJE.
So this data exists. What happened to it?
Did the authors cherry pick? Then they could show that God exists.
Did the authors exclude subjects because they died, declined too much, developed diabetes, etc? Then that completely biases their results. They need to carefully explain what happened. They need to correct for this source of bias in all of their statistics and can't just publish whatever they want and pretend the study as registered didn't exist.
This wasn't a mistake either. If you look at the data sharing section they said they're undecided if they will give you the raw data to check their results. I wonder why?
> Aging supports the position of the International Committee of Medical Journal Editors (ICMJE) on trial registration. All trials initiated after 1 July 2005 must be registered prospectively in a publicly accessible registry (i.e., before patient recruitment has begun), or they will not be considered for publication. Authors of randomized controlled trials must adhere to the CONSORT reporting guidelines appropriate to their trial design. Please check the CONSORT statement Web site for information on the appropriate guidelines for specific trial types. Before the paper can enter peer review authors must: 1) name in the paper trial registry, trial registration number, and IRB and 2) provide a copy of the trial protocol and a completed CONSORT checklist as supporting files. The CONSORT flow diagram must be included as Figure 1. Any deviation from the trial protocol must be explained in the paper. Authors must explicitly discuss informed consent in their paper, and AGING reserves the right to ask for a copy of the patient consent form. Information on statistical methods or participants beyond what is indicated in the CONSORT statement should be reported in the Methods section.
Where's the CONSORT checklist? Where's the CONSORT flow diagram? "Any deviation from the trial protocol must be explained in the paper"
Let's look at another random paper from the same issue of Aging: "Association between tooth loss rate and risk of mild cognitive impairment in older adults: a population-based longitudinal study" (https://www.aging-us.com/article/203504/pdf). Its Figure 1 is the CONSORT flow diagram, as required by Aging's own rules.
This is a boondoggle that doesn't even meet the most basic criteria for scientific publications.
I stand corrected on that point then. Thanks for the info!
So it looks like they cherry-picked the 6 subjects who did (by chance) worse on the memory task than the other tasks. Then regression to the mean happened, and voila! Some positive results.
I disagree. It's more just sensationalization by journalists. The actual paper talks more about an existing mouse model and how their findings are consistent with it. The most outrageous claim they make is that the study demonstrates the efficacy of HBOT on Alzheimer's, which is technically true but misleading, in the sense that it only demonstrates how effective it could be.
The authors should be happy to have added to ongoing Alzheimer's research. Their actual claims are much less bold than the article makes them out to be.
Oh how I wish that were true! Journalists aren't sensationalizing anything here.
Direct quotes from the paper:
> Motivated by these findings, we exposed elderly patients with significant memory loss at baseline to HBOT and observed an increase in CBF and improvement in cognitive performances. This study demonstrates HBOT efficacy in hypoxia-related neurological conditions, particularly in AD and aging.
> In summary, we showed here that HBOT offers multi-faceted neuroprotective effects on the complex pathology of Alzheimer’s disease and also improves CBF and cognition in humans.
> Given that HBOT is considered a safe and tolerable treatment currently being used in the clinic, the increasing number of clinical trials showing that HBOT improves cognitive function in patients suffering from chronic brain damage, the pre-clinical studies elucidating mechanisms of HBOT action, and the fact that there is presently no effective intervention for AD, HBOT should be considered as a therapeutic approach to slow the progression or even improve the pathophysiology responsible for this disease.
I can't imagine bolder claims than "we showed here that HBOT ... improves CBF and cognition in humans" and "HBOT should be considered as a therapeutic approach".
I mean this literally. I can't imagine it. What else could they have said that would be more direct aside from HBOT will cure you 100% guaranteed?
This is true (and rarely mentioned); rodents don't get AD. But they can be genetically engineered to produce the proteins that researchers consider the hallmarks of AD. The fact that this is a circular definition never fails to frustrate me when discussing the misuse of animals for AD research. (and the fact that billions have been wasted on proving that this circular definition doesn't apply to humans is especially sad)
> The study population comprised adults (5 males, 1 female) with significant memory decline aged 64 years and older
> The protocol consisted of 60 daily sessions at 5 sessions per week within a three-month period. Each session included breathing 100% oxygen by mask at 2ATA for 90 min with 5 min air breaks every 20 min.
> At baseline, patients attained a mean global cognitive score (102.4±7.3) similar to the average score in the general population normalized for age and education level (100), while memory scores were significantly lower (86.6 ± 9.2). Cognitive assessment following HBOT revealed a significant increase in the global cognitive score (102.4 ± 7.3 to 109.5 ± 5.8, p=0.004), where memory, attention and information processing speed domain scores were the most ameliorated (Figure 8C). Moreover, post-HBOT mean memory scores improved to the mean score (100.9 ± 7.8), normalized per age and education level (100).
I skimmed the article for the pertinent table. Three observations:
1. There is no control group. Since people get better at cognitive tests with practice, we don’t know whether these improvements were due to the treatment or just the normal practice effect. (Though if I were a betting person, my money would be on the latter.)
2. Cognitive tests are not noiseless. The same person may vary by a few points between repeated tests. The modest improvements on most tests found here might not be above this noise floor.
3. The most impressive result (memory) is almost certainly regression to the mean. That is, the group did unusually poorly on this task in the first session by chance, then they performed more typically in the second, which looks falsely like a big improvement.
Basically, I think this paper provides no meaningful evidence of cognitive improvement in humans. I can’t speak about the mice stuff, that might be fine.
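To make point 3 concrete, here is a minimal simulation with made-up numbers (not the paper's data, and not their analysis): score a pool of hypothetical people on a noisy test, keep only the 6 with the worst baseline memory scores, and retest them with no treatment effect at all. The selected group's mean typically climbs back up on the second test purely through regression to the mean.

```python
# Toy regression-to-the-mean sketch with made-up numbers (not the paper's data):
# select the 6 lowest baseline scorers, retest with no treatment effect,
# and watch the group mean drift back toward the population mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pool, n_selected = 60, 6
true_ability = rng.normal(100, 10, n_pool)            # stable underlying ability
baseline = true_ability + rng.normal(0, 8, n_pool)    # test 1 = ability + noise

worst = np.argsort(baseline)[:n_selected]             # the 6 lowest baseline scores
followup = true_ability[worst] + rng.normal(0, 8, n_selected)  # test 2, no treatment

_, p = stats.ttest_rel(followup, baseline[worst])
print(f"baseline mean {baseline[worst].mean():.1f} -> "
      f"follow-up mean {followup.mean():.1f}, paired p = {p:.3f}")
```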
While a control group is essential to determine whether the effect is really due to the treatment or to practice, if they did the mouse study first and expected a positive effect, they might have felt pressure to include only a treatment group in the human arm of what looks like a pilot trial. Maybe their original proposal included a control group, but their institutional review board decided against it?
I'm a little surprised they wouldn't have done a larger secondary trial before publishing; the impact would have been much greater if the effect held up in a larger group, even without a control group, or perhaps with an alternative treatment schedule (shorter exposure times: is treatment duration correlated with degree of improvement? Then you have less of a moral dilemma about denying participants a useful treatment while still getting some data indicating the treatment itself actually does something).
I am not a practitioner, so I have no idea if this is actually plausible, but it would seem absurd to me that an experimental, out-of-left-field treatment for Alzheimer's, in a <10 person study, could be denied a control group on ethical grounds. There are tens if not hundreds of millions of Alzheimer's patients not receiving this potential treatment already.
This. There is basically zero chance that an ethics review would see this as a case in which a control group is unethical. I have been through plenty of IRB processes.
Resource constraints are a much more plausible explanation - running the study with a control group would cost twice as much (in time as well as money). This is either a pilot to apply for funding to run a better-controlled trial, or it's just not scientifically serious. Both are very plausible, in my experience.
Thanks for the perspective. I have only done data analysis after the data is generated, and have never actually had to deal with an IRB directly for the design of a study.
6!? They used an N=6 to make such a bold claim? If I had ever proposed that as a sample size for any scientific research, my professor would have run me off campus.
With 6 subjects, it feels a bit silly to even turn on the math. 5% is about two legs of a participant. I think humans can process those low numbers instantly, intuitively; we can do it by pattern recognition, without the need for abstraction. The statistics of a dinner party. I think even some animals could do some "math" on 6 pieces of food.
Should be called "collected case studies" or something.
It is a paired t-test, as the test takers are the same individuals. This is confirmed in the statistical analysis section of the actual paper. Paired t-tests have more statistical power because between-subject variation cancels out (the same group is measured twice).
Edit: One way to think about this: assuming the effects are random and symmetric, the probability of any one subject's score going up is 1/2. If all 6 participants had increased scores, the sign-test p-value is already 1/2^6, and that is without taking the magnitude of the shift into account.
You're right about the paired test (although I think there's an issue with the group variance). But your final calculation is for a sign test. For a t-test where every point shifts a very little bit (e.g. 0.01 stddev), p is larger, I think.
Yes, my edit was more meant to illustrate how 6 data points can yield a significant result. The paired t-test is all about the differences, so if all test subjects shift up by exactly 0.01 (no variance in the differences), then the statistic will actually be infinite (i.e. very large) and p is theoretically 0 (very small). I also agree that the paired testing needs a more convincing justification from the researchers.
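For anyone who wants to see the numbers, here is a toy example with hypothetical scores (not the paper's data): six subjects each improve by a few points, the paired t-test looks only at those six differences, and the (1/2)^6 sign-test figure from the exchange above falls out directly.

```python
# Hypothetical numbers to illustrate the exchange above: the paired t-test
# uses only the per-subject differences, so six small but consistent shifts
# can be "significant" even though the group SDs overlap heavily.
import numpy as np
from scipy import stats

before = np.array([86.0, 92.0, 78.0, 95.0, 88.0, 81.0])
after  = before + np.array([5.0, 7.0, 4.0, 6.0, 5.0, 8.0])  # everyone goes up a bit

t, p = stats.ttest_rel(after, before)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

# Sign-test intuition: if up/down were a coin flip, all six moving the same
# way has probability (1/2)**6, before even considering the magnitude.
print("sign-test style figure:", 0.5**6)
```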
Unfortunately, a lot of medical science involving expensive laboratory resources and trials currently rests on sample sizes in this range (~5-20). Then, whenever some technical innovation makes it cheaper to collect the biosignals, there is a sudden series of breakthroughs in understanding what is really happening. All science is a work in progress.
Since it's not immediately obvious what the issue is (it's tucked away in the "Reception" section):
> The peer review process employed by the journal has been criticized by Jeffrey Beall, a university librarian and expert on predatory open access publishing, who also included the journal and its publisher on his list of "potential, possible, or probable predatory scholarly open-access journals" in July 2015.
106 is well within a standard deviation of both of those. In fact, 109.5 is only one standard deviation away from the first, and the reverse is almost true. How do they get p = 0.004?
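If, as noted above, it is a paired comparison (the same six people tested twice), the group SDs in that summary don't determine the p-value; what matters is the spread of the six within-person changes, which the summary doesn't report. For contrast, here is what an (inappropriate) unpaired test on just those summary statistics would give:

```python
# Unpaired t-test computed from the quoted summary stats alone
# (mean ± SD, n = 6 in each "group"); this is NOT the test the paper ran,
# it just shows why the quoted SDs feel inconsistent with p = 0.004.
from scipy import stats

t, p = stats.ttest_ind_from_stats(mean1=102.4, std1=7.3, nobs1=6,
                                  mean2=109.5, std2=5.8, nobs2=6)
print(f"unpaired test on summary stats: p = {p:.3f}")  # much larger than 0.004

# A paired test instead depends on the six within-person changes, which the
# summary doesn't report; if those changes were consistently positive,
# a small p like 0.004 is entirely possible (see the hypothetical example above).
```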
The popular press article completely fails to state the sample size or how subjects were selected. It seems 10+10 mice were also studied. Maybe a little early to call it a breakthrough...
What do you mean? The sample size was 6, and they were selected for having the same symptoms. There is more information on the statistical analysis in the paper, but the conclusion is that it was significant.
If this is a significant treatment of Alzheimer's then that's a breakthrough to me, what bar would you set?
I mean that the article on technology.org doesn't state the sample size, but still claims this has proven the effectiveness of the treatment. You would have a hard time getting a treatment cleared for public use with those numbers. If you treat 6 people and they seem to move in some direction, you have to put that into context with some sort of control, or else it is not possible to evaluate how much of the improvement was due to patient selection or pure chance. I think that the OP's article overstates the significance at this point and should have highlighted that further studies are necessary.
> If this is a significant treatment of Alzheimer's then that's a breakthrough to me, what bar would you set?
Wouldn't some kind of comparison against a placebo, or at least a dose-dependent improvement, be the minimum requirement to actually believe the paper found any kind of effect at all?
If you randomly give 6 Alzheimer's patients an apple a day for 5 weeks, they could show improvements in cognitive tests by pure accident, especially for a disease that is known to ebb and flow.
Statistical significance is not the same as clinical significance. Statistical significance in a small, non-placebo-controlled trial is even less likely to translate into a clinically meaningful result.
HBOT is a widely used 'alternative' treatment for other neurological conditions. It's controversial, in part because there are good reasons to believe it can cause neurological damage. This study is NOT a reason for anyone to run out and get HBOT. Even if the effect is real (and I don't think the study has enough info to draw that conclusion), you would need much larger trials to balance benefits and side effects.
If the effect is real, then would it not indicate that hypoxia is part of the pathology? There might be other ways besides HBOT to increase oxygen delivery to the brain. The data point is the justification for further research grant proposals.
Doctors are stressed, overworked, and only keep up on cutting edge research at yearly conferences and sometimes through the grapevine. Your typical doctor won't know anything about HBOT.
That’s why you should see a good doctor, not a typical one. A good doctor would be interested in any research that is well founded despite being non-consensus. That doesn’t mean they will endorse it. But people like the person I was replying to cut them out of the process by presuming they already know the right choice for a patient.
Lots of treatments are "potentially beneficial", according to various "authorities", such as grapefruit as a cure for cancer. Going outside the scientific consensus is essentially an argument for alternative medicine.
I'm using 2.4 ATA, 90-minute back-to-back sessions to treat my COVID-induced brain fog, with great results.
It's the first time in a year I can think clearly. Anyone with long covid should seriously look into this as a treatment, it has some solid science behind it and is very safe, albeit expensive.
At this point I've tried 14 different drugs, not a single thing has moved the needle except this.
Too early to say how transient the effect is; it's very possible my symptoms return after a few weeks. There are conditions that HBOT can outright 'cure': the bends and carbon monoxide poisoning are the first that come to mind. So I don't think it's out of the question that its effects can be permanent.
One of the causes of my symptoms will be straight up organ damage, and there's plenty of evidence HBOT speeds up the healing of tissue damage. So any progress in this area should be permanent.
I found some instructional videos on how to make the chamber, looks very easy. It's only 5 PSI of pressure and an O2 mask. I already know how to make O2. This might be a fun project. On further reading, these aren't legal to have in the home in some countries due to fire codes.
5 psi chamber pressure won't get you to 2.4 ATA of inspired oxygen. Be really careful with building a chamber yourself. A fire inside a pressure chamber is a horrible way to die. Asphyxiation and explosive decompression are also risks.
Correct, 2.4 ATA specifically is 35.27 PSI (absolute). I was just suggesting I can start with a small amount of pressure to test it myself. Avoiding explosions is rather simple: the air in the chamber can be at normal O2 levels, which just means I need a proper mask that does not leak the O2 or the air I exhale. I already have O2 sensors; I can just set them to alarm at 25% O2. Watching the videos on making the chamber is actually giving me a bit of a chuckle. I see people building really complex steel chambers with hundreds of bolts as if they were expecting 50k PSI. I bet I could do this with plexiglass or even plastic. I can even add several pressure relief valves that open just above the threshold.
I found something. "ata" seems to mean "atmosphere absolute" and describes the total pressure:
"The ata unit is used in place of atm to indicate the total pressure of the system, compared to a vacuum. For example, underwater pressure of 3 ata would mean that this pressure includes 1 atm of air pressure and thus 2 atm due to the water.", from https://en.wikipedia.org/wiki/Standard_atmosphere_(unit)#Pre...
Okay, this sounds great. What is the specific protocol they talk about in the article? Is it some breathing technique in the chamber, or four one-hour sessions per day for 5 years?
My guess is that the mask is the only source of pure oxygen and that the rest of the atmosphere in the chamber has less oxygen than atmospheric air, making it still breathable because of the higher pressure.
Ignoring the obvious problems with the study design that others have pointed out, it does seem like an interesting concept.
We see several stories a year discussing the links between air pollution and dementia, and also a link between more exercise and less dementia. So it makes some sense to me that there might be a link between decreasing lung function and increasing dementia. If the function of the HBOT is to increase the ability of the lungs to oxygenate blood, maybe there is a subtle but important threshold of blood oxygenation that we don't understand well enough.
Note: I am not a medical professional at all; it just seems like there is a "lung function" commonality in several of these stories.
If this turns out to be true, I wonder if it'd be practical (given the financial resources) to have pressure-sealed rooms that you spend a lot of time in at home (e.g. your office). These could be precisely controlled (O2, CO2 levels, pressure, …) and possibly increase both current and future performance. The main problem seems to be the time needed to exit, because an abrupt loss of pressure would be hazardous. But if you have a well-structured day it might work. This could be very interesting for high-earning knowledge workers, imo.
Breathing oxygen at a partial pressure over about 0.5 atm causes progressive pulmonary oxygen toxicity (loss of vital capacity). Brief exposures like the treatment protocol described in this paper are fine but it's not something you want to do all day every day. Saturation divers who live under pressure for days are careful to control chamber oxygen pressure below that limit.
Nitrogen in regular air is an anaesthetic. It literally makes you stupid, and the effect increases with pressure. If you want to be smarter you need to replace the nitrogen with helium. The high-pitched voices are going to make Zoom meetings hilarious.
So generally working under pressure is totally impractical.
On a more feasible scale, a similar idea could be better air filtration/ventilation. Better ventilation in homes or offices to decrease CO2 levels could be an easier way to achieve a similar effect.
If true, this is even bigger, as it indicates new directions to look for cures. No longer stuck on plaques: the disease may be metabolic, respiratory, or pulmonary, and this could be more direct evidence in that direction.
1. Isn’t hyperbaric therapy used by athletes to reduce inflammation?
2. Isn’t the failure of the amyloid hypothesis leading to researchers considering whether inflammation is the progenitor of Alzheimer’s?
So if hyperbaric therapy does work… could that be a sign in favor of the inflammation hypothesis? Although, the authors suggest the results are due to improved oxygenation rather than any anti-inflammatory effects.
Also, I’m not a doctor nor am I educated in medical things.
There's a huge difference between the acute, localized inflammation from a sports injury versus the chronic, systemic inflammation associated with some metabolic diseases. Acute inflammation is actually a useful part of the healing process so we don't necessarily want to reduce it unless it's causing other problems.
Hyperbaric oxygen therapy appears to be somewhat effective for accelerating the healing of certain soft tissue injuries, so athletes have used it. But it's not clear whether that therapy is any more effective than other recovery modalities; research has been limited.
The amyloid hypothesis is not dead, just like the tau hypothesis is not dead. The $1m question is what role the proteins play, at what stage, and to what degree.
There are so many plausible Alzheimer’s theories that it makes me wonder: are multiple of them right? Is Alzheimer’s really many different diseases that present the same way?
Quite possibly - however we haven't developed enough discerning power to differentiate between what may be a potential 'Alzheimer's Cluster' and say Fronto-temporal, Lewy Body, Vascular, etc etc.
I assume "the amyloid hypothesis" is the hypothesis that plaques cause Alzheimer's. That hypothesis is indeed dead. There are plenty of successful plaque treatments. Basically none of them are effective.
> Alzheimer's disease is believed to occur when abnormal amounts of amyloid beta, accumulating extracellularly as amyloid plaques, and tau proteins, accumulating intracellularly as neurofibrillary tangles, form in the brain affecting neuronal functioning and connectivity, resulting in a progressive loss of brain function.
Wikipedia is correct to say it's widely believed that plaques cause Alzheimer's, because the new evidence is still encountering resistance. The hallmark of science is falsification. We now have many drugs that remove amyloid plaques with few side effects. Removing plaques does not improve symptoms or halt progression [1, 2].
Furthermore, 60% of people over 80 have these plaques, but only 10% develop dementia [3].
Ergo, the hypothesis that plaques cause Alzheimer's is pretty much falsified. More than likely, plaques are a biomarker of some kind that may be associated with cognitive decline.
I can't get the whole paper, but your first link seems to be saying that we've put all our eggs into one basket and we are ignoring too many other theories. It doesn't go so far as to say the amyloid angle is bunk, does it?
Your second link says this:
> researchers have said experimental treatments targeting the beta amyloid protein might not have worked in the past because the doses were too low or the patient populations used for the trials should have been younger
Again, that doesn't seem to be concluding the amyloid & tau street is a dead end.
The Alzheimer's Association site (alz.org) agrees:
> Alzheimer's has no cure, but one treatment — aducanumab (Aduhelm™) — is the first therapy to demonstrate that removing amyloid, one of the hallmarks of Alzheimer’s disease, from the brain is reasonably likely to reduce cognitive and functional decline in people living with early Alzheimer’s.
> I can't get the whole paper, but your first link seems to be saying that we've put all our eggs into one basket and we are ignoring too many other theories. It doesn't go so far as to say the amyloid angle is bunk, does it?
> Furthermore, the blind adherence to the “Amyloid code” [297] has resulted in the overwhelming rationalization of clinical trial failures - the lack of validation of the amyloid hypothesis – as being due to recurring issues in their planning and execution, not that the hypothesis has failed in its validation. This viewpoint, described as “sheer obstinacy in the face of compelling proof ..[of being]… on the wrong track” [285] has also compromised the translational approach to AD therapeutics making the criteria for compound advancement from preclinical research to the clinic exclusively based on either a blind faith in a hypothesis or animal models that have proven irrelevant.
This addresses the special pleading you quoted from people who continue trying to explain away the failures of the amyloid model.
The writing is on the wall, but as they say, "science advances one funeral at a time".
Which does nothing to explain why people with APP mutations, such as those found in familial AD (fAD), develop early-onset Alzheimer's. So the mystery of amyloid and its role in dementias persists.
The simplest explanation is that APP mutations cause some other dysfunction, which then causes Alzheimer's. It's that other dysfunction we should be looking for.
We still haven’t proven that amyloid plaques don’t bear at least some level of responsibility. Just because we now have the ability to clear plaques, and it hasn’t led to improvement, doesn’t prove that amyloid plaques aren’t involved in disease progression.
Now let me be clear: I’m not a cheerleader for any of them. I have read every original paper on the pathophysiology of AD from the 60s through to 2014, and I saw in detail the debate bounce back and forth from amyloid to tau, with sprinkles of ‘type 3 diabetes’ and inflammation, and back again like a game of tennis. The science will come out the way it will come out.
But there is nothing about our ability to remove plaques with no change in the disease that says amyloid plaques don’t have a role in disease pathogenesis. It could be that the damage is already done and clearing them is irrelevant. We just don’t know enough.
I think the etiology is best summed up as "energy deficiency", basically the mitochondria aren't able to provide enough energy. Nutritional repletion, ketogenic diets, ketone supplementation (specifically ketone esters) and everything else that improves energy supply / circumvents issues (impaired glucose metabolism being #1) will help.
There are a lot of people who appear normal and even have a hemoglobin A1c level in the normal range but are still insulin resistant. Usually the best way to check is with a glucose tolerance test.
Scuba diving oxygen exposures are much more limited. Most divers still breathe compressed air, so a typical dive to, let's say, 66 ft / 20 m would only cause an oxygen partial pressure of 0.63 ATA.
Some of us do go deeper and use mixed gasses with more oxygen, but we carefully limit oxygen exposure to ensure safety. Acute oxygen toxicity can cause a seizure which is usually fatal underwater. So we never go above 1.6 on the oxygen, and then only for brief periods.
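The arithmetic behind those figures, for anyone curious (using the usual rule of thumb of roughly one extra atmosphere per 10 m of seawater; the exact numbers vary slightly with salinity):

```python
# Oxygen partial pressure while diving: ambient pressure is roughly
# 1 ATA + 1 ATA per 10 m of seawater, and pO2 is the O2 fraction
# of the breathing gas times that ambient pressure.
def po2_ata(depth_m: float, o2_fraction: float = 0.21) -> float:
    ambient_ata = 1.0 + depth_m / 10.0
    return o2_fraction * ambient_ata

print(f"Air at 20 m: pO2 = {po2_ata(20):.2f} ATA")   # ~0.63 ATA, as stated above

# Depth at which plain air reaches the 1.6 ATA oxygen limit mentioned above:
max_depth_air = (1.6 / 0.21 - 1.0) * 10.0
print(f"Air hits pO2 = 1.6 ATA at about {max_depth_air:.0f} m")
```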
Citation needed; I know research into psychedelics is difficult due to restrictions, but anecdotal evidence is not enough to start recommending it without listing the necessary caveats.
I mean "Ivermectin has been shown to significantly help with Covid" has the same weight to it, and I'd never recommend anyone take horse dewormer. Maybe if they had worms and it was the only thing available.
Lion's mane isn't psychedelic, but it's noticeably psychoactive. Similar to caffeine in effect, intensity, and duration, without the jitters or sleep disturbance.
I've never heard of it having any stimulating effects; indeed, I've heard it can be helpful with sleep. Are you able to point to supporting information?
In my last comment I was speaking from experience too - I tried taking it for 3 months, and never noticed any effects whatsoever.
Horse dewormer is just one of the ways ivermectin is used, just like ketamine is a horse tranquilizer but has other uses. I'm not advocating that people take it, or buy horse dewormer to get it, but you shouldn't call it that in general.
Ivermectin is an FDA approved drug and is regularly prescribed to humans. Sure, it probably doesn't do anything for covid but it's not just 'horse dewormer'.
You can remove the "probably". The manufacturer of Ivermectin states there is no scientific evidence [0] supporting the use of the drug for COVID-19. So much for the notion some espouse that "Big Pharma" pushes their drugs on us without fail.
It's a pity that the whole issue has become politicized, but that's modern times for you. That's why ivermectin is called 'horse dewormer' rather than simply a medicine for parasite infections with a number of human applications. It's a term like 'XXXX denier' which is used to tie a political adversary to concentration camps by playing on our disgust or fears.
Does it do anything for COVID? Dunno. Should it be a discussable matter without internet censorship and crazy people twitterstorms? Maybe. Am I sick of being propagandized by everyone? Yes.
If you're so certain that ivermectin doesn't do anything for COVID-19 then you ought to let the NIH know. They're currently running a huge clinical trial on it.
I can’t edit the comment, but this provides some more detail on Lions Mane. Also note that it is not a psychedelic mushroom.
“ The evidence so far has shown that H. erinaceus mycelium enriched with its active compounds is capable of delaying neuronal cell death in rats with neurodegenerative diseases, such as ischemic stroke, Parkinson's disease, Alzheimer's disease, and depression. Moreover, results have indicated that administration of H. erinaceus mycelia enriched with its active compounds can promote functional recovery and enhance nerve regeneration in rats with neuropathic pain or presbycusis. Despite that more clinical research is needed to fully understand the potential applications of erinacine-enriched Hericium erinaceus mycelium, the majority of preclinical data strongly suggests that it is safe and offers much-needed neuroprotective applications.”
Source: Neurohealth Properties of Hericium erinaceus Mycelia Enriched with Erinacines
6 subjects!? Did anyone actually check the study they registered? https://clinicaltrials.gov/ct2/show/NCT02790541