Hacker News

Without any evidence of efficacy, this article doesn't strike me as much different from "I went into the woods and gathered up some herbs and made myself an elixir that cures covid". I guess it's neat that you did all that stuff, but unless it actually helps, who cares? Doing it with science-y ingredients instead of herbs and ordering them online instead of harvesting them from plants doesn't magically mean it actually works.



The difference, however, is that this article is based on the scientific method, while the "go into the woods and gather herbs" approach is based on mythology.

Now the reasonableness of experimenting on yourself, even in a scientifically "valid" way, is certainly arguable. But the elements are all there: a theory of the immune system, a hypothesis that it can be trained in this way, and an experiment that trains the immune system in a known way with the experimental training element.

As others have pointed out, the immune system is designed to kill cells it doesn't like. And it has been demonstrated in other scenarios that it can target cells that are vital to one's survival, resulting in death.

Thus the risk in the experiment is that it will successfully invoke an immune response, it just won't be the one that was anticipated.

Not surprisingly, this is the whole point of animal trials: to get a feel for what might happen.


I would grant that it has a lot of the trappings of the scientific method, and uses a lot of the terminology. But I think we're kidding ourselves if we pretend that there's much knowledge to be gained here. It's a non-controlled trial with n=2 using ingredients of uncertain purity and a loose experimental protocol ("We’ll add the other three peptides, take another few weeks of boosters, maybe adjust frequency and/or dosage - we’ll consider exactly what changes to make if and when the optimistic test comes back negative."). I'd certainly put the net contribution to scientific knowledge below that of someone who just signs up for a clinical trial.

It's like a scientific method LARP.


On the other hand, while the sample size is small, you can’t beat the representativeness of their sample.

Self-experimentation, not in order to generalize conclusions to the rest of the world, but to find conclusions useful for one's own purposes.


It does not have a sample size. It was not sampled and one is not a size.

The most basic cases of statistics run on point estimate evaluations. With one datum you just have a point, not an estimate, and you cannot evaluate anything.
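The point about a single datum can be made concrete: even Python's standard library refuses to compute a sample standard deviation from one observation, so there is nothing from which to build an estimate or an interval. A minimal sketch (the measurement value is just a placeholder):

```python
import statistics

measurement = [1.23]  # a single observation: a point, not a sample

try:
    # Sample standard deviation requires at least two data points,
    # so with n = 1 there is no spread to estimate at all.
    statistics.stdev(measurement)
except statistics.StatisticsError as err:
    print(f"cannot estimate: {err}")
```

With one value you can report the point itself, but any measure of uncertainty around it is undefined.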


Okay, I love "scientific method LARP"! Granted it is a bit elitist, but it still conveys a solid point. As a counterpoint, was Edward Jenner[1] a LARPer or a scientist? Follow-up question: was his net contribution to science "big" or "small"?

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1200696/


Jenner lived in the 1700s and was doing the best he could with the tools of the time. Still, his report covered 20+ people and included challenge results in which subjects were exposed to smallpox to verify their immunity. If the role models for this exercise are all from the 18th century, that should ring some alarm bells.


He's not really Doing Science in the manner you are describing, nor is that the intent as far as I can tell. The aim is not to contribute to the body of knowledge but instead to experiment (yes, it is literally LARPing, but that's a perfectly fine hobby) and hopefully reduce the chance of getting covid.


It's hardly the scientific method when the outcome isn't actually measured. It is entirely unknown whether this whole procedure has done anything at all.


The author said he is going to try to take measurements to the extent that is easily possible:

> So, we’ll do (up to) two more blood tests. The first will be two weeks after our third (weekly) dose; that one is the “optimistic” test, in case three doses is more-than-enough already. That one is optimistic for another reason as well: synthesis/delivery of three of the nine peptides was delayed, so our first three doses will only use six of them. If the optimistic test comes back positive, great, we’re done.

> If that test comes back negative, then the next test will be the “more dakka” test. We’ll add the other three peptides, take another few weeks of boosters, maybe adjust frequency and/or dosage - we’ll consider exactly what changes to make if and when the optimistic test comes back negative. Risks are very minimal (again, see the paper), so throwing more dakka at it makes sense.

> Consider this a pre-registration. I intend to share my test results here.

Pre-registration for this kind of thing seems pretty good, and I am looking forward to seeing the results.


I've taken a whopping one class about immunology, so I'm not even remotely qualified. But maybe I'm just qualified enough to call this nonsense.

The author is planning to take a "test". What test? The standard lab tests are proprietary qualitative tests looking for antibodies to a specific synthetic antigen. For a (new, untested!) vaccine, this is a dubious experiment at best. The vaccine could produce an amazing antibody response that doesn't trigger the test, or it could produce a tiny, useless response that happens to trigger the test. And the test is unlikely to give numbers.

But this vaccine is using short-ish peptides. There are two (at least? I think just two) different types of T-cell receptor, and they are sensitive to different lengths of peptide. Testing for those is complicated and expensive.

I don't know how one would find a lab to do the relevant tests, but it's probably not so easy.


And yet he titled the post "Making Vaccine". Unless you make it all the way through to the end, you might assume that he actually did make a vaccine -- and judging by the comments on HN, it looks like some people are assuming this. It all seems a bit... irresponsible?


Irresponsible or less wrong?


From the article (towards the end):

> If the vaccine induces an immune response in the blood, then it almost certainly induces one in the mucus lining, but the reverse does not hold. So a positive blood antibody test means it definitely works, a negative antibody test is a weak update against.

> So, we’ll do (up to) two more blood tests. The first will be two weeks after our third (weekly) dose; that one is the “optimistic” test, in case three doses is more-than-enough already. That one is optimistic for another reason as well: synthesis/delivery of three of the nine peptides was delayed, so our first three doses will only use six of them. If the optimistic test comes back positive, great, we’re done.

> If that test comes back negative, then the next test will be the “more dakka” test. We’ll add the other three peptides, take another few weeks of boosters, maybe adjust frequency and/or dosage - we’ll consider exactly what changes to make if and when the optimistic test comes back negative. Risks are very minimal (again, see the paper), so throwing more dakka at it makes sense.

> Consider this a pre-registration. I intend to share my test results here.


Does "go into the woods and gather herbs" follow the scientific method if I write my observations neatly in a blog post?

The author says they will accept that their homebrew worked if they get a positive antibody test (even though plenty of people will get a positive antibody test without any administration of homebrew or vaccine), but will not accept that it failed if the antibody test is negative.

As such, their hypothesis is not falsifiable and it is not science.


I think you misunderstand what "falsifiable" means. This hypothesis is absolutely falsifiable. He just doesn't have the resources to conclusively falsify this hypothesis, because he's working with a small sample size and doesn't have affordable access to testing for immunity in the mucus lining.


The hypothesis "my homebrew works at scale" is falsifiable. As you say, that really isn't under test, nor is it a hypothesis the author cares about.

The hypothesis "my homebrew has worked on me" is what the blog asks ("I'm curious whether it will work - or whether we'll be able to tell that it works."). That is not falsifiable.


> The hypothesis "my homebrew has worked on me" is not.

If he ends up getting COVID-19 in a few months, he'll have pretty strongly falsified that hypothesis.

You cannot dismiss a hypothesis as "not scientific" simply because it's likely that your search for evidence will be inconclusive.


I don't agree that is technically falsified.

I know you used "pretty strongly" as sarcasm, but falsifiability is absolute - and this is not.

(On your later point about my claim that this is not scientific, we are back to the comparison with gathering roots and berries - either both are, or both are not)


>I don't agree that is technically falsified.

> I know you used "pretty strongly" as sarcasm, but falsifiability is absolute - and this is not.

I'm having a very hard time parsing what you're saying. At this point, it looks like you're trying to make an argument that "falsifiability" must mean that it is both possible and practical to reject a hypothesis with 100% certainty, and if you can't then you're not doing science. Would you care to explain more clearly why the position you're taking is less extreme and absurd than that?

(Additionally, I did not use "pretty strongly" in any sarcastic way. I used it to acknowledge the possibility of confounding factors that I did not care to enumerate, which mean that even the experimental outcome of getting COVID-19 after inoculation with the homemade vaccine would not be a 100% certain rejection of the hypothesis that the vaccine conferred some protection against COVID-19. But if you're operating in a mindset of 100% certainty being achievable and necessary for science, then I can see how you would misunderstand me.)


My point was to back up the grandparent.

This is equally science as going into the woods, picking some berries and roots, grinding them up, putting them up your nose, doing an antibody test, and writing your conclusions up in a blog post. There's no difference.

(But, yes, I think that writing hypotheses with 100% falsifiability, before challenging them practically, is quite a good definition of "scientific method", based on Karl Popper's work. You can compare with what Wikipedia has to say.)

Edited to respond to your edit: The word falsifiability is deliberately used to distinguish that we are talking about False, the Boolean state. The confounding factors are important - the confounding factors here mean I can't prove it False; there are no circumstances whereby the author has to accept that their hypothesis was False.


> (But, yes, I think that writing hypotheses with 100% falsifiability, before challenging them practically, is quite a good definition of science)

It seems like you're still not making the necessary distinction between whether it is possible for an experiment's outcome to falsify the hypothesis, vs whether it is guaranteed that the experiment will falsify the hypothesis if it is in fact wrong. The latter is an unreasonable requirement to make part of your definition of science.


> there are no circumstances whereby the author has to accept that their hypothesis was False.

The author described such a circumstance, but then explained that he did not have access to the necessary lab testing to actually do so. The hypothesis is falsifiable in principle even if the researcher does not expect to have the equipment necessary to measure falsification by that outcome.


(I should clarify that I don't mean the author's opinion is important to falsifiability)

> The author described such a circumstance, but then explained that he did not have access to the necessary lab testing to actually do so.

In the parent article? Could you quote that part?


The scientific method isn't "make up something plausible-sounding and then assume it will work". That is, at best, the first step.



