
I work at an immunology lab next door to most of the Oxford vaccine group.

I think the quick (and overly simplistic) answer to this question is that the models (e.g. convolutional networks) that predict which chunks of viral (or human) proteins are displayed to immune cells (HLA presentation), along with other kinds of models (like those predicting cross-reactivity of T cell receptors against mutated epitopes), are still really primitive. That's mostly because of low-quality training datasets; it isn't an inherently hard problem like, say, protein folding.
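
To make that concrete, here is a minimal sketch (not from the comment; the architecture, peptide length and names are all illustrative) of the kind of model meant here: a tiny 1D convolutional network in PyTorch that takes a 9-mer peptide and outputs a predicted probability of HLA presentation.

    # Toy sketch of an HLA presentation scorer: one-hot peptide -> conv -> probability.
    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def encode_peptide(peptide: str) -> torch.Tensor:
        """One-hot encode a peptide as a (20, length) tensor."""
        onehot = torch.zeros(len(AMINO_ACIDS), len(peptide))
        for pos, aa in enumerate(peptide):
            onehot[AA_INDEX[aa], pos] = 1.0
        return onehot

    class PresentationCNN(nn.Module):
        """Toy convolutional scorer: peptide -> probability of HLA presentation."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv1d(in_channels=20, out_channels=32, kernel_size=3, padding=1)
            self.head = nn.Sequential(
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),  # pool over peptide positions
                nn.Flatten(),
                nn.Linear(32, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 20, peptide_len) one-hot peptides
            return torch.sigmoid(self.head(self.conv(x)))

    if __name__ == "__main__":
        model = PresentationCNN()
        batch = torch.stack([encode_peptide("SIINFEKLV"), encode_peptide("GILGFVFTL")])
        print(model(batch))  # untrained scores in (0, 1)

Nothing about the architecture is the bottleneck here; the low-quality training labels mentioned above are.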

Hence, rushing a new vaccine is not easy: there is a lot of lab work to do. In fact, all the COVID vaccine designs had essentially the same payload (the whole spike protein). A modern subunit vaccine would typically include only small chunks of the spike (to increase efficacy and avoid side effects), but that was not trivial to do quickly without good in silico models.
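
In principle, once you trust a predictor, picking those chunks is just a ranking problem. A hedged sketch, assuming a scoring function like the toy CNN above (predict_presentation is a placeholder, not an existing tool): slide a 9-residue window over the spike sequence and keep the top-scoring peptides.

    def top_epitopes(spike_sequence, predict_presentation, k=20, window=9):
        """Score every window-length chunk of the spike and return the k best candidates."""
        peptides = {spike_sequence[i:i + window]
                    for i in range(len(spike_sequence) - window + 1)}
        return sorted(peptides, key=predict_presentation, reverse=True)[:k]

In reality the rankings from current models aren't trustworthy enough to skip the lab work, which is part of why everyone shipped the whole spike instead.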




Why not just update the vaccine with the corresponding sequence from the Delta variant? Isn't the Delta variant today a better baseline than the ancestral virus?

Or do a mix - first shot ancestral, second shot Delta.


I think this is the logical thing to do, but now there are two new roadblocks clinical trials have to face:

* It's hard to recruit non-vaccinated participants

* There will be safety issues to address in people who have received multiple shots and/or shots designed against different strains


What are the safety issues surrounding multiple shots or mixed vaccines?


There are also approved vaccines based on the inactivated virus, like Covaxin from India.



