> There. A single base-pair change, flipping two bits, is perhaps all you need to turn the current less-deadly H1N1 swine flu virus into a more deadly variant.
Similar idea:
I have a friend in a lab who recently made a synthetic version of a virus (she said she was "resuscitating" the virus). It has a single point mutation. It's normally BSL-4 and causes lethal infection. With the mutation it's supposedly safe to work with and won't enter certain cells.
Apparently the polymerase screwed up and altered some part of that gene, or maybe it wasn't mutated in the first place, or maybe the initial sequencing was incorrect. Regardless, I got a terrified message from her saying the synthetic version was replicating in a cell line that it wasn't supposed to. (Fortunately, she had continued to work with it in a containment lab.) Eventually her lab group scrapped the study.
Before this, I wasn't really concerned about this kind of work.
Idk why you're being downvoted. I hadn't heard this conspiracy theory before. Interestingly, isn't this similar to the Ebolavirus origin conspiracy theories?
Learning about these in class must have been the most "OMG biology is so fucking cool"-moment I've experienced. Seriously, read the links (especially [1]) – it's evolution going full Wozniak, with a few hundred million years of time.
What an incredible way to conceptualize and compare computer versus biological viruses (with the size of them in bits looking reasonably similar).
In the virus writing scene there are a number of trends that have allowed generic approaches to outsmart even incredibly complex "virus scanners":
1. Polymorphic code - basically you keep the Turing machine / program logic the same, but you implement it with randomly chosen code fragments that have the same net effect. In this case each instance of the virus is different on each infected host and there is no reliable bit-level signature to recognize it.
2. Packers - basically a program that on its own is not malicious but carries another compressed and encrypted program that it will "unpack" in memory and then execute, allowing a virus to ride through virus scanning (a minimal sketch follows after this list).
3. Stagers - a small and otherwise innocuous piece of code that pulls commands from the network, either in real time or on demand, keeping all program logic in memory and executing malicious code that is never written into a formal executable file (and thus never scanned by anti-virus). These have the advantage that they are basically impossible to forensically analyze, because the attacker will change or turn off the payload after the stager succeeds the first time.
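To make item 2 concrete, here is a minimal Python sketch of the packer idea, assuming zlib compression and a single-byte XOR as a stand-in for real encryption; the payload string is a hypothetical, benign placeholder:

    import zlib

    KEY = 0x5A   # toy XOR "encryption" key; a real packer would use something stronger

    def pack(source):
        # Compress, then obfuscate, the payload source code.
        compressed = zlib.compress(source.encode())
        return bytes(b ^ KEY for b in compressed)

    def run_packed(blob):
        # Reverse the obfuscation in memory and execute the payload; the plain
        # payload never exists as a file on disk, so a signature scan of the
        # carrier sees only the scrambled bytes.
        compressed = bytes(b ^ KEY for b in blob)
        source = zlib.decompress(compressed).decode()
        exec(source)

    payload = "print('payload running from memory')"   # hypothetical benign payload
    blob = pack(payload)    # only this scrambled blob would ship inside the carrier
    run_packed(blob)

The point is just that the bytes on disk share nothing recognizable with the payload, which is why signature-based scanners have to emulate or unpack the carrier to catch it.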
Has nature found a way to emulate any of these creative methods for bypassing immune systems? I remember hearing that HIV can look innocuous to an immune system, but I'm not sure that's because it uses a technique like the above.
It's a little click-baity, but it's also a pretty good framing for the article, which discusses both the number of kilobytes encoded in the virus and the number of bits that need changing to make the virus deadlier. Much like with naming poems, they took the first sentence of the post (minus some preamble) and used it as the title. Idk. Seems pretty defensible to me.
Deriving a "lethal bit count" from virus genomes is not quite fair, since the virus requires considerable metabolic support from the host genome.
I think I can kill you with 21 bits: hydrogen cyanide is a three-atom molecule. Assume 128 chemical elements --> 7 bits per atom. And the molecule will assemble itself from the constituent atoms, so no extra information is needed for that.
Hydrogen fluoride might work, too: 14 bits.
And, of course, a slug of plutonium would kill you both chemically (poison) and radioactively: 7 bits. Can't do better than that.
Oh wait, if I shot a stream of electrons at you, that could kill you, too. There are six types of leptons (ignoring anti-matter), so that's three bits.
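Making the arithmetic above explicit, with the same assumptions (128 chemical elements, 6 lepton types), as a quick Python check:

    import math

    bits_per_atom = math.log2(128)      # 7.0 bits, assuming 128 chemical elements
    hcn = 3 * bits_per_atom             # hydrogen cyanide, 3 atoms -> 21 bits
    hf = 2 * bits_per_atom              # hydrogen fluoride, 2 atoms -> 14 bits
    pu = 1 * bits_per_atom              # a single plutonium atom    ->  7 bits
    lepton = math.ceil(math.log2(6))    # pick one of 6 lepton types ->  3 bits

    print(hcn, hf, pu, lepton)          # 21.0 14.0 7.0 3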
Fascinating stuff. But for computer viruses, 22k is really a lot. Back when computers and OSes were simpler, there were viruses fitting into a boot sector (512 bytes). Of course, since the boot sector also has to hold some useful information (otherwise the OS won't boot and the virus can't distribute itself), the space actually available was even less. It wasn't super-hard to write such viruses either - provided you are comfortable with assembly coding and low-level OS programming, of course. I wrote one myself back when I was a student as an exercise (didn't distribute it, of course ;). It didn't do anything sophisticated, but it could still replicate.
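For reference, the budget a classic boot-sector virus had to work within follows the standard PC MBR layout (floppy boot sectors differ slightly); a small Python sketch, with a purely illustrative function, that checks a 512-byte sector:

    # Standard PC MBR layout, 512 bytes total:
    #   bytes   0..445 : bootstrap code (446 bytes)
    #   bytes 446..509 : partition table (4 entries x 16 bytes)
    #   bytes 510..511 : boot signature 0x55 0xAA
    def inspect_boot_sector(sector):
        assert len(sector) == 512, "one sector is exactly 512 bytes"
        code, ptable, sig = sector[:446], sector[446:510], sector[510:]
        print("valid signature:", sig == b"\x55\xaa")
        print("bytes available for code:", len(code))   # 446, shared by loader and virus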
The read is just fascinating. I have sleepless nights sometimes thinking about DNA cell manufacturing. If we could program DNA and understand its structure, could we just manufacture houses, cars, and machines from a seed that you drop into water, and it grows into a design of your choosing with the help of sunlight?
Where can I find more information about how DNA actually ends up creating a little cell?
Looking at the numbers, Bunnie's proposed mutation may pop up in nature quite a bit. I guess it doesn't spread as well as the other strain and so dies out rather than growing exponentially.
Historically, the D->G mutation around 222 (numbering schemes vary) in 1918 Spanish flu caused a significant increase in pathogenicity. That's a single bit-flip.
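A sketch of why that can literally be one bit, using the standard codon table (GAU codes for aspartate, GGU for glycine) and an arbitrary 2-bit base encoding in Python:

    ENC = {"A": 0b00, "C": 0b01, "G": 0b10, "U": 0b11}   # arbitrary 2-bit encoding

    def bit_distance(codon1, codon2):
        # Hamming distance between two codons under the encoding above.
        return sum(bin(ENC[a] ^ ENC[b]).count("1") for a, b in zip(codon1, codon2))

    print(bit_distance("GAU", "GGU"))   # 1: D (Asp) -> G (Gly) via a single A->G change

Under other encodings the same substitution could be two bits, but either way it is a single nucleotide change.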