I wonder: would it be possible to accelerate a synthetic cell's evolution via software?
If you already know exactly what's in the cell (you built it!) could a computer speed up its life via a software model and jump ahead to something more complex which you build as version 2?
You'd be hard pressed to beat biological evolution...
A liter of E. coli culture can contain hundreds of billions of cells, which can double every 90 minutes or so. Try running that many instances of a full environmental simulation in software. And that's just 1 L of bacteria; scaling up to 100 L is easy. Try doing that on your supercomputer...
A straightforward evolutionary experiment to find resistance to an antibiotic can literally run trillions of trials in a stupid-simple overnight culture: slowly increase the dose of an antibiotic in a few liters of bacteria over the course of a day or two and see what survives.
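For a sanity check, a quick back-of-envelope sketch in Python (using the round numbers above; the cell density and timing are assumptions, not measurements):

```python
# Back-of-envelope: division events ("mutation trials") in an
# overnight selection culture, using the round numbers from above.
cells_per_liter = 2e11        # hundreds of billions of E. coli per liter
liters = 2                    # "a few liters"
doubling_minutes = 90
hours_overnight = 16

generations = hours_overnight * 60 / doubling_minutes   # ~10.7 doublings
population = cells_per_liter * liters
# At saturation, each generation turns over the whole population,
# so total division events is roughly population * generations.
print(f"{population * generations:.1e} division events")  # ~4e12: trillions
```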
And then it checks that number in some way and somehow signals when it has arrived at a solution? For example, by releasing a toxin that kills the others and reproducing without changes from that point on.
If we could, we could beat exponential-time problems in linear time as long as there's enough space and growth medium :)
The comment was playing on how much more efficient biological components are compared to digital ones.
An actual attempt to answer your question.
Currently it takes quite a lot of computation to detect the outlines of objects in a raster image, an ability that is innate to many animals. Suppose we had the technology to keep the individual brain cells of an insect alive, and the ability to interface with biological neurons: we could pass visual data through, say, an insect's eyes and brain, and get a signal on the other end corresponding to the vectors and lines that describe the objects in the visual data. The insect brain can process images orders of magnitude more efficiently than modern computers.
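For comparison, here's what a bare-bones digital outline detector looks like: a minimal Sobel edge-detection sketch in NumPy (a textbook method, chosen just to illustrate the per-pixel arithmetic involved, roughly 18 multiply-adds per output pixel):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Slide the two 3x3 kernels over the image (9 multiply-adds each).
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)   # edge strength per pixel

edges = sobel_edges(np.random.rand(480, 640))  # toy input frame
```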
Or we could make some sort of dog helmet, hopefully without a physical connection, that somehow transmits the vector data inside the dog's brain to our computers.
Anyway, all of this looks almost impossible. We simply don't have the technology to interface with the detailed and intricate data present in our bodies as our brains process the world around us.
I do think that one day we will be overlaying data on our own vision using some sort of neural implant. That device will probably rely on the existing vision-related biological computational infrastructure in our brains.
I wouldn't be surprised if it's two hundred years away.
P.S. Here's an article relating to the idea of putting insect brains in robots: http://www.kurzweilai.net/robots-with-insect-brains It's talking about a replicated insect brain on digital hardware though.
To take it a step further, I wonder if it's possible to engineer better catalysts to speed up cell division. Evolutionary pressure should already bias toward maximum reproduction rates (given the right environmental conditions), but I wonder how much variation there is in cell-division times between different species of bacteria.
I think you're describing the field of [bioprocess engineering](https://en.wikipedia.org/wiki/Bioprocess_engineering). A lot of time is spent trying to balance cell growth with the possibility of mutations being introduced into the system. Even between different strains of E. coli the doubling rate can vary. Moreover, one also needs to contend with the fact that prokaryotes typically cannot produce very complex proteins. This leads some processes to need to use eukaryotes such as CHO cells, which are much slower.
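To put "much slower" in rough numbers, a small sketch (the doubling times are typical textbook values and vary a lot by strain, medium, and conditions):

```python
# Generations accumulated in one week of continuous culture,
# assuming typical doubling times (assumptions, not measurements).
hours_per_week = 7 * 24
ecoli_doubling_h = 0.5        # fast E. coli strain in rich medium
cho_doubling_h = 22           # typical CHO cell line

print(hours_per_week / ecoli_doubling_h)  # ~336 generations per week
print(hours_per_week / cho_doubling_h)    # ~7.6 generations per week
```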
That would be awesome, but it is unlikely with current software and hardware. Even small scale simulations (nanoseconds to microseconds of a single protein molecule) require hours of sim time (on large clusters).
The number of atoms in the smallest viable cell is actually reasonably small, less than a billion atoms, such that you could hope to store it all in memory on a single beefy computer.
Meanwhile, you need something like picosecond resolution in the simulation, while you need seconds of simulated time to see interesting macro-level things happen. If you want to see cell division, you're probably talking a minimum of thirty minutes of simulated time.
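A quick sketch of what those figures imply (the per-atom byte count is a rough assumption):

```python
# Step count and memory for a whole-cell MD run, using the figures above.
timestep_s = 1e-12                 # picosecond resolution
sim_time_s = 30 * 60               # ~30 minutes to see one cell division
print(f"{sim_time_s / timestep_s:.1e} timesteps")   # 1.8e+15 steps

atoms = 1e9                        # smallest viable cell, roughly
bytes_per_atom = 3 * 3 * 8         # position+velocity+force, 3 doubles each
print(f"{atoms * bytes_per_atom / 1e9:.0f} GB")     # ~72 GB: fits in RAM
```

So the memory is plausible on one big machine, but the step count is what kills you.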
Not to mention that the molecular dynamics simulations you're describing can't actually simulate chemical reactions. (The covalent bonds can't change in the sim unless you do a much more intensive QM calculation, and even that's iffy.)
If you want evolution, you would also have to simulate the cell's environment. Without an environment, there's no natural selection and thus no evolution.
Actually, I was wondering if a learning algorithm could be coupled with a builder that uses real, synthetically created biological components in an automated system. The builder comes up with new combinations, the automation around it evaluates each combination as quickly as possible, and the software/learning algorithm steers development toward some generalized goal (see the sketch below).
Ok, I admit this sounds ridiculously sci-fi, but, why do I feel like this is actually possible (just really, really hard)?
EDIT: To clarify, I am actually suggesting a real living system coupled with software/hardware, not a simulation. I realize real life can't be "sped up" like a simulation, but if the system is truly automated and kept running/fed/operational, it could just sit there and keep trying things for as long as it runs, and over a long period humans could monitor its progress.
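Something like this loop, maybe. Everything lab-facing here is hypothetical: `build_strain`, `assay_fitness`, and `mutate` are stand-ins for automation that mostly doesn't exist yet:

```python
import random

# Hypothetical stand-ins for lab automation; in reality these are the
# hard part (DNA synthesis, transformation, culture, measurement).
def build_strain(genome): return genome
def assay_fitness(strain): return random.random()
def mutate(genome): return genome + [random.randrange(4)]

def evolve(seed_genome, generations=100, pool=32):
    """Closed-loop design-build-test: software proposes variants,
    the (hypothetical) robotic wet lab builds and scores them."""
    best = seed_genome
    best_score = assay_fitness(build_strain(best))
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(pool)]
        # The slow, real-world step: build and measure each candidate.
        scores = [assay_fitness(build_strain(g)) for g in candidates]
        i = max(range(pool), key=scores.__getitem__)
        if scores[i] > best_score:
            best, best_score = candidates[i], scores[i]
    return best

winner = evolve(seed_genome=[0, 1, 2, 3])
```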
Somebody did do a software-style analysis of a simple cell. Fascinating article, amazing results, pity I don't recall it. Worth looking for, shouldn't take long to find.
I've been a programmer at the Folding@home project for about 8 years now. It takes massive computing power and a lot of time to simulate even one reasonably sized fast folding protein using the state of the art methods. Simulating a whole cell ab initio is way beyond the reach of current technology.
You didn't read my post. I said O(2^N), which is basically computationally intractable for large N; even if Moore's law were to continue indefinitely, it would still be intractable for large N. O(N^2) problems, on the other hand, are far more approachable in software.
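To make that concrete, a quick sketch (assuming a generously fast machine at 10^18 ops/s):

```python
ops_per_sec = 1e18                    # a generous exascale machine
seconds_per_year = 3.15e7

for n in (50, 100, 200):
    poly = n**2 / ops_per_sec                         # O(N^2) runtime
    expo = 2**n / ops_per_sec / seconds_per_year      # O(2^N) runtime
    print(f"N={n}: N^2 takes {poly:.1e} s, 2^N takes {expo:.1e} years")
# At N=100, 2^N already needs ~4e4 years; doubling machine speed
# only buys you one more unit of N in the exponential case.
```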
In the second caption: "Each cell of JCVI-syn3.0 contains just 473 genes, fewer than any other independent organism."
In the main text: "In a 1995 Science paper, Venter’s team sequenced the genome of Mycoplasma genitalium, a sexually transmitted microbe with the smallest genome of any known free-living organism, and mapped its 470 genes."
Which is it? Venter would seem to have disproven his own claim to novelty, unless we've found new genes in M. genitalium's genome since 1995.
Edit: As with most biological terms, part of the problem is the fuzziness of the definition of "gene". More recent studies claim M. genitalium has 525 genes [1], but that may include tRNA and ncRNA regions. I'd still object to the article's poor editing. Also, let's get down to brass tacks: we've only trimmed a 580 kb genome down to 531 kb (a 9% reduction). Clearly, life is already pretty damn efficient.
Hey! So I worked in this lab (but on a different project): yeah, some of the accounting is weird and PR-ish; there are 530 kb phytoplasmas (but they can't even make their own nucleotides).
M. mycoides was used because it has a reasonable growing time (M. genitalium colonies take weeks to appear)... So really the impressive achievement was strategically reducing a genome without doing much damage to the doubling time.
That's not what I'm getting at. If M. genitalium really has only 470 genes by whatever counting scheme we've chosen, then the claim that "each cell of JCVI-syn3.0 contains just 473 genes, fewer than any other independent organism" is simply mistaken.
Often this topic makes me think of some of the fictional technology present in Deus Ex, released back in 2000, e.g.:
> The cells of every major tissue in the body of a nano-augmented agent are host to nanite-capsid "hybrids." These hybrids replicate in two stages: the viral stage, in which the host cell produces capsid proteins and packages them into hollowed viral particles, and the nanotech stage, in which the receiver-transmitter and CPU are duplicated and inserted into the protective viral coating. New RNA sequences are transmitted by microwave and translated in to plasmid vectors, resulting in a wholly natural and organic process.
Creating an actual cell from scratch would be very difficult but not particularly compelling.
The synthetic genome gets "transplanted" into a host cell (in this case the M. capricolum bacterium); from that point on, it "IS" the new organism and divides as such.
And anyway, there's a tremendous amount of stuff to study with "the platform" of a transplanted genome. Venter should probably get the Nobel prize twice: once for shotgun sequencing and at least once (if not twice) for synthetic biology. These are huge advancements.
I would say it's the new organism only after nearly all the old host cell components have decayed away. In the meantime, it's a hybrid. Then the new genome starts transcribing and translating new proteins to replace the host cell's components, and due to aging/entropy/repair eventually all the old components will be gone.
Venter didn't invent shotgun sequencing, but he did popularize it. He was a top-notch biologist even before he switched to genomics, though, and that work was good enough that I think, had he kept to it, he may very well have won the prize eventually.
Likely, Venter will not win the prize, for several reasons: he's political and too many people hate him, and most of his work is viewed as "technical" by the scientific community. "Obvious" and "boring" are the words I hear most often levied against his efforts.
Minimal genome ... which also means the minimal viable cell. They are trying to figure out the minimum set of genes needed to create a cell that can survive and reproduce.
Craig Venter, the researcher from the Nature article, claimed to have built "artificial life" back in 2010. The new Nature article reports that he is now claiming to have made a "new species".
"Craig Venter and his team have built the genome of a bacterium from scratch and incorporated it into a cell to make what they call the world's first synthetic life form"
"JCVI-syn3.0 is a working approximation of a minimal cellular genome, a compromise between small genome size and a workable growth rate for an experimental organism. It retains almost all the genes that are involved in the synthesis and processing of macromolecules. Unexpectedly, it also contains 149 genes with unknown biological functions, suggesting the presence of undiscovered functions that are essential for life. JCVI-syn3.0 is a versatile platform for investigating the core functions of life and for exploring whole-genome design."
One question I would have is: "Are there homologs to the 149 unknown-function genes in the human genome?"
That isn't exactly "from scratch". They took pre-existing genes from a known bacterial genome (M. mycoides) and knocked out genes via an automated process until the resulting genome was no longer viable if any of the remaining genes were removed.
That doesn't quite meet my definition of synthetic life form. It's like the difference between stripping down a stock car for racing and building an entire go-kart from unrefined, unsorted piles of rock.
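For the curious, that reduction process has roughly this greedy shape (a sketch only; the real work used transposon mutagenesis and whole-genome redesign cycles, and `is_viable` stands in for actually building and growing each strain):

```python
def minimize(genome, is_viable):
    """Greedily remove genes one at a time until no single further
    deletion leaves a viable organism."""
    genes = set(genome)
    changed = True
    while changed:
        changed = False
        for g in sorted(genes):
            if is_viable(genes - {g}):   # still grows without g?
                genes.discard(g)         # then g wasn't essential
                changed = True
    return genes

# Toy usage with an invented viability rule: only two genes matter.
minimal = minimize(genome=["dnaA", "ftsZ", "cruft1", "cruft2"],
                   is_viable=lambda gs: {"dnaA", "ftsZ"} <= gs)
print(minimal)   # {'dnaA', 'ftsZ'}
```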
That synthetic genome is the end product of billions of years of evolution minus billions of years of accumulated kludges and cruft. And it's about half a megabyte. It wasn't designed, but discovered.
I would guess that there are plenty of genes in the sequences common to fungi, plants, and animals that cannot be removed without ruining the whole organism, many of them with currently unknown functions. And those 149 genes are what remain after running the cruft-removal process on one species of bacterium. To determine whether they are truly required for life, they would also have to be retained after running the process on other species to discover their minimally viable genomes.
I think you guys are missing the point a bit. The important thing here is that there's a minimal genome in a working organism that can be used as a research platform. The "from-scratchness" level is purely for bragging; once the cell divides a few times and fails to maintain the original components, it'll be working from the set of things its genome specifies. Whether you consider that cheating isn't really relevant to the potential experiments you can run.
EDIT: For a CS example, imagine trying to reverse engineer a computer system and make it do things it wasn't supposed to do. It's a lot easier to tinker with hardware from an Apple ][ than with a modern x86 chip. The advance here is that we are finding the Apple ][ in the x86 chip and doing a run of those for people who want to play with them.
Exactly. Furthermore, you don't need to build life from scratch. That's wasteful. You'll just set yourself back years, possibly a decade or more, trying and erring until you get to roughly the same place as the stripped-down bacterial genome at which Venter and his colleagues have arrived.
Venter isn't explicitly trying to create life from scratch; he's trying to find the minimum viable platform from which to harness life for various useful purposes. Starting from the proverbial "pile of unrefined rock" would accomplish nothing but scoring a few vanity points.
You would need to build divergent life from scratch if you wanted to prove you knew everything there is to know about how life works. As the synthetic genome shows, anyone who tried that today would end up 149 genes short of viability with too few clues for how to fix it.
I went to the article to try to dig up information about the orthologues, but, as is par for the course these days, the meat of the useful information is not readily available.
If you go to the supplementary information, you can find an Excel sheet with the locus and amino acid sequences for all the genes. It's then just a matter of BLASTing the unknown-function genes against the human genome and using a similarity cutoff. I'll have to wait until next week to do this, but if any enterprising soul wants to try, you can get the supplementary data here:
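If anyone wants a head start, here's a sketch with Biopython and pandas (the filename and spreadsheet column names are guesses, and the E-value cutoff is arbitrary):

```python
import pandas as pd
from Bio.Blast import NCBIWWW, NCBIXML

# Column names are assumptions about the supplementary spreadsheet.
genes = pd.read_excel("syn3_supplementary.xlsx")
unknown = genes[genes["annotation"].str.contains("unknown", case=False,
                                                 na=False)]

# Remote BLASTP of each protein against NCBI nr, restricted to human
# hits via entrez_query (slow: one remote query per gene).
for _, row in unknown.iterrows():
    result = NCBIWWW.qblast("blastp", "nr", row["aa_sequence"],
                            entrez_query="Homo sapiens[Organism]")
    record = NCBIXML.read(result)
    hits = [a for a in record.alignments
            if a.hsps[0].expect < 1e-5]      # similarity cutoff
    print(row["locus"], len(hits), "human homolog candidates")
```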
Looking at that supplementary Excel spreadsheet, only 42 of those essential genes are annotated as "hypothetical protein/unknown function/unclear functional category". For example, locus MMSYN1_0421 has unknown function, but belongs to the alkaline shock protein family, so we can't say we have no idea what it does.
They seem to be the only loci where we're truly in the dark. There are many more with "putative" or "probable" roles and known homologs.
All told, the headline seems misleading: we're only truly in the dark on about 9% (42 of 473) of these essential genes.
The original article uses these concepts interchangeably: minimal genome and minimal cell. They basically define the minimal cell as a cell with a minimal genome.
E.g. "Because our minimal cell is largely lacking in biosynthesis of amino acids, lipids, nucleotides, and vitamins, it depends on the rich medium to supply almost all of these required small molecules."
I don't think it necessarily follows that a minimal genome yields the minimal cell, or vice versa. It really depends on how much support structure (the rich medium) is provided and on the specific details of how the genome produces the cell structure.
One can imagine little cycles of genes that all need each other but don't actually provide any extra fitness. IIRC, the Venter method can't identify and remove those sorts of cyclic dependencies if the cycle is longer than one or two genes (toy illustration below).
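A toy illustration of that failure mode (the viability rule is invented for the example: each gene is "needed" only because its partner is present):

```python
# Two genes, A and B, that exist only to support each other.
# Toy viability rule: the cell dies if exactly one of {A, B} is present.
def is_viable(genes):
    return ("A" in genes) == ("B" in genes)

full = {"A", "B", "C"}
# Single-gene knockouts: removing A alone or B alone kills the cell,
# so a one-at-a-time reduction keeps both...
print(is_viable(full - {"A"}), is_viable(full - {"B"}))  # False False
# ...even though removing the whole cycle at once is fine.
print(is_viable(full - {"A", "B"}))                      # True
```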
There are probably some things we don't know about the insides of a bacterium, and there are probably some fabrication techniques to be worked out, but I would expect us to have the knowledge to fabricate a whole cell from scratch within a decade. (I know little of molecular biology, so this is a wild layman's guess.)
They're not. Putting a synthetic genome into a cell that's just had its genome removed is very different from starting with pure molecules and building something that will boot.
Old electric cars were built from donor cars: rip the ICE out of a Volkswagen or something, then put in batteries and an electric motor.
This is harder, though: you're not building a car, you're building a cell factory. It's much more like bootstrapping a compiler. It's possible to hand-code in assembly a low-quality compiler that's just good enough to compile a subset of your language, and use that to bootstrap the full language, but that's rough. It's much quicker to use C or something to implement the first version of the compiler, then implement the target-language compiler in the target language.
I agree; it would be cool to be able to snap molecules together to build structures. The best way to do that today is to use cells to make the molecules and snap them together. We just don't have little MEMS devices (or whatever) to do it ourselves, especially not in bulk.
Someday, we'll have a device that's not a cell that can do what you want - it just doesn't exist yet. Cells are almost a magic cheat code, giving us access to building stuff we can't build any other way.
But... at every cell division, two half-cells are built, so after twenty divisions there will (on average) be only a millionth part of the original cell in every descendant (2^20 ≈ 10^6).
I don't really see the point in creating all the organelles outside a cell. When we copy functionality chemically, the results are very different. (Airplanes don't flap their wings, etc.)
It doesn't start off as being the same thing. In particular, the new genome might create a very different cell wall than the one from the original bacterium.
The species used here (mycoplasma) does not have a cell wall. ;)
The whole point was that the new genome, when transferred into a recipient bacterium's plasma membrane with the recipient's genome removed, produces a different phenotype, which they showed. I think they are entitled to call this a minimal cell.
Most cell types don't have cell walls; they're found mainly in plants, fungi, and most bacteria. I think the confusion here is due to the term "cell wall", which has a very specific meaning in biology and doesn't just denote the outer perimeter of a cell. The "cell membrane" is what keeps all the innards of a cell in place, but it's a distinct structure from the cell wall.
Is that true? You can take a naked genome, do something to it, and a cell will materialize? With a cell wall and cytoplasm and all its mitochondria and everything?
Well, they had to have a receptive cell to bootstrap this thing, that is true.
Yet, since we're talking about mycoplasma here, they have no cell wall (only a plasma membrane) and no mitochondria (only eukaryotes have those). The new minimal genome really does have to synthesize everything needed by the cell to grow and divide.
> Church says that genome-editing techniques will remain the go-to choice for most applications ...
George Church is kind of an ironic name for a genetics scientist.
But the ethics question is more open than ever.
Didn't you immediately think about how cool it would be if we could program these organisms to do stuff? I did.
Could you then program it to become multicellular?
How long before this can be achieved? 10 years? 50?
However, "because it's there" is a worrying motivation for pursuing this knowledge, because the Pandora's box it opens is very real.
We're getting closer to the point where we can play God and achieve magical technological feats.
But are we mature enough to handle the powers this technology bestows upon us?
Do we really need this technology now ?