Hacking DNA: CRISPR, Ken Thompson, and the Gene Drive (blog.ycombinator.com)
212 points by craigcannon on April 3, 2017 | 66 comments



Here is our software you can use to start building your own tools that use Cas9 [1]. Build new Cas9s with new functions, inspect and mix and match functions that already exist, and build payloads to be delivered by Cas9. We help you program biological 'function' into DNA at an abstraction above the DNA sequence itself, and get the DNA that encodes your functional designs delivered to your door.

[1] https://serotiny.bio

Some protein designs that undergird entire companies:

Cure hemophilia: https://serotiny.bio/notes/proteins/hbb/

Vaccinate against HIV: https://serotiny.bio/notes/proteins/ecd4ig/

Cure the American Chestnut of its blight: https://serotiny.bio/notes/proteins/oxo/

Make spider silk clothing: https://serotiny.bio/notes/proteins/adf3/


Your website looks interesting enough to warrant its own post as a Show HN.


I have tried a few times. But the community is small enough that it gets no upvotes :) We just released it. It's built with Ember & Go. I'd be happy to answer any questions about our technologies (biological or technical).


Passed this on to some biochem buddies and got the following response:

"There are plenty of constructs I'd love to have but don't have the time or expertise to make them. However, could this tool make them? There's so much expertise that goes into "if i hang this tag here, will the protein still traffic to where it needs to go/will it interfere with other domains/will it hang on the right side of the membrane/if I change literally one other amino acid the new tag will work but not if I leave it normal." The experts can already design proteins, but if this tool democratizes that, who's going to do QC?

AFAIK the real way people make substantive changes to proteins is to make 10 variants, test them all, and iterate the designs that worked best. Feels like we're miles away from being able to drag and drop functions."

And "In principle proteins are modular but I can give you ten examples where that principle ruined someone's life. The QC is hard and is case-by-case."

What is your editor aiming to help with? Is QC still the rate-limiting factor to innovating new protein designs with your tool? Does your tool help with QC in any way?

Thanks for sharing, I'm always interested to see new tech in this area.


That is exactly what we aim to do:

- We keep track of how domains have been used before (so as to mitigate, if not entirely predict, that QC burden). We are trying to encode that expertise in a way that is intuitive and frictionlessly shareable. The more people use it, the more helpful it becomes. We help encode biological institutional knowledge.

- We collaborate with DNA synthesizers so that you just do the designing, and we help get the designed DNA to your bench at a reasonable cost. You get exactly the sequence that you want, not what someone gave your neighboring lab's buddy some time ago, or whatever some restriction enzyme/quick-change let you make that's probably good enough. We alleviate the pain of cloning.

- We help you build entire sets by dragging and dropping: "I want these three fluorophores upstream, these 4 linkers on these proteins of interest" - Boom! Buy! Done! (in literally as much time). We make high-throughput design efficient (a toy sketch of the combinatorics follows this list).

- It is true, there are many proteins that are not entirely modular. But there are many more than 10 examples of proteins designed exactly that way (see Addgene's database). Biology is hard. We help you prevent failure by letting you see how other successes have been achieved, prevent stupid mistakes, and get your material to you as quickly and cheaply as possible. If your design does fail, it has failed faster and with less sunk cost than if you had to do all the construction yourself.

- Finally, because combinatorial design and purchasing is reasonably priced (and you don't have to do the construction work), you can order an entire logical set of proteins to actually figure out which would work and which would not. When I cloned I made some tiny fraction of what I expected to work purely because cloning was so painful - and if that small fraction failed I didn't even bother to figure out why or what would have worked.
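
To make the combinatorics concrete, here is a toy Python sketch of what "three fluorophores x four linkers x two targets" expands into (part names are placeholders for illustration - this is not our actual API):

    # Toy enumeration of a combinatorial fusion-protein set.
    # Part names are placeholders, not real catalog entries.
    from itertools import product

    fluorophores = ["GFP", "mCherry", "YFP"]
    linkers      = ["GS", "GGGGS", "EAAAK", "XTEN"]
    targets      = ["ProteinA", "ProteinB"]

    designs = [f"{f}-{l}-{t}" for f, l, t in product(fluorophores, linkers, targets)]
    print(len(designs))  # 3 * 4 * 2 = 24 constructs from 9 parts

Ordering the whole enumerated set as synthesized DNA is what turns weeks of cloning into a checkout step.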

I'd appreciate any feedback you might have.


Why doesn't my FRT site logo invert when I reverse complement it? (not a serious complaint...) Signed up and played around a bit tonight. Really great concept. I think a couple of features will probably be helpful to your target audience, though some are a bit tricky to implement:

- User-designed sequences (maybe I just couldn't find it). Everyone wants to drop in their primer of interest, their unique tag, their CRISPR site, etc. Also, some well-balanced GC spacers would be helpful; even make them unique so people could PCR off them if needed.

- Barcoded sequences. This will ruin/complicate the auto IDT pricing, but a nice drop-in feature would be a degenerate barcode sequence. This is all the rage with MPRAs and other high-throughput methods, and could be really helpful (a toy generator is sketched after this list).

- Architectures. Say I want to design a lentivirus construct. Give the user the scaffold to drop their payload into, with set LTRs, etc. Screen for no polyA signals in their design, etc. An easier way to walk novice users into design (maybe best not to start with lenti...)
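
For the barcode idea, even something naive like this toy Python sketch would go a long way (length, GC window, and count are invented parameters):

    # Toy degenerate-barcode generator with a GC-content filter.
    # Length, GC window, and count are made-up parameters.
    import random

    def balanced_barcodes(n, length=12, gc_min=0.4, gc_max=0.6, seed=0):
        rng = random.Random(seed)
        barcodes = set()
        while len(barcodes) < n:
            bc = "".join(rng.choice("ACGT") for _ in range(length))
            gc = (bc.count("G") + bc.count("C")) / length
            if gc_min <= gc <= gc_max:
                barcodes.add(bc)
        return sorted(barcodes)

    print(balanced_barcodes(5))

A real version would also enforce a minimum edit distance between barcodes so sequencing errors can't collide them.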

Anyway, keep up the good work, it's a cool product!


My friends didn't seem to be convinced by this response for their own areas of inquiry, but I'm not really qualified to hold a discussion here; happy to try to arrange a conversation if you're interested in going to the source. (PhDs and postdocs FWIW).


Hard to hold a conversation on HN. If you ever see this comment - I'd love to hear their skepticism :)

Email is my first name at my company's domain.

-justin


As someone who is really interested in DNA programming but not an expert in biology, how approachable is the editing?

If I'd like to make trees that have a firefly glow at night, is this something that can be done by an amateur?

How about creating plants with a photosynthetic efficiency higher than the 4-5% of most plants?


At the end of the day (decade? century?) it should be as straightforward as making beer, pulling an espresso shot, baking bread, or using a fertilizer. The end products and techniques are quite simple, even ancient.

However, the search space to get to those solutions is enormous, and getting to those very simple conditions is still a significant (if no longer herculean) undertaking. If you're given the correct tools and a good protocol, it's really pretty easy to accomplish useful and interesting biological tasks (again, you already do this when you make beer/bread/yoghurt/kimchi). But for the time being, engineering an entirely novel biological function would require a fairly sophisticated understanding of the context you're trying to fiddle with.


Favorited and I will check it out soon. Thanks for posting it.


Re: gene drives, the article points out (and links to [1]) a major issue, but doesn't quite give the full story on it. Resistance to gene drives will arise, and isn't even that difficult to have happen.

Key point:

One source of this resistance is the CRISPR system itself, which uses an enzyme to cut a specific DNA sequence and insert whatever genetic code a researcher wants. Occasionally, however, cells sew the incision back together after adding or deleting random DNA letters. This can result in a sequence that the CRISPR gene-drive system no longer recognizes, halting the spread of the modified code.

CRISPR/Cas9 is a nuclease: it cuts DNA. It does not stitch it back together; the cell does that afterwards. Sometimes that goes smoothly, other times it doesn't, but one thing that happens frequently when you cut DNA is mutation. It's why radiation exposure is bad: DNA breaks cause mutations. If mutations yield immunity, then something that depends on mutations NOT happening to keep operating, but itself CAUSES mutation, is unlikely to function for very long.

[1] http://www.nature.com/news/gene-drives-thwarted-by-emergence...
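
To see how quickly that bites, here's a toy allele-frequency model in Python (random mating, no fitness costs; the homing and repair rates are illustrative guesses, not measured values). Every cut repaired "wrong" creates an allele the drive can never touch again:

    # Toy model: gene-drive spread with NHEJ-derived resistance.
    # homing and resist_rate are illustrative guesses, not measured values.
    drive, wild, resistant = 0.01, 0.99, 0.0   # starting allele frequencies
    homing = 0.9        # chance the drive cuts the wild allele in a drive/wild heterozygote
    resist_rate = 0.05  # fraction of cuts repaired into a drive-immune allele

    for gen in range(1, 21):
        cut = drive * wild * homing        # wild alleles cut this generation
        drive += cut * (1 - resist_rate)   # most cuts are converted into drive copies
        resistant += cut * resist_rate     # the rest become sequences the drive no longer recognizes
        wild -= cut
        print(f"gen {gen:2d}: drive={drive:.3f} wild={wild:.3f} resistant={resistant:.3f}")

Even at these modest rates the resistant allele gets seeded within a few generations, and this model charitably omits selection - give the drive any fitness cost and the resistant allele takes over.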


As someone who has done lots of molecular biology (and also is a reasonably good programmer), I took issue with the comparison of CRISPR to Ken Thompson's Unix hack for this very reason (among others).


CRISPR is a biological weapon!

Yes, this is something that is mostly not talked about. CRISPR is a virus engineered to attack and penetrate human cells and modify their DNA. It can be used for good. But at the same time, the technology is becoming so easily accessible that it can be used to create a virus capable of wiping out people. Or to put a dormant gene in there that gradually kills people.

Up to 80% of people have oral herpes by some estimates. Imagine a virus like that being engineered and spread quietly, causing a worldwide cancer in 10-20 years.

My point is, CRISPR can be used for nefarious activities. It's inevitable! So we need to create an antidote for it to prevent unwanted viruses that might one day be created by CRISPR.


The term is "dual-use technology". [1] There are many. And in general users of those technologies are precisely those most keenly aware of it.

In particular, the gene drive really is only of interest in organisms with very rapid generational turnover. It speeds the propagation of a mutation through sexual reproduction, so it is 'rapid' only in proportion to the generation time. This works for mosquitos. It makes no sense to use a gene drive on a human population. A virus itself is, well, virulent - it needs no help from CRISPR to get there.

Further, there are always exceptions/antidotes/alternatives in biology. In fact, the recent discovery of some (useful) Cas9 inhibitors was predicated on the idea that there are organisms out there with CRISPR systems that are themselves flanked by excision sites. By all rights they should not be able to both have the CRISPR system and have that system be marked for destruction - but they did, simultaneously. So researchers went searching, and they found the inhibitor in those organisms. And there are many more out there. [2]

[1] https://en.wikipedia.org/wiki/Dual-use_technology

[2] http://www.cell.com/cell/fulltext/S0092-8674(16)31683-X


What would be the advantage of killing people in 10-20 years versus killing them right now?

I mean, smallpox has been synthesized, and it spreads fast, has pretty high mortality rate, and hardly anyone is vaccinated against it any more.

I can't think of any military (or even terrorist) reason why you'd want that kind of delayed mortality, but maybe I'm overlooking something.


Hard to identify that it was a deliberate act.


Could happen by accident too


For genocide.


Okay, still not seeing why you would rather they be dead 20 years from now instead of right now.


What I learned from the video game "Pandemic 2" is that the longer your disease remains undetected, the more widely it can spread; if you're asymptomatic and apparently harmless, you may be able to infect every human before you activate the bad side effects. (Which is the goal of that game.)


In other words, any sign of symptoms would cause Madagascar to remain the last place inhabited by humans, since they would close their ports/airports real quick.


It's always Madagascar. Always.


Wouldn't the modified virus need to have a selective advantage to outcompete the normal oral herpes virus? (And since viruses don't reproduce sexually, you can't use a gene drive.)


Not with a gene drive; that's the whole idea behind it, AFAIK. A gene drive copies itself and the payload to 100% of the host's offspring. So, as long as a host is able to reproduce (which granted has lots of implications), it will spread the payload. This accelerates the spread of a gene tremendously, covering the whole population in just a few generations.
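
A toy calculation shows the speed-up (perfect homing, random mating, no fitness costs - all simplifying assumptions):

    # Toy comparison: allele frequency with and without a perfect gene drive.
    # A Mendelian allele sits at its starting frequency absent selection;
    # a perfect drive converts the wild allele in every heterozygote.
    mendelian = drive = 0.01   # starting allele frequency (1% of the gene pool)

    for gen in range(1, 11):
        drive = drive + drive * (1 - drive)   # every drive/wild heterozygote becomes drive/drive
        carriers = 1 - (1 - drive) ** 2       # fraction of individuals carrying the drive
        print(f"gen {gen:2d}: mendelian={mendelian:.2f} drive={drive:.2f} carriers={carriers:.2f}")

From a 1% seeding, essentially every individual carries the payload within about ten generations - for mosquitoes, on the order of months.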


I'm talking about (the impossibility of) a gene drive for virus replication, not a human gene drive delivered by a virus.


I was trying to make the point that a virus can be "benign" like herpes and silently spread, infecting a large population. You don't need a selective advantage, as the virus is not competing for scarce resources. You can have herpes and still spread the common cold.


The medical community just needs to have better CRISPR systems than the few nefarious actors.

It does seem likely that an arms race will occur.


CRISPR is not a virus. It does not come with a delivery mechanism.


There are a small number of known advantageous mutations already present in a small number of humans that could be introduced to larger numbers via CRISPR, brought to the clinic as a somatic gene therapy for adults. This is an enormous market in comparison to "merely" curing all genetic disease, as near every adult human would be able to benefit.

Not much movement in getting that into the clinic yet, however, for reasons that are entirely cultural. The first company to get one of these working via medical tourism will make a lot of money. CRISPR makes that goal pretty easy in comparison to the past.

These changes include things like extra follistatin or knocking out myostatin to gain greater muscle growth and less fat tissue, or removing or disabling ANGPTL4 to reduce heart attack risk by 50%. There are others along the same lines and more are being discovered as sequencing becomes ever cheaper.

The comparative lack of effort to make enhancement gene therapies a reality is nothing short of crazy, given the observed benefits in the few humans lucky enough to already have these variants or loss of function mutations.


For anybody interested in performing CRISPR in their kitchen with $150 worth of pre-packaged materials, I would encourage them to take a look at The ODIN. It is a kit to get you started, and it comes with tutorials to take you all the way from knowing nothing to being able to perform CRISPR and understand what you are doing.

(I have no affiliation with the product, I just like it): http://www.the-odin.com/diy-crispr-kit/


Listening to the description of applying a gene drive in the Radiolab episode sounds worrying as a layman. First-order manipulation with CRISPR sounds reasonable, but then contemplating a gene drive - adding the editing mechanism itself into the genetic material of a mosquito in the wild - seems wildly irresponsible. Now you're running an experiment across millions (billions?) of variations, where if or when a virus breaches a given mosquito's cell defenses, it could potentially pick up the CRISPR gene itself, and then you'd be one cross-species jump away from introducing that same editing gene into other species (including humans).


DARPA is doing a heck of a lot with CRISPR, and as you can imagine, from both defense and offense angles.

https://www.scientificamerican.com/article/u-s-military-prep...

This whole video is great btw; Arati Prabhakar spins through CRISPR, gene drives, and a heck of a lot more in a presentation at UW.

https://youtu.be/ZHipS0-1ykE?t=2812


We can barely evaluate the effects of genetic changes within a single living organism - probably pretty well in focused, pinpointed areas, but across the whole organism we can evaluate very little. It seems sheer folly to claim that any such system would be 'safe' when exposed to a wide range of external parameters - viruses, bacteria, immune systems, regular replication errors, irradiance or chemical mutagens, etc. - that we can barely enumerate thoroughly, let alone assess the interactions of.


Quite.

This road seems a much bigger existential risk than any AI holocaust scenario I can imagine. It's very scary stuff and everyone in the field seems to be brimming with glee at how easy it all is.

Just consider that the very mundane HeLa cell line alone managed to jump continents and invade the most stringent safety precautions at dozens of labs. At least they are pretty benign.

Now imagine a nice mosquito delivery vehicle with payload strapped on that nobody has a real clue what it is capable of. I shudder to think.

A lot of people are talking about how to re-establish the baseline, saying it's not a big deal and nobody should worry.

I get that there are very many wonderful things that can come from this as well, but on the other hand the research is going very fast, and this is one area where we should be exercising extreme caution.


I think a better analogy is that CRISPR (Cas9) is the cursor package within the text editor. The (nearly completed) genomic text editor is the entire suite of biological technologies we have at our disposal. Copy/Cut/Paste have been around a long time and made these discoveries possible. Rendering has been around for quite a while. Keyboards have permitted larger buffers to be written. But until recently a non-random, arbitrarily positionable cursor has been missing. Cas9 is importantly, and just, that piece of the suite.


This analogy breaks down: a non-random, arbitrarily positionable cursor is available in pretty much all lower organisms. CRISPR, for example, is totally useless for yeast (because the effort to get it to work far outweighs "just add addressed DNA", which is basically "how it works" for yeast). CRISPR is really only spectacularly useful for higher organisms and cell lines derived from higher organisms. There may be a limit to that for genes with high copy number (often a problem in plants).

Also: CRISPR can specifically induce a strand break (which is the FIRST step), but it is not quantitative, nor does it do selection. Generally, as part of a CRISPR protocol, you have to select for cells which have the DNA integration in them.

There is quite literally no computer science analogy to this. It would be like editing a database that is sharded and replicated over multiple servers by randomly integrating the change you want on some (probably small) fraction of the servers, then scanning all database replicas for the existence of the change, and physically destroying with a hammer the servers whose shards were unchanged, as the mechanism to ensure eventual consistency across your database.
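
Or, the same analogy as a tongue-in-cheek Python sketch (the integration efficiency is invented):

    # The 'analogy' above, literally: the edit sticks in a random few
    # replicas, then take a hammer to every server that didn't get it.
    import random

    random.seed(1)
    servers = [{"gene": "wild-type"} for _ in range(100)]   # the sharded 'genome'

    for s in servers:                 # steps 1-2: cut and hope the change integrates
        if random.random() < 0.05:    # made-up integration efficiency
            s["gene"] = "edited"

    servers = [s for s in servers if s["gene"] == "edited"]  # step 3: the hammer
    print(f"{len(servers)} of 100 servers survive selection")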


One of the successes of Unix was because of free and open source software. One part of that in a biological setting is free/libre data. Efforts like the Harvard Personal Genome Project [1], Open Humans [2] and OpenSNP [3] are trying to provide researchers and the community as a whole with free/libre data so that research isn't done in silos or walled gardens.

CRISPR is pretty exciting but we also need to make sure there's a rich commons to build on. I encourage everyone to look into and support projects that make free/libre genomic data more available.

[1] http://www.personalgenomes.org/harvard/about

[2] https://www.openhumans.org/

[3] https://opensnp.org/


For those wondering, the article makes an analogy between Thompson's 'Trusting Trust' paper and gene editing.

It wouldn't have surprised me to hear that Thompson had moved into gene hacking (at the age of 74) but I think he still works for Google.


Interesting that you should bring this up - it reminded me to dig up this old Ken Thompson interview[0] (see the section starting at COMPUTER SCIENCE AND THE FUTURE) in which he talks about the increasing complexity and specialization of computer science as a field as a barrier to innovation, and suggests:

Well, I had to give advice to my son, and my advice to him—to the next generation—was to get into biology.

Definitely worth a read through the whole interview even if you don't agree.

[0] http://genius.cat-v.org/ken-thompson/interviews/unix-and-bey...


The analogy helped me understand things a bit, but I have a question coming from what I think is an oversimplification in the explanation of Thompson's Trusting Trust concept.

In the Trusting Trust attack, not only do you change the compiler to miscompile the "login" program, but you also change the compiler to miscompile the compiler itself, so the "login" miscompilation persists even if you later revert all changes to the compiler source (as long as you use the new binary once, of course). Does this part of the attack have a gene drive analogy?


That part of the analogy does persist. It is the feature. The gene drive doesn't just insert the change you want - it inserts the change, and the code to make the change:

Wild type mosquito DNA: =================

Desired change to DNA: =============XX==

DNA encoding Cas9 and XX-Payload: =C9(XX)=

Gene Drivered Mosquito (before activation) (change to compiler): ====C9(XX)=============

Gene Drivered Mosquito (after activation) (change to login code): ====C9(XX)========XX===

------------------------------

A normal mosquito mating with a XX mutant mosquito:

Mother Gamete: -----------------

Father Gamete: -------------XX--

Mendelian fraction of children (and grandchildren) are either --/--, --/XX, or XX/XX (if two chromosomal copies)

A normal mosquito mating with a Gene Drivered mosquito:

Mother Gamete: -----------------

Father Gamete: ---C9(XX)----XX--

All children: ====C9(XX)========XX=== (for all 'N' chromosomal copies)
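
The same mapping in toy Python, if that helps (names invented; the drive allele carries both the payload and the machinery to re-install itself):

    # Toy model of the diagrams above. A plain XX mutant dilutes away
    # Mendelian-style; a drive allele carries both the payload (XX) and
    # the machinery (C9) to copy itself into the other chromosome.
    def child(maternal_gamete, paternal_gamete):
        genotype = [maternal_gamete, paternal_gamete]
        if any("C9" in allele for allele in genotype):
            # The drive cuts the wild chromosome and copies itself across,
            # payload and copying machinery together (the 'compiler' change).
            genotype = ["C9+XX", "C9+XX"]
        return genotype

    print(child("wild", "XX"))      # ['wild', 'XX']: only half the grandkids inherit XX
    print(child("wild", "C9+XX"))   # ['C9+XX', 'C9+XX']: every descendant inherits both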


It's important to consider that genome-editing tools existed prior to CRISPR, and a novel technology is only as valuable as what preceded it. Tools such as zinc-finger nucleases and TALENs were used, and millions of dollars have been invested in developing those technologies to address the same problems to which CRISPR could be applied. The author also posits that CRISPR could be used to edit any organism, which sounds impressive until you start to break down what that means. Take for instance that >99% of all micro-organisms haven't been cultured, which means that it's really hard to make them genetically tractable. It's not like you just filter seawater, get some microbes, and mix some CRISPR-encoded DNA with them. You would have to know their sequences a priori. And to sequence them, you would have to lyse the cells, generate large amounts of their DNA, and sequence it, leading to a classic chicken-and-egg problem.

I'd also like to mention that this technology's precision varies, and due to the combinatorics of the systems you have to design in order to create edits, the ability to reliably edit different parts of the genome can be difficult to predict. It's not perfect. Furthermore, DNA self-replicates, mutates, and is subject to external forces which dictate whether those changes actually persist in the environment. What if a gene drive mutates, rendering it non-functional? The number of mutations that could render such a system non-functional vastly outnumbers the number of gene drives you would need to deal with said mutations. Although these types of mutations are unlikely, I'd just like to emphasize that DNA is constantly evolving, unlike, say, a series of UNIX commands.
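
To put very rough numbers on that asymmetry - all values here are invented for scale, not measurements:

    # Back-of-the-envelope count of single mutations that could break a drive:
    # an indel anywhere in the ~23 bp target site, or a loss-of-function hit
    # anywhere in the ~4.1 kb Cas9 coding sequence. Fractions are guesses.
    cas9_orf_bp    = 4100   # roughly the S. pyogenes Cas9 ORF
    target_site_bp = 23     # 20-nt protospacer plus NGG PAM
    lof_fraction   = 0.3    # guessed share of random mutations that disable the protein

    breaking_positions = target_site_bp + cas9_orf_bp * lof_fraction
    print(f"~{breaking_positions:.0f} positions where a single mutation can stop the drive")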

Also, spider-silk producing goats existed before CRISPR was used to create spider-silk producing goats. https://phys.org/news/2010-05-scientists-goats-spider-silk.h.... And so did glow-in-the-dark cats: http://www.bbc.com/news/science-environment-14882008. The only difference is that they didn't have to pay millions of dollars in licensing fees to the Broad Institute.

I get that this sounds like I'm saying CRISPR sucks. It doesn't. It's a very valuable technology, which has much to offer. I just want to readily identify the ceiling of the hype.


I recently went to see George Church and Siddhartha Mukherjee talk about genetic manipulation at Pioneer Works here in Brooklyn. Of course CRISPR/Cas9 came up for part of the discussion, but one question I wanted to ask and didn't get a chance to is: what is the error rate of the technique? I can't imagine that it is zero, and every article I read about CRISPR/Cas9 seems to leave that part out. Can anyone speak to that?


You got me wondering, so I did a quick Google search. A little over a year ago an advance was made that took it to a 60% success rate[1]. I'm not sure if any other advances have been made. Additionally, this[2] site seems to note multiple different success rates based on project goals. Another article[3] notes:

Why some guide RNAs work well, setting up Cas9 to cut and disable a gene nearly 100 percent of the time, while others bind but seldom or never knock out the gene, has been a puzzle since the technique was invented by Jennifer Doudna of UC Berkeley and Emmanuelle Charpentier of Umea University in 2012. The cutting efficiency varies with the type of cell and the particular cell line...

1: http://news.berkeley.edu/2016/01/20/advance-improves-cutting...

2: http://ko.cwru.edu/services/crispr_cas.shtml#successrates

3: https://phys.org/news/2016-08-crispr-cas9-genes-disrupt-dna....


I don't think the 'technique' is settled enough to have an answer to your question yet. There are a lot of different 'errors' that can happen, each of which has entire sections of researchers working to ameliorate them.

The floor to error is that a wild-type Cas9 protein with a naively generated guide RNA, and a payload DNA along with it, just dropped into a petri-dish of cells correctly edits some single-digit percent of cells. That's the crudest of experiments a grad student might do on their first try. And that it is that robust (in a biological laboratory) is precisely why it is so powerful. Single-digit percent activity on an unoptimized first try is actually pretty amazing in the context of biology.


Thanks, that's helpful to know. I'm actually asking within the context of that edit of the cells: is it a binary thing where either the cell is edited or it isn't, or can an edit carry some error rate in what actually gets pasted in? The cutting seems highly specific - you need some sort of homology between your guide RNA and the target sequence, right? It's the pasting I'm mainly curious about.

All the pop science articles seem to make it out to be a flawless system, and the engineer in me is highly skeptical of that narrative.


The cutting is mostly specific (given you have a reasonable target sequence - a pure 'GGGGGGGGGGGGGG' sequence is just chemically problematic anyway).

The cutting splits the DNA in half completely. It has to rejoin itself (by a not-well understood mechanism termed 'NHEJ'). This rejoining of the DNA is the kind of 'well it works (mostly, sometimes!)' part that is actively being better understood. Cas9 really has no relevance to this part of the process - unless it is used as a platform to attach DNA-repair machinery onto.

Like in all things biology, concentration and time matter. So if you leave Cas9 on 24/7 at very high concentrations with nothing to do, it will go ahead and cut all over the place. The goal now is to bring to bear everything else we know about biological expression to put the Cas9 homing system in its place, and only when it should be in place. We already know pretty good mechanisms to make sure Cas9 only turns on under certain conditions, at certain times, is inhibited under other conditions, or is otherwise made even more specific. Those are engineering 'details' at this point. And by 'details' I mean there is a huge academic and corporate effort underway figuring out how to most efficiently and accurately control Cas9's function.

Common results with a naive wild-type Cas9 (which we have moved on from in a lot of ways):

(no change): ====== --> ======

(unhealed breakage): ====== --> === & ===

(forwards insert): ====== --> ===(insert)===

(backwards insert): ====== --> ===(tresni)===

(tandem insert): ====== --> ===(insert)(insert)(...)===

(off-target insert): ====== --> =(insert)=====

(intermediate failure): ====== --> ===(insert) & === || ....

There are also very powerful uses for Cas9 that do not cleave DNA at all. Sticking a gigantic 'BREAK()' command onto Cas9 (that cannot cut DNA) can be very useful where the natural break command has been lost.


I've worked on this exact problem before. There are two kinds of possible errors: 1. On-site errors - small errors that occur at the spot you intended to edit. These occur maybe 30% of the time and usually will not break anything, but they are worth thinking about. What we know so far is that these are not random, and it will be possible to rationally design CRISPR sequences to reduce (or even purposefully cause) specific on-site errors.

2. Off-target errors - this is when Cas9 cuts the genome at the wrong spot, and it is the main error people are worried about. Computational efforts in guide design have attempted to reduce this, and will get more advanced as we learn more about the binding kinetics of Cas9. These also occur non-randomly, at highly quantifiable frequencies, and are due to biophysical rules. Newly discovered and engineered Cas variants also have substantially better specificity and will further reduce this problem. Off-target effects are usually not problematic and can be tested for (they will be bred out of CRISPR-ed crops); however, if you're going to do CRISPR in live humans, it is likely that regulatory bodies will require a demonstrated very low probability of off-target effects.
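
The crudest version of that computation is just a mismatch scan against the genome. A toy Python sketch (real guide designers use empirically fitted, position-weighted mismatch penalties rather than a raw count, and scan both strands):

    # Toy off-target scan: slide a 20-nt guide along a genome string and
    # report near-matches that sit next to an NGG PAM (required by SpCas9).
    def off_target_sites(guide, genome, max_mismatches=3):
        hits = []
        n = len(guide)
        for i in range(len(genome) - n - 2):
            site, pam = genome[i:i + n], genome[i + n:i + n + 3]
            if pam[1:] != "GG":          # no PAM, no cutting
                continue
            mismatches = sum(a != b for a, b in zip(guide, site))
            if mismatches <= max_mismatches:
                hits.append((i, site, mismatches))
        return hits

    guide  = "GACGTTACCGGATCAAGCTT"                 # made-up 20-nt guide
    genome = "TTGACGTTACCGGATCAAGCTTCGGAT" * 2      # made-up sequence with embedded sites
    print(off_target_sites(guide, genome))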

Basically, errors are an issue to spend a lot of time thinking about when designing a CRISPR system intended as a human therapeutic. For most scientists, however, errors are not terribly important or common, and they don't need anything beyond a bit of rational guide design for their experiments.


I've been interested in the same question. It's hard to say because the genetics papers are very technical indeed, and the error rate is going to depend on how you define it, what kinds of edits you were trying to make, how you measure/detect errors (as well as how reliable your process is there), whether you're doing it to in vitro cells or embryos or whole organisms (in increasing order of difficulty), and, since CRISPR has been advancing so fast, when the work described in a paper was done (something done in 2014 and published in 2016 is going to be much less relevant than work done in 2016 but not yet published). For example, kbenson quotes a 60% figure, but that is for number of successful edits as opposed to non-edits or edits somewhere else; which one you care about is going to depend on your purpose.

For 'edits somewhere else' - new mutations which might be harmful - as far as I can tell as an interested layman, the state of the CRISPR art for these 'off-target' edits is a low, effectively near-zero mutation rate:

- "High-fidelity CRISPR-Cas9 nucleases with no detectable genome-wide off-target effects" Kleinstiver et al 2016, https://www.gwern.net/docs/genetics/2016-kleinstiver.pdf

- "Rationally engineered Cas9 nucleases with improved specificity", Slaymaker et al 2016 https://www.gwern.net/docs/genetics/2016-slaymaker.pdf

- Church, April 2016: "Indeed, the latest versions of gene-editing enzymes have zero detectable off-target activities." http://www.wsj.com/articles/should-heritable-gene-editing-be...

- Church, June 2016: "Church: In practice, when we introduced our first CRISPR in 2013, it was about 5% off target. In other words, CRISPR would edit five treated cells out of 100 in the wrong place in the genome. Now, we can get down to about one error per 6 trillion cells...Fahy: Just how efficient is CRISPR at editing targeted genes? Church: Without any particular tricks, you can get anywhere up to, on the high end, into the range of 50% to 80% or more of targeted genes actually getting edited in the intended way. Fahy: Why not 100%? Church: We don't really know, but over time, we're getting closer and closer to 100%, and I suspect that someday we will get to 100%. Fahy: Can you get a higher percentage of successful gene edits by dosing with CRISPR more than once? Church: Yes, but there are limits." http://www.lifeextension.com/Lpages/2016/CRISPR/index

In human embryos specifically, the published work is below state-of-the-art (unsurprising, given the taboo) but shows increasingly good performance in terms of making the desired edit and also not making undesired edits:

- Liang et al 2015 "CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes" http://link.springer.com/article/10.1007/s13238-015-0153-5%2... http://www.nature.com/news/chinese-scientists-genetically-mo...

- Kang et al 2016, "Introducing precise genetic modifications into human 3PN embryos by CRISPR/Cas-mediated genome editing" https://www.gwern.net/docs/genetics/2016-kang.pdf http://www.nature.com/news/second-chinese-team-reports-gene-...

- Komor et al 2016, "Programmable editing of a target base in genomic DNA without double-stranded DNA cleavage" https://ase.tufts.edu/chemistry/kumar/jc/pdf/Liu_2016.pdf

- http://www.nature.com/news/chinese-scientists-to-pioneer-fir...

- "CRISPR/Cas9-mediated gene editing in human zygotes using Cas9 protein" Tang et al 2017 https://www.gwern.net/docs/genetics/2017-tang.pdf : no observed off-target mutations (but very small sample size); efficiency of 20%, 50%, and 100%


The error rate on cutting is low enough that "it is no longer the issue". But CRISPR is not the whole story for any given genetic manipulation project. Here's generally how it works:

1) cut DNA (this is the part that CRISPR does)

2) add replacement DNA

3) kill everything that doesn't have the replacement DNA in there

4) optional: repeat the process to 'clean up' your modification [0]

Basically, in the pre-CRISPR era, if you, say, wanted to make a transgenic mouse, you had to take some embryonic mouse stem cells, just add DNA, and hope that it found its way into the "right place". If you were adding a totally new gene, that was usually not a big deal, because you kind of don't care where it winds up: shoot first, do your experiment, ask questions later.

If you were knocking out or replacing a gene (aka specific location addressing), it is a big deal. Suddenly step 3), while necessary, is not sufficient to guarantee successful DNA modification. Furthermore, "checking" is really hard. You need to implant the modified stem cell into a mouse embryo, create a chimeric mouse (a mouse that has cells from two genomes), hope to hell that the cell you implanted into the embryo randomly got chosen to turn into an ovary, and then take the children of the mice which should have the gene modification. And then you check the genome of the mice, and it turns out that it's all wrong, and you have to start from scratch (I literally had a friend who unluckily spent the first 5 years of grad school repeating this process about 10 times, IIRC). This gets really complicated when step 1) is inefficient, cuts in the wrong place, or spontaneously integrates preferentially in the wrong place - off the bat you take a huge hit (a hit on the order of 90%, if you were lucky).

With CRISPR, you basically derisk step 1), pushing the 'hard part' of molecular biology elsewhere. Suddenly your step 3) selections, instead of being a low-yield probabilistic crapshoot, are near-quantitatively correct. And since it's also the first step, the one or two 'actually hard' parts of molecular biology - like, in the case of the mice, the complicated process of generating chimeras - can be tackled head-on by sheer numbers much more easily.

[0] in order to do 3) you usually need some extra stuff (resistance genes) that you might not want in your 'final work', so you might have to come up with a second stage to 'clean this up'.
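
To put made-up but representative numbers on why derisking step 1) matters so much:

    # Back-of-the-envelope: the yield of step 1) multiplies through every
    # downstream step. All rates here are invented for illustration.
    correct_integration = {"pre-CRISPR": 0.01, "with CRISPR": 0.50}
    germline_transmission = 0.3   # chance a chimera passes the edit on (made up)

    for era, hit_rate in correct_integration.items():
        attempts = 1 / (hit_rate * germline_transmission)
        print(f"{era}: ~{attempts:.0f} clone/chimera attempts per germline-transmitting mouse")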

Keep in mind that Church and Mukherjee have not really ever 'been in the trenches' actually doing these things (Church was a structural biochemist and later a yeast geneticist - yeast are easy peasy). Their grad students and postdocs have.


Here's something I don't understand about this, maybe someone can help explain it. To implement a change in a person, you would have to modify the DNA in every cell in their body, would you not? I understand how the change is made to DNA with CRISPR, that's clear, but how can that change be propagated to affect an entire individual? Even an embryo is multicellular, is it not?


If you were to modify the genome of a sperm cell and fertilize an egg cell with it, then the resulting individual will have the modifications in all of its cells.


> Even an embryo is multicellular, is it not?

I've not read the article yet, but an answer to this question is, "not at the very start".


The answer is that you can't, and there aren't any reasonable proposals to do so at the moment (which is not to say there won't be in the future). Modifying embryos at the 1-cell stage, however, is doable.


Relatedly, if you're an engineer looking to build the software that powers CRISPR[0], Benchling (YC S12) is hiring!

Most of the major research organizations working with CRISPR leverage Benchling -- the Broad Institute, various labs at Berkeley, companies like Editas, etc.

Shoot me an email: josh at benchling dot com

[0] https://benchling.com/crispr


If anyone wants to play around with designing CRISPR guides, check out the Desktop Genetics website [1].

I work on the R&D side of the company, but our front-end team has built a really nice website to design CRISPR gene knock-out/knock-in experiments from start to finish. It's all free to use until the final checkout, where you order your CRISPR guides.

[1] https://www.deskgen.com/


Slightly off topic, but where does a layman start learning the fundamentals of DNA and proteins, etc?

It seems to be such a huge body of knowledge and the field seems to be moving so quickly.


sigh for the life of me I never remember the expanded acronym of CRISPR off the top of my head.


Listening to CS people talk about biology is maddening.

"CRISPR... is to genomics what vi (Unix’s visual text editor) is to software. "

This terrible comparison between DNA and code has been happening for years. In 2012, Thiel did the same thing in CS 183. We get to look back and see that he was wrong, but we'll have to wait a while before the 'DNA is code' group will admit this essay is bad. At least it's not a TED talk, and at least it's better than Aubrey de Grey.

"Each VC on the panel made 2 predictions about technology in the next 5 years. The audience voted on whether they agreed with each prediction. One of my predictions was that biology would become an information science." -PT


So help educate. There is clearly an interest. If you know the bio then you know it gets a fraction of the funding and attention of standard tech stuff. Help smooth the analogies out, correct timelines, and otherwise inform outside industries about the powerful things biology is doing.


> Listening to CS people talk about biology is maddening.

The case of CS people commenting on biology via misinformed comparisons to computing eventually became known as the Andy Grove fallacy, named after his comments on the state of cancer therapies:

https://www.quora.com/What-is-the-Andy-Grove-fallacy


I'm curious; what don't you like about Aubrey De Grey? I haven't yet seen an interview or conversation with him where he doesn't seem extremely knowledgeable about his field, or lacks an understanding of anything relevant that he's queried on. He also seems to have a fairly significant number of credible people on his side in the biotech/medical fields.


Insofar as coding DNA sequences are like computer instructions, in that they direct the production of proteins, I don't see why they can't be thought of as computational. I get that there is more to it, and I agree that vi is a terrible analogy for CRISPR, but I don't think genomics and Turing-model computation are worlds apart. It is silly to see CRISPR compared to text editing, because it is significantly more restrictive than that. However, by definition genomics and computation share a basis in logic and combinatorics.


My microbiome is laughing its ass off. Stupid humans.


This is how bio-bloat ware starts.



