I've used Transcriptic in a research setting for a while now. From a user's perspective, it's like looking into the future, and it's awesome.
From the business side, this makes YC a more attractive option for biotech startups. The life sciences are still very capital intensive. While the New YC Deal helps in this department, many businesses still need to look toward an STTR/SBIR grant from the NIH to get to the stage where they have a product to show investors.
Moves like this probably won't change that for a ton of companies, but there are a few on the margin who may be able to pursue an idea through YC with the benefit of the extra $20K in fuel.
We find it to be pretty competitive for our needs (PCR, genotyping, long-term storage).
PCR is something like ~$1.50/rxn with our standard genotyping protocol, which works out to ~$0.30-0.40 more than the same reaction run in house. If I recall correctly, the cost per rxn goes down if you run more in parallel, because they share the same instrument time.
Setting up that reaction might take me ~0.5 hours base, and 0.05 hours for each subsequent reaction prepared in parallel.
Grad students are cheap, but even valuing my skilled labor at minimum wage, it's cheaper to use Transcriptic.
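For anyone curious, here's a back-of-the-envelope version of that comparison in Python, using the rough figures above. The reagent cost, labor rate, and per-reaction times are my assumptions, not quoted pricing:

    # Rough cost comparison using the figures above; all numbers are assumptions,
    # not quoted pricing. It also ignores the per-rxn discount for parallel runs
    # mentioned above, so it understates the service's advantage at scale.
    SERVICE_COST_PER_RXN = 1.50   # $/reaction on the service (approx.)
    IN_HOUSE_REAGENTS    = 1.10   # $/reaction in house (~$0.40 less, assumed)
    SETUP_HOURS          = 0.5    # my time to set up the first reaction
    MARGINAL_HOURS       = 0.05   # my time per additional parallel reaction
    LABOR_RATE           = 7.25   # $/hour, US federal minimum wage

    def in_house_cost(n_rxns):
        """Reagents plus my labor for a batch of n reactions prepared in parallel."""
        hours = SETUP_HOURS + MARGINAL_HOURS * (n_rxns - 1)
        return IN_HOUSE_REAGENTS * n_rxns + LABOR_RATE * hours

    def service_cost(n_rxns):
        return SERVICE_COST_PER_RXN * n_rxns

    for n in (1, 8, 24, 96):
        print(f"{n:3d} rxns: in house ${in_house_cost(n):7.2f} vs service ${service_cost(n):7.2f}")

Even at minimum wage, the setup overhead makes single reactions and small batches clearly cheaper on the service; at large batch sizes it's the per-rxn volume discount (not modeled here) that keeps it ahead.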
The biggest pain point I see in this market is that experimental parallelization is an intervention-heavy process. Even ignoring equipment costs, it's capital-intensive (unlike, say, deploying to AWS). And finally, obtaining usable data still requires experiential knowledge. There's something about knowing and feeling the data (yeah, that's awfully fuzzy) that is still an important part of obtaining good results [0]. So for any biological process that is being parallelized, the operators are going to want to own the machines anyway.
In order to capture a real market, you're going to have to figure out a way to offer parallelization services: take a non-parallel experiment with certain parameters and scale it up on behalf of the user. The user has an experimental plan and just 'hands it over' to Transcriptic (a rough sketch of what that hand-off might look like is at the end of this comment). I still worry about the experiential knowledge part; putting the experimenter one step away from the experiment is potentially counterproductive.
[0]http://onlinelibrary.wiley.com/doi/10.1002/pro.2339/full
In this paper, the grad student (and lead author), who had spent four years of her graduate work on a previous paper, had a nagging feeling that the data were strange. By actually looking at the wells, she figured out post-publication (with nothing to gain) that the protein was sticking to the sides of the 96-well plate, making the observational data artefactual. Then there was the question of how to do more experiments to prove that this was what was going on, and then the political problem of convincing her grad advisor to publish a retraction (at least it was a retraction worth a 10-page paper and a new citation). The story has a happy ending: she got a position at a pharma company largely on the back of her due diligence.
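Back to the parallelization-as-a-service idea above, here's a minimal, purely illustrative sketch of what 'handing over' a single experimental plan for scale-up could look like. None of these names are a real Transcriptic API; they're just to make the shape of the hand-off concrete:

    # Hypothetical sketch only: the user describes one reaction, the service maps
    # it over a sample list and assigns wells. Not a real Transcriptic API.
    from dataclasses import dataclass

    @dataclass
    class ReactionPlan:
        reagents: dict       # reagent name -> volume, e.g. {"master_mix": "12.5:microliter"}
        thermocycle: list    # cycling steps, e.g. [{"temp": "95:celsius", "time": "30:second"}]

    def scale_up(plan, sample_ids):
        """Expand one non-parallel plan into per-well instructions on a 96-well plate."""
        wells = [f"{row}{col}" for row in "ABCDEFGH" for col in range(1, 13)]
        if len(sample_ids) > len(wells):
            raise ValueError("more samples than wells on one plate")
        return [
            {"sample": s, "well": w, "reagents": plan.reagents, "thermocycle": plan.thermocycle}
            for s, w in zip(sample_ids, wells)
        ]

    plan = ReactionPlan(
        reagents={"master_mix": "12.5:microliter", "primer_mix": "2.5:microliter"},
        thermocycle=[{"temp": "95:celsius", "time": "30:second"}],
    )
    runs = scale_up(plan, [f"sample_{i:03d}" for i in range(24)])
    print(len(runs), runs[0]["well"])   # 24 A1

The experiential-knowledge worry still applies, of course: the hard part isn't this expansion, it's knowing which of the 96 wells to distrust.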
I think the opposite is actually true: you get better data when you don't feel the samples. This is a highly unpopular view right now, but I have to wonder what's going on when things like Amgen being able to reproduce only 11% of foundational papers (http://www.nature.com/nature/journal/v483/n7391/full/483531a...) are happening. There are things that are hard to automate because of their mechanical nature, but I think that for the reproducibility of science, somewhat distancing the human from the process is a good thing. Of course, this adds short-term costs (which may sometimes be unacceptable) and requires a lot of behavior change in how people are used to working. I will say that we put a lot of thought into giving you the fidelity of interaction such that you can still make "breakthroughs from your errors" on Transcriptic, and improving those reporting capabilities is an ongoing process.
I'll also say that this challenge is bigger than just one company. Some things may make more sense to do via Science Exchange, for example if the method requires very customized hardware or there are only a few experts in the world sufficiently familiar with an unusual method's sensitivities. I'm also excited to see what Riffyn comes out with to help labs understand where reproducibility comes from. We're just getting started, but I can't see a path forward that puts more humans at benches rather than fewer. The humans should be free to do real science.
I have exactly the opposite opinion. Even scientists are captivated by scientism - the idea that there is something poisonous about human subjectivity and imprecision, and that removing subjectivity (and gathering more data) is necessarily a good. Sydney Brenner, for example, has a 'money quote' about the path that biologists take: "low input, high throughput, no output science". Note that this quote doesn't make sense in an environment that doesn't faddishly flock to high-throughput 'big data' solutions.
Your example is deeply flawed. The biggest consumer of highly parallelizable workflows is the pharmaceutical industry. Highly parallel medchem was a big fad, and the number of drugs it produced for the effort is disappointing. The fact that Amgen could reproduce only 11% of those foundational results is, if anything, a condemnation of parallel scale-up, at least in the context of an operator with a strong motive for selective interpretation.
Another big problem is that when you bring your numbers up, you 'get what you are looking for': precision optimization can optimize for an artefact. A joke I like to make is that sloppy science is good, because if you keep seeing the same result on a noisy platform, what you're seeing is probably real and, more encouragingly, robust.
You are absolutely correct that the lack of standardization in assay preparation harms productivity and reproducibility in the sciences.
Molecular biology has embraced standardization and automation to a great degree but it is an outlier.
I've encountered enormous resistance from practitioners when trying to introduce automation into cell biology.
Biology is enormously held back because the tools for improving productivity have to come from computer science, but there is a huge amount of interdisciplinary friction.
Hi, I'm a researcher currently and your stuff looks really cool.
I have a number of questions:
1. I saw that flow cytometry is listed on your features. Can you explain how you guys manage this?
2. How do you pay for reagents, etc? Or do people send their own?
3. Any chance for more tissue-culture-friendly technologies? Centrifugation, etc.?
4. I noticed you offer use of a Tecan plate reader; how does the user configure the parameters that the plate reader uses? I have a few templates which I use when I read plates, but I'd hate to have to redefine all of them.
1. Right now we're only doing analysis (no sorting). Eventually we'll develop an interactive UI where you'll be able to, e.g., draw gates and get feedback in real time, but right now it's just setting up analyzer runs fed by an autosampler that takes a microplate just like every other device.
2. You can send them in or buy commercially available reagents through us (sadly mostly not covered by the platform credits).
3. Yes! Lots more on tissue culture capabilities coming soon.
4. The only parameters you can set are the ones exposed in the low level API docs. So, we don't let you configure for example custom spatial reading patterns right now. If that's a deal breaker send me an email and we can discuss.
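To give a feel for it, here's a rough sketch of keeping a read "template" in code so you don't have to redefine it each time. The field names below are just illustrative placeholders, not the exact parameters from the API docs:

    # Illustrative only: a reusable absorbance-read "template" kept as data so it
    # can be versioned and re-applied. Field names are placeholders, not the
    # exact parameter names from the API docs.
    ABS_600_TEMPLATE = {
        "op": "absorbance",
        "wavelength": "600:nanometer",
        "num_flashes": 25,
        "wells": [f"{row}{col}" for row in "ABCDEFGH" for col in range(1, 13)],  # full 96-well plate
    }

    def make_read(plate_ref, template, dataset_label):
        """Bind a saved template to a specific plate and a name for the resulting dataset."""
        return {**template, "object": plate_ref, "dataset_label": dataset_label}

    read_instruction = make_read("growth_plate_1", ABS_600_TEMPLATE, "od600_t0")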
This sounds very interesting.
Just curious: could I, for example, culture large numbers of well plates with specific cell lines, add certain substances to said well plates, and run RNA arrays or RNA-seq for each well?
Any plans on adding ways to culture cells in 3D? Or culture Organoids?
Could I visually inspect the growth of all those things growing in my plates (using fluorescence)? Can you quantify the growth for me in an objective manner?
You know, we biologists like to see our cells; we generate hypotheses from the weird growth patterns or strange behaviors.
You are addressing a very specific subset of biologists, not the classic ones. I guess it highlights the importance of interdisciplinary education :)
Some high level publications using your service would help, but you know that. Anything planned?
Any interest in working with expression re fermentation down the line? CROs working specifically with .25-2L tanks are very tricky to find, price and deal with, and are not local to many biotech firms in the bay. About 25% of my time is spent just managing our CRO/CMO, and they only have 4x2L tanks. I used to work for a place with multiple parallel tanks (30+) that made DoEs expedient and was a great resource - but sadly not something I can tap into at my small biotech firm.
Combining what you're building with something like an ambr250, or even just a bunch of Applikon micros or well-plate fermenters, could see a lot of action (though something like the ambr250 would fit your business model better, robotics > people).
I'll be contacting you for information about FACS, protein quantitation and cell viability work. Stuff I definitely want to farm out.
So someone would ship samples to you guys, then your LIMS/robotic automation handles data-flow and actual physical work? Where are you guys storing all the data?
I have two hesitations to point out:
1) I worked in DNA sequencing, and the small issues that cropped up on every major platform (MiSeq/HiSeq, Ion Torrent, 454, etc.) seem like they would make automating fixes difficult. I guess if you keep a stricter list of reagents, parameters, etc., then you could help prevent this, but then people aren't pushing the edge of the science quite as much.
2) So much data! My systems used to generate over 200 GB per day. Good luck downloading that via any API if you have anything but fiber. Do you intend to allow computation to be run on the data as a cloud service? If so, I can see this going big places... as long as you allow full control of the VM for all the bio-hats and their custom wizjangles.
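Rough arithmetic on that 200 GB/day figure, just to put a number on the pipe you'd need:

    # What does 200 GB/day imply for the network? Assumes decimal gigabytes and a
    # perfectly sustained transfer; real-world overhead only makes this worse.
    GB_PER_DAY = 200
    bits_per_day = GB_PER_DAY * 8e9
    required_mbps = bits_per_day / 86_400 / 1e6
    print(f"sustained rate needed: {required_mbps:.1f} Mbit/s")   # ~18.5 Mbit/s

    for name, mbps in [("100 Mbit/s office line", 100), ("1 Gbit/s fiber", 1000)]:
        hours = bits_per_day / (mbps * 1e6) / 3600
        print(f"{name}: {hours:.1f} h to move one day of data")   # ~4.4 h vs ~0.4 h

So it's feasible over fiber, but on a shared office uplink you'd spend a good chunk of every day just moving data - hence the appeal of running the computation next to the data.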
I'm out of the industry and have one year left on my non-compete, but I wish you the best of luck! Especially on the LIMS integration: a good LIMS is freaking expensive!
This is the first I've heard of Transcriptic, but it sounds amazing.
Initially I was sceptical of YC working with startups which would have more conventionally come out of universities etc., but this is the sort of technology that has the ability to completely revolutionise scientific research.
This is Omri (founder of Genomecompiler.com). I'm always amazed at how many biologists think their work is pipetting small amounts of liquid and performing massively low-productivity experiments, rather than their real work of increasing our understanding of nature and finding solutions to real-world problems (like disease, hunger, aging, and running out of civilization-critical commodities) using the best available tools.
Robots aren't taking our jobs - they help us be more productive, so a biology Ph.D. might in the future get paid like a CS undergrad!
Very cool concept! I guess it's also another point to show that the number of jobs (in this case, lab assistants) that can't be automated away is getting smaller and smaller.
Just an observation, though; I'm not a Luddite about this.