For this stage of drug discovery (finding potential hits from ligand databases, identifying potential targets, and refining ligands in silico without running real high-throughput lab work), machine learning may work better at lower cost. It's the direction (some) research is going right now.
Of course the real cost of drug development is running clinical trials, and losing something like 90% of ligands because they don't make it into the bloodstream, can't be synthesized at scale, or wind up having no efficacy in vivo for no reason anyone can fathom.
(Even making "copycat drugs" where you pick a known target, known ligand class, and try to minimally alter the synthesis process to get a newly-patentable product can sometimes have odd surprises, including the kind of odd surprise where being more specific to the identified target leads to diminished efficacy.)
This goes to show that fundamental research in biochemistry is still needed and we are nowhere near having "cracked the code", genomics notwithstanding.
> This goes to show that fundamental research in biochemistry is still needed and we are nowhere near having "cracked the code", genomics notwithstanding.
Personalized medicine is the future. You will find medication that is tweaked slightly at the molecular level to optimize therapeutic effects for the individual as opposed to a population of individuals.
I don't see how this conception of personalized medicine would work. For many drugs, the binding sites aren't terribly different, or different at all, between individuals. I can see this being different for larger populations. More likely you'll see drugs created that very specifically target something, so that the personalization will be at the level of the drug cocktail as opposed to synthesizing some new molecule. And that's not to mention the possibly wildly different effects you can get from changing something seemingly minor about the molecule. A good example of this is methamphetamine (yes, I know chirality isn't minor, but it's a good layman's example).
There are drug classes where for many reasons, some IP related, there are multiple (all FDA approved) very similar molecules, or the same molecule in different formulations. Those drugs all have some measure of efficacy (otherwise they wouldn't be approved) but there's always somebody in the trial on whom the drug didn't work at all (or somebody who died). Ideally personalized medicine would tell you whether you're that somebody, and you'd pick a different drug.
For most drug classes though the cost of obtaining enough patient information to make the call in the first place, even if it was feasible, is pretty high relative to just giving you a "test" with a given drug. The argument is often made for cancer because the drugs are more expensive and they tend to be given as cocktails (which means, more costs, more side effects). But with the way medicine currently is, it feels like even if hypothetically you could tell from a blood draw and a tumor biopsy that drug 1 is going to be 99% as effective as 4 first-line drugs together, patients for whom that 1% chance means they might die are probably going to go for the cocktail.
It's possible personalized medicine will lead to cheaper trials, and then cheaper approvals. For example instead of saying "I will make a drug that cures (alcoholism/cancer/MS/Alzheimer's) and it has to not kill patients and also happen to cure (alcoholism/cancer/MS/Alzheimer's) in a relatively large subset of the population, that I will spend tens of millions of dollars finding, testing, adjusting, retrying, etc" you could say "I will make a drug that cures dementia in female Caucasian patients between the age of 60 and 70 that have Southern European ancestry, eat a low-carb diet, and have a couple specific DNA markers. I will recruit a smaller sample of this population, get results, and my (much cheaper) drug can get approved for this population. Other, also cheap drugs will follow for (men/Asians/people over 80)." Now you have a feature engineering problem where you get to spend hundreds of millions of dollars paying data scientists to figure out that South European ancestry and some random protein that moves methyl groups around are the categories to structure your trial around. There's no free lunch.
For some patients, the recent "Right to Try" legislation will alleviate this problem, and provide an opportunity for custom-tailored cures. It won't be instantly available to the masses, but it'll provide a path for more experimentation.
Yup. And judging by their team and strategy, they are likely using a highly empirical ligand discovery platform (i.e., DEL, DNA-encoded libraries) and using the (admittedly best-in-class) MD for guiding med chem. Taking the hard problem of drug discovery and then saying you are going to do rational design (a road littered with dead bodies) on dynamic allosteric sites of challenging targets is a lofty goal, to put it mildly.
Currently doing some research on the industry and curious if you could link to some of the "dead bodies"? Also curious if you know how Verseon's model (https://www.verseon.com/) stacks up versus some of the others mentioned? Obviously it might be hard to say since all their drugs are currently in the pre-clinical trial phase but would appreciate any info you might have.
I don't understand companies like this. Its founder is worth something like $5b, but instead of financing this venture with a tiny fraction of his net worth, Shaw decided to go get funding from several other firms. I understand the idea that a strategic partnership can add value, but having complete control of a company is also very valuable, and I don't know why Shaw wants to just give that away when it would cost him very little to not do it.
This comes across as exceptionally naive. Protein-engineering is a fundamental technology, and we've been constrained computationally for decades. Few things would have as far-reaching an impact on society as being able to accurately predict protein structure would.
You can call me naive, or you can read my papers. For example, https://www.nature.com/articles/nchem.1821
is a massive simulation of a GPCR that my team designed and ran. It's a similar idea to the Relay work (simulate the dynamics of the protein, make a Markov model of the subsets with transition parameters). With that tech, we could easily implement computational mutagenesis. I agree 100% that protein engineering is a great technology and I wish we could do it rationally. But to be honest, none of my computational work can beat what Jim Wells did at Genentech in the late 90s (converting subtilisin to subtiligase through amino acid mutations).
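The Markov-model idea mentioned above can be sketched in a few lines: cluster simulation frames into discrete conformational states, then estimate transition probabilities between them. This is a toy illustration only (real Markov state models use dedicated tooling, careful state definitions, and validated lag times); the trajectory data here is made up.

```python
import numpy as np

def transition_matrix(state_traj, n_states, lag=1):
    """Row-normalized transition count matrix from a discrete state trajectory."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(state_traj[:-lag], state_traj[lag:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for unvisited states
    return counts / row_sums

# A fake 3-state trajectory standing in for clustered MD frames.
traj = [0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1]
T = transition_matrix(traj, n_states=3)
# Each row of T is a probability distribution over next states.
assert np.allclose(T.sum(axis=1), 1.0)
```

With a matrix like this in hand, one can ask which states are metastable and how fast the system moves between them, which is the kind of information a "computational mutagenesis" workflow would compare across protein variants.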
Here's another paper I wrote, https://www.ncbi.nlm.nih.gov/pubmed/24265211 which demonstrates there was a systematic error in protein force field implementations; our work was a major breakthrough in improving structure prediction.
Also, being able to predict structures isn't sufficient to engineer protein function.
Please don't call people naive; especially if they are experts in their field.
I'm confused as to why you're addressing this commenter using "argument from authority" when you seem to be weakening your position, suggesting that studying protein dynamics has led to significant advances in the field. It doesn't change the fact that you made a flippant remark that a team of some of the most experienced drug discovery scientists in the industry are wasting their time using this approach (despite not having worked in drug discovery, i.e. not an expert in the field), instead of just explaining why you believe this. Didn't mean to make it personal, that's just how I interpret your comment.
There have been plenty of amazing discoveries and advances in the area of protein dynamics, but I don't think that anybody has the authority to claim that this particular approach is going to revolutionize pharmaceutical discovery.
If you'd like, I can also show you a few of my drug discovery papers. I'm actually one of Shaw's biggest competitors in the field and advise venture capital firms that consider investing in companies like Relay.
Relay is not doing protein engineering or working on predicting protein structure. They are making models of protein dynamics to assist in drug discovery (often using already determined structures). We both disagree with the parent commenter that it's a waste, and to claim that Murcko and D.E. Shaw are going in "blindly" would be ignoring decades of research on protein dynamics of some of the hardest drug targets out there. The fact remains that there aren't many success stories of using simulations of protein dynamics to accelerate drug discovery. Computational chemistry protocols used routinely in pharma drug discovery typically do not include this type of detail.
I wouldn't bet against Relay right now; historical precedent is irrelevant here. There's an enormous opportunity for application of ML techniques in protein engineering (as an umbrella term...). I've really been itching to take a crack at it, but it's a moonshot...
A well-positioned player with the right people could make a killing in this market right now.
I’m not sure why you consider it a great deal of waste.
Building better models that simulate a human biological process must have some sort of payoff? It increases human knowledge, and should provide a foundation on which others can build.
I am highly supportive of DE Shaw's efforts to make the world better. On the other hand, OP has a point: many of these sorts of things actually do not have a payoff. I knew some guys working on protein folding in the 90s. No payoff. One of 'em must be close to retiring by now. Shaw offered me a job to do an innovative kind of FTIR back when I graduated in 2004. Pretty sure that didn't pay off either. For that matter Shaw's original research was into the 'transputer' which also didn't pay off.
The man's a risk taker, and perfectly entitled to spend his post tax income on whatever he likes. But a lot of science and stuff is dead ends, and, of course, avoiding known dead ends. Might very well be a dead end. He's been dumping considerable resources into this project since 2001; that's rather a long time ago now.
> Building better models that simulate a human biological process must have some sort of payoff? It increases human knowledge, and should provide a foundation on which others can build.
That really is still not clear for the models described in the article. In reality, these models are rehashes of what Murcko was having people do 25 years ago. Big articles were written about Vertex applying Free Energy Perturbation with pictures of Murcko accompanied by David Pearlman and Govinda Bhisetti. A book was written about these efforts, The Billion-Dollar Molecule, and more recently a sequel which partially describes how Vertex's efforts using these methods failed (The Antidote).
Obviously, computational power has improved by orders of magnitude since 1989. So have our parameters for modeling proteins and small molecules. But it really is still not clear that MD or FEP provides any useful insight into proteins that cannot be obtained more simply via NMR-based screens and linear regression. In fact, I recently saw a talk by Relay's VP of Computation where he described using Free Wilson Analysis at Relay[1] for their drug discovery, which is a linear regression method from 1964...
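For readers unfamiliar with it, Free Wilson analysis is just ordinary least squares: activity is modeled as a baseline for the parent scaffold plus additive contributions from each substituent. A minimal sketch, with entirely fabricated data (the substituent labels R1/R2 and the pIC50 values are illustrative assumptions, not from any real series):

```python
import numpy as np

# Rows: compounds; columns: presence (1) / absence (0) of substituents
# at two positions (hypothetical R1 = Cl, R2 = OMe).
X = np.array([
    [0, 0],
    [1, 0],
    [0, 1],
    [1, 1],
], dtype=float)
activity = np.array([5.0, 6.1, 5.4, 6.6])  # fabricated pIC50 values

# Add an intercept column for the unsubstituted parent compound.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, activity, rcond=None)
baseline, r1_contrib, r2_contrib = coef
# The fitted coefficients are the estimated additive contribution of
# each substituent to activity; here R1 contributes more than R2.
```

The appeal (and the limitation) is exactly this simplicity: it assumes substituent effects are independent and additive, which is why it predates, and often survives alongside, far more expensive physics-based methods.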
Great context. You don't have to go too far back in the archives of this site to find discussion on Andy Grove's similar big project to use in silico to destroy traditional pharma discovery. We haven't heard much from that project since.
A waste for society? No. A waste for the people doing it who are never going to be appropriately compensated for the value of their failure to society? Yes.
The churn that arises as a consequence of the fact that we don't know how to reward failure (or even merely punish it less) is the real waste.
Yes, but if the scientific value capture problem were solved then science could scale appropriately rather than beg for charity.
I was (and would be) much more valuable to society as a structural biologist than I am as a software engineer, but the market disagrees so vehemently that it's cost prohibitive for me to fight it, and the result is that we all lose. The reason why it disagrees is not hard to understand and not particularly difficult to categorize as a market failure rather than a "hard truth." I can't really think of a good way to actually address the problem, though, so mostly this just amounts to venting.
He's on the admin / capital side of the equation and will do fine, I'm sure.
> paid competitively
That's not an endorsement and hardly an excuse. Unless things have changed dramatically in the last year or two, pay for computational / structural biologists is 1/2 to 1/3 of what a person with the same skills gets for helping build Uber for Poodles, which is in turn 1/2 to 1/3 of what a person with the same skills gets at AppleGooFaceZon or on Wall St.
The field, like many scientific fields, runs on passion, naivete, and green cards, and is quite abusive to the people doing the actual work, even though it showers adjacent concerns with money.
If science didn't suffer from such a severe value capture problem, scientists could win a seat at the table, but it does, so practicing scientists get shut out and the money goes instead to capital/risk, showmen (often "graduated" scientists themselves), lawyers, etc.
I didn't say that building better models didn't have a payoff. I'm saying this particular approach shows little to no evidence that it's truly revolutionary or even a marginal/incremental improvement over random guesses.
A great deal of waste? 25 years ago we thought that genomics would boost drug discovery and development. It didn't pan out quite like that but thanks to the Human Genome Project costs for sequencing have dropped to almost nothing, and we have learned a great deal about protein evolution. Talk about waste!
I wouldn't really say the HGP taught us a lot about protein evolution. The original few genomes were snapshots of individuals or amalgams. We do know a bit more about evolution from sequencing many more people (post-HGP), but it hasn't really been super-productive compared to previous work from before the HGP (note, my postdoc work was on protein functional evolution).
For example, Carl Woese, using only 16S rRNA from a wide range of species and mostly hand-computed similarity clustering, managed to find a previously unrecognized domain of life (the Archaea). I don't think we've really had any truly revolutionary discoveries like that from HGP and post-HGP sequencing that focuses on humans.
That's like saying the single hardest problem in AI is having a beyond human intelligent general AI.
Sorry, not trying to be pedantic, but predicting what xenobiotics (i.e., foreign molecules) will do in the human body, much less a diverse sample of them, is a really, really hard problem. One that will only ever be approximated, and poorly at that. I'm sure there will continue to be approaches to reduce risk, but clinical trials only exist because they have to: the only way we can evaluate safety and efficacy is empirically.
Predicting things like hERG toxicity is somewhat doable. And I think the realistic bright future of AI in medicine will include better on-target tox prediction and some off-target effects. But it's hard enough to design the drug to bind the target; imagine predicting its affinity for all other proteins and their isoforms across the population...
For one, there are more candidates than capacity to run clinical trials. Costs & timelines for those have been going up over time. Lots of high-potential compounds just sitting on the shelf.
Most of the cost is regulatory. But there is a meaningful difference between a world in which it costs $1M to screen and a world in which it costs $100k to screen. In the latter world, startups become viable with intent to find a drug candidate, rather than only being viable if they already have one. That's a big important difference, and one that was only otherwise going to be solved by everyone transitioning from small molecule to gene therapy development.
The real cost occurs in Phase 2 clinical trials, where 70% of promising-looking compounds turn out to be no better than placebo. The body is a complex system with a huge number of enzymes and feedback circuits, so this outcome isn't much of a surprise.
You can't blame the FDA. But thanks to Trump's "right-to-try" phase 2 will be a smaller hurdle now. Good luck, let's see how costs develop.
No. Phase 3 is where all the cost of clinical trials comes from, not phase 2. P2 usually involves fewer than 100 patients and lasts for only a few months; P3 often runs to thousands of patients and can last for years, and then can be followed by supplementary P3 and even P4 trials, multiplying costs further.
Phase 2 rises to a high fraction of clinical trial cost only in diseases with a very high mortality rate, like cancer. Then the drug may undergo a variant of P2/P3 trial where the general patient populace serves as early P3 trial participants, thereby reducing P3 trial cost normally paid entirely by the drug manufacturer. But even then, such a phase 3 trial will greatly outcost any phase 2, and will much more definitively answer the questions of drug safety and efficacy.
The rules of those kinds of clinical trials (mostly cancer) where the "right-to-try" law is applicable thus will depart considerably from standard drug safety standards for almost all other kinds of drugs. That's because safety concerns are strongly deemphasized when investigating new cancer treatments due to 1) the high toxicity of alternative cancer therapies (usu. chemo), 2) the low survival rate and lifetime typical of most cancers, and 3) the lack of better alternative treatments.
The lower standard for efficacy (and lower statistical power) inherent in phase 2 trials will certainly play a greater role in "right-to-try" than is usual for typical drugs. But this does NOT imply that taking the greater risks that will arise with these more speculative unproven therapies (like the Laetrile dud cancer therapy ca. 1975) are likely to return greater rewards. In all likelihood, by the time "right-to-try" comes into play for a patient, all hope will have been lost medically. Thus the fraction of cases where a patient will actually benefit from such a "hail Mary" therapy that's been facilitated via this law is essentially zero. Nor is the lack of methodical treatment regimen endemic to these still half-assed therapies likely to teach us much in the process of calling upon them just one second before midnight.
No, "right-to-try" seems mostly an invitation for charlatans to charge megabucks for nutty therapies that have shown no level of success. If they had, "right-to-try" wouldn't have been needed. The era of hoping for a magic elixir is over, except in Hollywood and Trump's Washington, that is.
Interesting point, though this ignores the (surprisingly) successful repurposing strategies that reduce future regulatory burden and reduce or even eliminate high-throughput screening campaigns. That well will dry up sooner or later, but other discovery platforms (e.g., phage display) have become commoditized and robust enough that they are pretty accessible even to lightly capitalized startups, if finding a ligand is what you need to get off the ground.