
  approved treatments in humans often lag 10 years or so 
  behind what's known to work in animal models
The reason for this is regulation and IRB review. I direct you to Banting and Best (which I also linked below):

http://www.nobelprize.org/educational/medicine/insulin/disco...

  Early in 1921, Banting took his idea to Professor John    
  Macleod at the University of Toronto, who was a leading 
  figure in the study of diabetes in Canada. 

  Banting and Best began their experiments by removing the 
  pancreas from a dog. ... By giving the diabetic dog a few 
  injections a day, Banting and Best could keep it healthy 
  and free of symptoms.

  The team was eager to start testing on humans. But on whom 
  should they test? Banting and Best began by injecting 
  themselves with the extract. They felt weak and dizzy, but 
  they were not harmed.

  In January 1922 in Toronto, Canada, a 14-year-old boy, 
  Leonard Thompson, was chosen as the first person with 
  diabetes to receive insulin. The test was a success. 
  Leonard, who before the insulin shots was near death, 
  rapidly regained his strength and appetite. The team now 
  expanded their testing to other volunteer diabetics, who 
  reacted just as positively as Leonard to the insulin 
  extract.

  The news of the successful treatment of diabetes with 
  insulin rapidly spread outside of Toronto, and in 1923 the 
  Nobel Committee decided to award Banting and Macleod the 
  Nobel Prize in Physiology or Medicine.
Two years from idea to animal trials to safety trials (self-experimentation) to human trials to Nobel Prize. That was when pharma moved at the speed of software; that is what a landscape free for innovation can produce.

What if we tried that today?

You mean, just rely on the judgment of the experts involved and the verbal consent of the patients?

You mean, just allow the doctors to come up with whatever dose they felt warranted and patients to take whatever dose they feel comfortable with?

You mean, resist having some kind of ostensibly judicious central authority approve all such decisions, and rely on the distributed judgments of all consenting participants involved?

Yes. The typical response is that this is a recipe for anarchy. But history shows that it is a recipe for Nobel Prizes, and it is not like 1920s America was much like Somalia.

Would there be risk? Sure. Some people will not be helped, and others might even be harmed by new and unproven treatments. That's the price if we're serious about rapid progress, or really any progress. There must always be a first human trial; why not as soon as possible, if people really are dying?

Needless to say, this kind of boldness won't fly in the modern US. Outside of the internet, the country has become just too risk averse, too wealthy to pay the price of progress. Our task as hackers then is to create at least one spot on this earth where patients can take whatever treatments they want, where entrepreneurs/technologists can invent whatever drugs/devices they want, and where no regulator has the power to intercede between these two consenting parties. And where we can go from idea to human trials as fast as the patient pleases.




http://en.wikipedia.org/wiki/Thalidomide#Development:

"Thalidomide was developed in 1954 by the CIBA pharmaceutical company, marketed under at least 37 names worldwide. It was prescribed as a sedative, tranquilizer, and antiemetic for morning sickness.[9] Thalidomide, launched by Grünenthal on 1 October 1957"

So, about three years, but it points to the problem: the judgment of the experts may be awfully wrong.

Also: it is true that the Western world is more and more risk-averse, but we are more permissive in allowing trials on patients who would die soon anyway. I doubt it would be two years from idea to Nobel Prize, but http://en.wikipedia.org/wiki/FDA_Fast_Track_Development_Prog... states a goal of 60 days for review, and says that goal is generally met.


So, a few points (I didn't downvote you).

1) First, FDA fast-tracks many bad things. Hundreds of millions of people were irradiated by scanners that FDA waved on through because a fellow .gov agency (TSA) sponsored them. So: even the risk-averse can't trust a single centralized regulator to be "risk-averse" rather than "pro-government". We need multiple regulators (see my posts elsewhere in the thread), where you can use things approved by the slower/expensive/safest one while I can use items approved by the faster/cheaper/riskier ones.

http://arstechnica.com/science/2010/11/fda-sidesteps-safety-...

  Dr. Holdren passed the letter on to the Food and Drug 
  Administration for review. But, in the FDA's response, the 
  agency gave the issues little more than a data-driven brush 
  off. They cite five studies in response to the professors' 
  request for independent verification of the safety of these 
  X-rays; however, three are more than a decade old, and none 
  of them deal specifically with the low-energy X-rays the 
  professors are concerned about. The letter also doesn't 
  mention the FDA's own classification of X-rays as 
  carcinogens in 2005.
2) Second, the formal IND fast-track program you mention is very political to get into (on the device side there's something similar called Pathway to Innovation). Moreover, FDA doesn't count days like you and I count days. It's like an NFL game which is 60 minutes but actually takes three hours; every time they email you back, it stops their clock. And they can email you back to ask for data that takes months to gather. This is from a device consultant but the principle is the same for drugs:

http://www.myraqa.com/blog/how_long_is_90_days

  By law, FDA must respond to your 510(k) within 90 days, and 
  typically they do. The thing you have to understand is that 
  FDA measures 90 days about the same way the NFL measures 
  the 60 minutes in a football game. It's not unusual for the 
  clock to spend more time stopped than running.
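The clock-stopping dynamic can be sketched numerically. Here is a toy model of a review timeline (all events and durations are hypothetical illustrations, not from any actual submission):

```python
# Hypothetical review timeline: each tuple is
# (event, calendar days elapsed in that span, whether the
#  agency's review clock was running during that span).
# The clock stops whenever the agency sends a request for
# more information and restarts when the sponsor responds.
timeline = [
    ("initial review",            60, True),
    ("agency requests more data", 120, False),  # clock stopped while sponsor gathers data
    ("review resumes",            25, True),
    ("agency asks a follow-up",   45, False),   # clock stopped again
    ("final review",               5, True),
]

calendar_days = sum(days for _, days, _ in timeline)
clock_days = sum(days for _, days, running in timeline if running)

print(f"Review clock:  {clock_days} days")    # within the 90-day statutory limit
print(f"Calendar time: {calendar_days} days")  # what the sponsor actually waited
```

In this sketch the statutory 90-day clock is honored even though 255 calendar days elapse, which is exactly the NFL-game effect described above.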
3) Third, regarding thalidomide, as you probably know there were three major catastrophes that increased FDA power (the 1906 publication of The Jungle, which birthed the proto-FDA; the 1938 elixir of sulfanilamide; and 1962 thalidomide) and another major catastrophe in the early 90s that reduced FDA power (FDA delays on AZT and the slowdown of AIDS drugs).

Thalidomide in particular is to the FDA what 9/11 is to the TSA: it's the justification for everything they do. If you get into the history books you'll see that Frances Kelsey never actually suspected teratogenic effects; she suspected neurological issues. Moreover, thalidomide was actually a very efficacious drug for morning sickness; it was just unsafe. Yet the 1962 revision to the FD&C Act added efficacy testing on top of safety testing.

That's weird. The thing is, toxicological/safety testing, even aggressive safety testing, is "only" in the tens of millions of dollars, not billions. It's efficacy testing (and then comparative effectiveness) that really piles on the dollars. If the lesson of thalidomide was that we should do aggressive safety testing, then no one got the message, because the 1962 Kefauver-Harris amendments to the FD&C Act meant we ended up spending several hundred billion dollars on efficacy instead.

Perhaps then the lesson from thalidomide might be that pregnant mothers should be much more risk-averse in what drugs they take. It's not really a lesson that says "we need to delay all drugs more", because due to pharmacogenomics some side effects are only going to be apparent when you introduce them into humans on a large scale anyway.

Moreover, risk can't be eliminated, and different people will have different risk profiles. What if a 70 year old man with terminal cancer wants to take an experimental, non-FDA approved drug? Do you sue like the FDA did in Cowan vs. US to prevent him from doing so?

For that matter, what if a 25 year old pregnant woman wants to take a new drug? Do we prevent her from doing so? Maybe we should, but we currently don't stop pregnant women from drinking alcohol or smoking cigarettes.

One has to think very carefully about whether every tragedy means one must ban or mandate something with a federal law.


While I don't disagree with most of your points here, I want to know more about your opinions on efficacy testing. It is definitely a strange corner of the FDA mandate and seems most justified by their marketing restriction power---the principle that medical marketing claims should be made from a position of earned, valid authority.

But it's definitely the most expensive and difficult-to-test component of FDA regulation. It's also awkwardly theoretical due to the sterility and white-coatedness of the testing procedures (you and I both have something to say against RCTs). But at the same time, a market inundated with false claims to efficacy would be terrible. The current mobile health market is a fair comparison---many of them are efficacious, all of them would love to claim it, but nobody knows which ones.


So, regarding efficacy testing, I think the costs/benefits have to be assessed in full context. If you go back to the time before the FDA, it was a time of incredible wonder drugs and useless patent medicines. Kind of like the Internet: the price of being able to put up a domain name in 10 minutes with no centralized check for accuracy means information proliferates and the web/market/search sorts it out.

And we kind of know what a safe-but-not-necessarily-effective market for drugs will look like: the supplement industry. Supplements are cheap, they vary in effectiveness on a per person basis, and they have undoubtedly produced some really great things (creatine, omega 3). Take a look at this awesome graphic:

http://www.informationisbeautiful.net/play/snake-oil-supplem...

The thing is, with centralized regulation for efficacy two things happen. First, many of the bubbles on that graph never appear in the first place. Second, because they never appear, they never accumulate enough evidence/market size to rise up the list. We are choking the channel if centralized regulators require our minimum viable products to be not just safe, but highly efficacious.

The best way to see this is that centralized regulation kills iteration. Talk to anyone in the drug space: they'd love to be able to change their dosing methodology (altering dosage amount, frequency, formulation) or otherwise take advantage of serendipitous post-market findings. Viagra, famously, was initially intended to treat high blood pressure[1].

But right now they can't even change the labels on their drugs without the FDA's approval, which is why the average layman gets a folded-up chemistry textbook[2] rather than a user-friendly instruction manual, let alone a website which totes up other people's experiences with the drug. To get a sense of how much that could contribute to the patient user experience, see Help Remedies[3], which can get away with better UI/UX because they're dealing in generics.

Anyway, on net, I think something like a pharmacogenomic erowid.org [4,5] is the best way to establish efficacy. That would be distributed, and the data would be public and constantly updated, with sample sizes far in excess of the current FDA process. Patients would get accounts and link their genomic information with the site after buying any new drug, and input their own survey data in order to see other people's (aggregated, anonymized) experiences. This would mean that you could launch safe drugs of unproven efficacy, and then collect efficacy data at a far larger scale than we do today. But this kind of innovation will only be possible in a jurisdiction out from under the FDA's thumb.

[1] http://www.mc.vanderbilt.edu/lens/article/?id=116

[2] http://dailymed.nlm.nih.gov/

[3] http://www.helpineedhelp.com

[4] http://www.pharmgkb.org/

[5] http://www.erowid.org/
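To make the aggregation idea concrete, here is a minimal sketch of the kind of query such a site might run: pooling self-reported outcomes by a shared genotype marker. All names, markers, and scores below are hypothetical illustrations, and a real system would need far more careful statistics, consent handling, and anonymization:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical self-reported outcomes:
# (genotype marker, drug, patient-reported efficacy score 1-5).
reports = [
    ("CYP2D6*1", "drug_x", 4), ("CYP2D6*1", "drug_x", 5),
    ("CYP2D6*4", "drug_x", 1), ("CYP2D6*4", "drug_x", 2),
    ("CYP2D6*1", "drug_x", 4),
]

# Aggregate anonymously: a new patient sees only the count and
# average reported efficacy among people sharing their marker,
# never individual reports.
by_genotype = defaultdict(list)
for genotype, drug, score in reports:
    by_genotype[(genotype, drug)].append(score)

summary = {key: (len(scores), round(mean(scores), 2))
           for key, scores in by_genotype.items()}
```

Under these made-up numbers, carriers of one marker report the drug works well while carriers of another report it doesn't, which is exactly the per-genotype signal a one-size-fits-all efficacy trial averages away.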


To take the internet domain registration metaphor further, it also requires a centralized value authority (Google) in order to be navigable. In some sense, Google's primary task is spam filtering---analogous to efficacy guarantees---which enables efficient information gathering.

I don't argue that the FDA is an efficient structure for doing efficacy testing, I just think punting the value discovery/marketing process to vague distributed processes isn't a good answer.

I think the supplement market is a great example as well. Many low-value treatments saturate the market, and the responsibility for making decisions is democratized and difficult. Canonical sources of efficacy information might not be needed as barriers to entry, but reputation, trust, and canonization are valuable heuristics in decision-making processes, and this leads to power.

If Google doesn't link you, you die.


> Two years from idea to animal trials to safety trials (self-experimentation) to human trials to Nobel Prize. That was when pharma moved at the speed of software; that is what a landscape free for innovation can produce.

Well, we'd get new treatments a decade faster, but a lot of these treatments would not work and/or would kill people. But, as I said above, I don't think this would dramatically increase the speed of innovation, except for diseases where we don't have effective animal models. It's faster to run experiments on animals than on people. For these diseases, removing regulations would let people try treatments that worked in animals on humans faster. But the problem is really that there are many diseases we can't treat effectively in any organism, and letting people try any treatment they want in humans isn't going to fix this.

I think you are vastly overestimating what society has to gain by deregulating medicine. You'll get a one-time gain of 10 years of progress at the cost of an unknown number of lives.


First off, I am happy that we both seem to agree on a qualitative fact: there is indeed a tradeoff between what statisticians call type I and type II errors. At one extreme, you can let everything through, advance technology rapidly, and suffer some side effects (a bias toward type I errors: approving harmful treatments). Or you can block everything, stop technology, and suffer no side effects (a bias toward type II errors: rejecting beneficial ones). If we agree on this qualitative point, the key question is whether we are currently at a Pareto optimum. Is our current system optimizing the type I vs. type II tradeoff? I have a numerical scenario below which you can critique, but first to your points.

  It's faster to run experiments on animals than people. 
I'm not gainsaying the utility of animal models. I just think the goal needs to be to get to humans as soon as the safety data is in, because people are dying.

  I think you are vastly overestimating what society has to 
  gain by deregulating medicine. You'll get a one-time gain 
  of 10 years of progress at the cost of an unknown number of 
  lives.
Well, the reason sulfanilamide/thalidomide were heavily covered in 1938/1962 respectively was that those were relatively rare events. So I would somewhat disagree that the number of lives would be unknown. But, ok, let's take as a given that some would die. On the other side of the ledger, we both agree that tens of millions of people each year are dying from cancer and heart disease. So let's consider two scenarios for a cure for condition X, which kills 1 million people per year.

In scenario I, we do it status quo and safe, with no deaths. Very generously, let us grant that a cure appears in 10 years. This is generous because a regulated market may never iterate upon the cure if it is radical/different (e.g. Barry Marshall and H. pylori).

In scenario II, we accelerate the cure in a deregulated market. The R&D phase takes 1 year and costs us 100 deaths from test pilots / early adopters; the scaling phase takes 2 years and costs us another 900 deaths from volunteers. These numbers are vastly in excess of any reasonable safety testing paradigm in a deregulated space (no one died in Banting & Best's experiments) and I cite them as extremely conservative upper bounds.

Ok. Then in scenario I, the status quo, you had

  - 0 die from testing
  - cure appears at end of 10 years
  - 10 million people die over those 10 years
  - 10 million deaths
In scenario II, you had

  - 1000 die from testing over 3 years
  - 3 million die from disease over those years
  - cure appears in year 3
  - no further deaths
  - 3 million + 1000 total deaths
So scenario II saves ~7 million lives. Feel free to play with the numbers, but that's the kind of calculus I think we need to engage in, one that explicitly reckons with the cost of delay. In reality, the number of deaths attributable to R&D won't be close to 1000, though it won't be zero. But there is no reasonable scenario in which R&D actually consumes anything close to as many lives as the disease itself.
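The calculus above can be parametrized so the numbers are easy to play with. Everything here uses the illustrative assumptions from the two scenarios (1 million deaths/year, a 10-year vs. 3-year timeline, 1,000 testing deaths), not empirical estimates:

```python
def total_deaths(years_to_cure, annual_disease_deaths, testing_deaths):
    """Total deaths until a cure arrives: disease deaths accrue
    every year until the cure, plus deaths attributable to testing."""
    return years_to_cure * annual_disease_deaths + testing_deaths

# Scenario I: status quo. Cure in 10 years, zero testing deaths.
status_quo = total_deaths(10, 1_000_000, 0)       # 10,000,000

# Scenario II: deregulated. Cure in 3 years, 1,000 testing deaths
# (the deliberately pessimistic upper bound from above).
deregulated = total_deaths(3, 1_000_000, 1_000)   # 3,001,000

lives_saved = status_quo - deregulated            # ~7 million
```

Adjusting the parameters (a slower deregulated cure, far more testing deaths) shifts the totals, but the disease term dominates the testing term by orders of magnitude across any plausible range, which is the point of the argument.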



