
That would be a terrible idea.

Here is a simple example of why:

Finasteride is a compound that is used in two drugs. Proscar is used for prostate enlargement. It is old, out of patent, and has cheap generics. Propecia is used for hair loss. It is newer and was (at the time) very expensive. The only difference is that Propecia is a lower-dose formulation.

What people did was ask their doctors to prescribe generic Proscar, and then break the pills up to take for hair loss. Doctors would then justify the prescription by "diagnosing" an enlarged prostate. This would enter the patient's health records.

If you apply deep learning without being aware of this "trick", you would learn that a lot of young men have enlarged prostates, and that Proscar is an effective, well-tolerated treatment for it.
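
To make the failure mode concrete, here is a toy sketch (entirely synthetic records, made-up layout) of how a naive count over billing data would "discover" exactly that:

    # Synthetic billing records: (age, diagnosis, drug). The diagnosis
    # is what the reimbursement system stores, not what is true.
    from collections import Counter

    records = [
        (24, "enlarged prostate", "finasteride 5mg"),  # really: hair loss
        (27, "enlarged prostate", "finasteride 5mg"),  # really: hair loss
        (31, "enlarged prostate", "finasteride 5mg"),  # really: hair loss
        (68, "enlarged prostate", "finasteride 5mg"),  # genuine BPH
        (72, "enlarged prostate", "finasteride 5mg"),  # genuine BPH
    ]

    young = [r for r in records if r[0] < 40]
    print(Counter(dx for _, dx, _ in young))
    # Counter({'enlarged prostate': 3}) -- the model has no way to know
    # the diagnosis is a billing artifact, not a clinical finding.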

Health records are often political-economic documents rather than medical.




This is a really dumb way for doctors to prescribe finasteride, by the way.

If we assume that the reason patients get finasteride this way is to get insurance to cover it, and the doctor has no evidence that the patient actually has an enlarged prostate (and doesn't document a physical exam confirming one), it's basically insurance fraud.

If this is simply a way to get the patient pills via prescription regardless of insurance coverage (you said yourself that it's cheap, and hair loss is cosmetic and therefore shouldn't even be covered by insurance), it's even more stupid, because doctors are well within their rights to prescribe or administer medications that are FDA-approved for one condition "off label".


Different countries have different health systems and doctors face different constraints. This was in Israel in the early 2000s.

In this case it was done just to save the doctor the hassle of arguing with the higher-ups. It's been a while, but if I recall correctly, Merck was aggressively enforcing its patent rights. It was even raiding compounding pharmacies that sold lower-dose formulations of finasteride instead of the brand-name Propecia.

https://news.walla.co.il/item/932056 (Google Translate does an almost reasonable job)


I think this article actually agrees with you. The very first NOTE is this:

""" Note: Be cautious about using data that was primarily created for insurance purposes. Often, it's not truly reflective of patient's condition but rather encompassing for billing / profit. Luckily, there are clinical reports, like radiology, diagnostic imaging, pathology reports, etc., that are intended for physician use and are more reflective of true patient conditions. Unfortunately, most of this data is not readily available in APIs because it's largely unstructured. This is a ripe space for ML to take raw, unstructured data and produce structured, computable data. """

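As a toy illustration of the unstructured-to-structured step that note describes (the report text and field names here are invented, and real systems use clinical NLP models rather than regexes):

    # Toy extraction of structured fields from a free-text report.
    import re

    report = """
    FINDINGS: Prostate volume estimated at 52 mL, consistent with
    benign prostatic hyperplasia. No suspicious lesions identified.
    """

    structured = {"prostate_volume_ml": None, "bph_mentioned": False}

    m = re.search(r"[Pp]rostate volume estimated at (\d+(?:\.\d+)?)\s*mL", report)
    if m:
        structured["prostate_volume_ml"] = float(m.group(1))
    structured["bph_mentioned"] = bool(
        re.search(r"benign prostatic hyperplasia|BPH", report))

    print(structured)
    # {'prostate_volume_ml': 52.0, 'bph_mentioned': True}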

Insurance is just one part of the problem.

Large practices have treatment standards on which physicians are evaluated. Reporting side effects might be politically inconvenient in some cases. Medicine is also, like other human endeavors, subject to fashions and fads.

At best, applying machine learning to health records will generate a hypothesis that must be checked in a properly controlled trial.


So it’s not a “terrible idea”?


Many organizations have been attempting to use AI to extract coded concepts for years. Even the best systems have a relatively high error rate, so if you want to use the output for anything important you still need a trained human medical coder to fix the errors. But that AI rough draft does have value since correcting its errors is still faster than entering all the codes manually.
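
A minimal sketch of that rough-draft-plus-review workflow (the keyword rules, ICD-10 picks, and example note are all invented; a real system would use a trained concept-extraction model):

    # The model proposes codes; a human coder reviews and corrects.
    def ai_suggest_codes(note: str) -> list[str]:
        suggestions = []
        if "enlarged prostate" in note.lower():
            suggestions.append("N40.0")  # ICD-10: BPH without LUTS
        if "hair loss" in note.lower():
            suggestions.append("L64.9")  # ICD-10: androgenic alopecia
        return suggestions

    draft = ai_suggest_codes("Patient reports hair loss; requests finasteride.")
    print(draft)  # ['L64.9']
    # The coder then deletes wrong codes and adds missed ones; editing a
    # mostly-correct draft is faster than coding the note from scratch.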


Doctors don't have to make fake diagnoses to justify off-label prescriptions. It would expose them to liability, and it's not required. Do you have any reasonable citations to support this?


Agreed, would love to see some cites.

I work with insurance claims and post-marketing adverse event reporting data for pharma drugs, and I don't recall seeing anything resembling what OP described when looking at off-label usage, but it's been a while. (note: this is not EHR data, but it relates)

Anecdotally, I've had a few off-label Rx's that weren't attached to any Dx code in my EHR; same with my wife.

* edit: said 'you' vs. OP


OK, I looked into it more. It's more about reducing the time doctors spend appealing health insurance non-payment (i.e., if the insurer doesn't allow prescribing medicine X for diagnosis code Y, the doctor would have to write up an appeals form).


>Health records are often political-economic documents rather than medical.

Wow!!! Great observation.

One thing though: if ML/AI could detect when records are political-economic versus when they are medical, then maybe you could still apply ML/AI techniques to the genuinely medical ones.


There is no ultimate source of truth, so I don't see how AI/ML could make that distinction.


Yeah, I agree. This is another example of our own biases/errors being injected into the data, thus poisoning any models we try to build with it. On the surface it seems unlikely there's much of a way to compensate for that, and if there were, it would probably have to be tailored to every specific type of bias/error.



