Hacker News
‘Why didn’t you think this baby was ill?’ Decision-making in acute paediatrics (bmj.com)
88 points by DanBC on Aug 28, 2018 | 72 comments



Emergency doctor here, I have to say that diagnosing a sick child is one of the hardest things you can do in medicine (aside from figuring out why someone is feeling 'dizzy'). The limiting step here is that we can't do blood tests and x-ray/CT on everyone that walks in the door (which itself can lead to harm: https://emergencymedicinecases.com/overinvestigation-emergen...)

I work in one of the busiest ERs in Canada, and maybe one child out of hundreds will have something hidden... Many illnesses, particularly meningitis, kill so rapidly that you may miss the early symptoms (which are often mild).

What many posters have alluded to as 'gut feeling' is well described in the medical education literature, where 'system 1' is fast pattern recognition based on experience and 'system 2' is deliberate, algorithmic reasoning. Many cognitive biases can affect both (described well in this paper: https://www.ncbi.nlm.nih.gov/m/pubmed/12915363/).

Your gut feeling can be triggered by anything from unusual vital signs, to the way a patient talks, to beads of sweat on their forehead, and a million other things that take years and years of training to catch.

Anyone who can build an ML model to catch these clues is going to make billions.


> Anyone who can build an ML model to catch these clues is going to make billions.

You'd be surprised. IBM poured billions into Watson and appears to have been pretty successful in nearly reaching parity with a certified oncologist, but the results were dismissed because it didn't outperform them.

> At first, Manipal used Watson to recommend treatment options for all cancer patients, said oncologist S.P. Somashekhar. It found the software agreed with doctors most of the time, so Manipal stopped using Watson on every patient, he said.

https://www.wsj.com/articles/ibm-bet-billions-that-watson-co...


That's not a great definition of parity. We'd want sensitivity and specificity numbers linked to outcomes, not concurrence. The cases where Watson agrees with doctors are effectively irrelevant: results would be the same whether or not it was added. We need to look at whether, given a disagreement, Watson was better or worse for outcomes.
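To make that concrete, here is a toy illustration (all numbers hypothetical, not from the WSJ piece) of why raw concordance is a poor parity metric: only the disagreement cases can change outcomes, so you stratify on agreement and score the disagreements separately.

```python
def evaluate(cases):
    """Each case is a tuple (doctor_choice, watson_choice, best_choice)."""
    agree = [c for c in cases if c[0] == c[1]]
    disagree = [c for c in cases if c[0] != c[1]]
    # In agreement cases the outcome is identical with or without Watson.
    watson_better = sum(1 for d, w, best in disagree if w == best)
    doctor_better = sum(1 for d, w, best in disagree if d == best)
    return len(agree), watson_better, doctor_better

# 90% concordance sounds like "near parity", yet Watson loses most
# of the cases that actually matter:
cases = ([("A", "A", "A")] * 90      # agreement
         + [("A", "B", "A")] * 8     # disagreement, doctor was right
         + [("A", "B", "B")] * 2)    # disagreement, Watson was right
print(evaluate(cases))  # -> (90, 2, 8)
```

With these made-up numbers, "agrees with doctors 90% of the time" coexists with "wrong in 8 of the 10 cases where it could have made a difference".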


The trick is whether the cases of disagreement were themselves predictable!

Given a fixed number of oncologists and deploying Watson only to support those oncologists, yes, you're correct that it's only useful if it outperforms them. But I think of it more like this: Watson is a single hive-mind team of a thousand doctors near the end of residency. They get most things right, and there are a few places where more experienced doctors will be better, but the hive mind operates at far greater scale.

You have one of two reactions to that: 1) hire fewer senior oncologists, have them focus on the more difficult cases, and leave the hive mind to deal with thousands of routine cases, or 2) ignore the hive mind until it's literally better than a typical senior oncologist.

The medical profession seems to repeat this cycle of "only full doctors can do anything because even seemingly routine cases might be hiding something more serious" to "maybe some routine things can be done by people with less training and full doctors should focus on the more difficult cases". See nurse practitioners, dental assistants, and, in my mind at least, where we are going with things like Watson.


I think part of the problem for Watson is that it needs someone to gather the data, which is usually a doctor. So if you're pairing each patient with an oncologist for intake anyway, it's not clear that "examination plus enter all data into Watson" is a benefit over "examination plus make a decision".

I guess the ideal outcome for Watson (if it doesn't outpace expert oncologists) would be something like "experienced nurse practitioner does an exam and enters data into Watson", or maybe even "special oncology-trained NP does an exam with Watson".

The other part I don't know is what oncology accuracy rates look like. If the reason to not majorly expand screening is cost and availability, Watson could be huge. If it's false positives from our existing rate, there's a lot less value.


Let's look specifically at mammograms. (Stats from CTFPHC, a division of PHAC, part of the Canadian government.)

https://canadiantaskforce.ca/tools-resources/breast-cancer-2...

Wider screening isn't great. Essentially, people with medical problems already self-select reasonably well, and wide early-screening initiatives for most cancers introduce as many or more false positives (that persist through follow-up screening!) as actual cases of cancer they catch. Moreover, it's not even clear that early screening is effective: the false negative rate is high enough that the overall incidence of advanced cancer is unchanged even with early screening. (Bleyer and Welch, NEJM, 2012)

Basically, we need better screening, not more screening at our current levels, and it's not clear whether Watson can provide that.


Precisely - since mammography is the standard "counterintuitive Bayes rule" primer, I remembered those numbers don't support wider screening.
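For anyone who hasn't seen the primer: here is a minimal Bayes sketch of why wide screening generates so many false positives. The three input numbers are illustrative round figures for screening mammography, assumptions for this sketch rather than values from the CTFPHC tables linked above.

```python
# Illustrative assumptions; the low prevalence dominates the result.
prevalence = 0.005            # ~5 in 1,000 screened women have cancer
sensitivity = 0.85            # P(positive test | cancer)
false_positive_rate = 0.07    # P(positive test | no cancer)

# Bayes' rule: positive predictive value of one screening test.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(cancer | positive test) ~= {ppv:.1%}")  # roughly 6%
```

Even with a decent test, the overwhelming majority of positives in a low-prevalence population are false, which is the counterintuitive part.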

I'm not discounting the possibility of a useful role for Watson in wider screening, but it's not clear to me where it would be. If it happens after any kind of extensive examination, doctor-hours are being committed regardless and there's little gain. If it happens at a population-level screen like mammograms and colonoscopies, "almost as good as oncologists" isn't enough to add any value.


> The times when Watson agrees with doctors is effectively irrelevant

If your question is "can Watson improve treatment over an oncologist?" (ignoring issues of expense and availability), then it doesn't matter.

When you're talking about how close it is to "parity", it matters a lot.

If Watson exactly matched the oncologist in all but one out of a million cases, and in that millionth case caused the patient to explode, that would be extremely impressive, and Watson would be extremely useful. It would be nonsense to exclude all the matching cases.


> When you're talking about how close it is to "parity", it matters a lot.

I guess the question there is how valuable 'close to parity' is. If you can treat more patients, or treat them faster, or even much cheaper, then close to parity is a big deal. If examining patients and avoiding destructive false positives are the limiting factors, it's not clear that parity-level decisions are a significant benefit.


I think better diagnostic testing is one way to solve this. Oncology in some ways is easy since the data input is usually a set of biomarkers particular to that tumour (e.g., estrogen receptor, HER2/neu). It's probably the most rapidly advancing field in medicine. Same with radiology and dermatology, where the input can be images, which can be standardized. Dealing with analog humans is another challenge entirely!

Better biomarkers for acute illnesses with fewer false positives would help you better test 'accuracy' versus an ML model.


A recent post here suggested Watson was recommending highly disadvantageous treatments and MDs became reluctant to continue because it wasn't working.


It is used to recommend treatment options, not to make diagnoses


Treatment oriented medicine is a problem in and of itself.

There is no treatment for medically diagnosed Celiac disease other than not eating wheat for the rest of your life; it's hard to profit off that in the medical biz. Therefore, there being no treatment, mental gymnastics to disprove anything treatable were mandatory across the several pediatricians and gastroenterologists and oncologists we talked to for a year or so WRT our son. Finally, after ruling out child abuse and cancer, almost out of desperation, they did a quick blood antibody test leading to a small-intestine villi biopsy, and they're like "sorry, it's not good news like treatable cancer, it's untreatable gluten intolerance". The poor little guy must have given blood for a hundred tests before the antibody test came back with higher levels than the doctor had ever seen in a result, and the biopsy confirmed it. Remove wheat from his diet, and in about a week he was the healthiest kid ever.

The insane part is that about one percent of the population is somewhere along the Celiac spectrum of symptoms; my kid is somewhere along the extreme edge, most not so bad. But docs will do anything to avoid a simple blood test / biopsy diagnosis because there's no profitable treatment or pill to push.

Somehow, almost accidentally or unintentionally, western medicine in the USA can provide good outcomes, but the average citizen playing the game doesn't understand that it's primarily a legal CYA game, secondarily a commissioned-salesman treatment game, also a money-making racket, and only as a distant minor goal about actually helping people live better lives. If people are helped by the medical establishment, it's almost by accident, as a side effect; helping people is definitely not the primary purpose of the establishment.


That’s just liability-related verbiage


>...Anyone who can build an ML model to catch these clues is going to make billions.

Yeah, you would think so... Unfortunately, in medicine, just because a computer program is better than a doctor at diagnosing a patient is no guarantee it will be used. The classic example here is the MYCIN expert system developed in the 1970s. MYCIN was shown to outperform infectious disease experts by 1979 in a blind test:

>... Eight independent evaluators with special expertise in the management of meningitis compared MYCIN's choice of antimicrobials with the choices of nine human prescribers for ten test cases of meningitis. MYCIN received an acceptability rating of 65% by the evaluators; the corresponding ratings for acceptability of the regimen prescribed by the five faculty specialists ranged from 42.5% to 62.5%. The system never failed to cover a treatable pathogen while demonstrating efficiency in minimizing the number of antimicrobials prescribed.

https://jamanetwork.com/journals/jama/article-abstract/36660...

https://en.wikipedia.org/wiki/Mycin

If MYCIN hadn't been rejected by the medical profession, I am sure by now that we would have developed software to assist Doctors in all areas of medicine...


Someone else in the comments made a good point about capturing the common words patients tell providers and being able to parse that. If you could transform what a patient is telling you from their words into a concise set of standardized symptoms, then you have essentially done what any other provider does when they obtain a focused history. That could allow a model to prompt the specific questions needed, or request the provider to look for pertinent physical exam findings.
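A hypothetical sketch of that idea: map a patient's free-text complaint onto a small set of standardized symptom codes. The vocabulary below is made up for illustration; a real system would use a clinical ontology and need proper clinical NLP (negation handling, synonyms, context), not bare keyword lookup.

```python
# Hand-rolled, hypothetical symptom vocabulary for illustration only.
SYMPTOM_KEYWORDS = {
    "chest_pain_crushing": ["crushing", "squeezing", "pressure"],
    "chest_pain_burning": ["burning", "heartburn"],
    "dyspnea": ["short of breath", "breathless", "can't breathe"],
}

def extract_symptoms(utterance):
    """Return the set of standardized codes whose keywords appear."""
    text = utterance.lower()
    return {code for code, words in SYMPTOM_KEYWORDS.items()
            if any(w in text for w in words)}

print(extract_symptoms("I have this crushing pain and I'm short of breath"))
# -> {'chest_pain_crushing', 'dyspnea'}
```

Once complaints are in this standardized form, a model could prompt for the follow-up questions or physical exam findings that distinguish the candidate diagnoses.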

On the other hand (maybe easier?), I think we could also improve the use of some of our less costly and non-harmful techniques, such as ultrasound, to improve our screening.


This hits close to home for me. My 3 year old daughter was recently diagnosed with a rare form of cancer (with a rather poor prognosis).

We brought her to pediatricians in two states; both thought it was nothing but "growing pains". Her limp got worse, and she was in incredible pain for days. She would develop fevers with no other symptoms.

She would mysteriously get better, then a month later it would happen again. Each time it got worse and her pain increased, the limping was more pronounced.

They misdiagnosed her and wanted us to see a specialist for it. We refused to wait a couple months to get an appointment with one and demanded to see someone, anyone else.

We eventually got to see a doctor (not a pediatrician) who at least did their due diligence and ordered proper tests. Unfortunately they found bone lesions. Upon closer inspection they found a primary tumor in her abdomen; the cancer had already spread to her bone marrow (hence the bone pain/lesions and limping).

She's doing pretty well now-- still a long way to go for treatment though.

All in all-- as a parent you need to follow your gut and not blindly trust everything doctors say. We lost a few months due to poor pediatricians who couldn't do their job (these were pediatricians at well-known places).


Not pediatrics, but I had hand issues and went to a handful of doctors over the course of two years who dismissed me after poking or prodding my hands a couple of times.

AFAICT there's no way to _know_ a doctor has experience and wisdom, but it's crazy that doctors won't even order relevant tests. It's not a matter of cost - all they need to do is ask first.

In the end I went to the Mayo clinic in Minnesota where they performed a thorough battery of tests and gave me a proper diagnosis within a couple days. If you're at your wits end, at least I can recommend them as being diligent.


My GF has been through the same mill, over the span of seven (or more) years, before finally getting a diagnosis. Multiple ones, actually.

Her abdominal pain issues and discomfort were blamed on period pains first, then IBS (because her mother has that too), until finally, after she herself asked for it, she was diagnosed with endometriosis (and an ultrasound revealed a 10cm cyst).

Likewise, she's struggled with being punished for being a naughty child during her upbringing, school problems, and later depression and suicidal tendencies when put on medication for either depression or birth control. It was only once her son was diagnosed with ADHD that she recognised those symptoms and got herself a similar diagnosis. Both are on proper medication for that now and it's a huge improvement for both.

TL;DR there's still a lot of ailments that aren't easy to diagnose.


Your girlfriend also made the "mistake" of being a woman, and I wish I were kidding about that.

The fact that "borderline histrionic" is diagnosed much more often in women echoes back to the Victorian diagnosis of "hysteria", and pain reported by women is often discounted.

And unfortunately for her, ADHD also falls into the category of doctors ignoring women. It's slowly getting better, but essentially, all diagnostic tools were tailored to boys, and the first studies for girls' symptoms were beginning to come out in the early 2000s.[1] If you read that article, you'll also find that her way to discover ADHD is common for women - you find out because your child is diagnosed.

It's not only that ailments are hard to diagnose, it's that most medical research is tailored towards white males. (And most experiments are geared towards the college age group, due to ready availability).

The further you're outside of that group, the more work you'll have to do on your own.

[1] http://www.apa.org/monitor/feb03/adhd.aspx


First, I'm sorry to hear about your daughter's diagnosis. I'm also sorry you and your daughter had to go through that experience in the first place. If you're comfortable, I'd love to hear more about your experience.

For example, what were your expectations going into the appointments with the first two pediatricians? What were you hoping they would do that they didn't? What did they miss that the eventual doctor got right? Did you and your daughter feel heard or ignored by all of them?


I think another solution is making doctors work more off of checklists and less off of their gut.

When I was a baby I also had some weird symptoms, and my mother took me to see several pediatricians who couldn't identify the problem. Eventually one evening my mother ended up in the emergency room with me and got to see a really nervous young doctor who, as my mother describes it, acted like we were the first patients he'd ever seen on his own. Anyway, this 'kid' literally had a folder with lecture notes on his desk, worked through them step by step, ordered the right tests, and nailed the diagnosis on his first try (which fortunately turned out to be nothing serious).

So as much as it's vital for parents to follow their gut, I feel like doctors often just follow their gut far too much and need to take a more rigorous approach to diagnostics.


Similar situation, our sick child was passed off by initial doctors as having something minor. We were fortunate to try a different ER where a nervous young doc took a more skeptical approach and ran down the checklist fully before correctly diagnosing with bacterial meningitis. The doctor had never even seen a meningitis case before.

Per discussions with the infectious disease specialists this timely diagnosis likely saved our child's life.


Let us reflect that medical practice is one of the textbook examples of people making bad decisions due to Simpson's paradox; that the sheer number of highly opinionated people who are confident and wrong is staggering (take nutritional information, for example); and that the risk of being embroiled in a good game of political or legal football is high.

On top of that, this is a field that historically has had poor exposure to good engineering practice or mathematics. A field where a researcher can have both the intelligence and the lack of exposure needed to plausibly rediscover integration without being clued in by anyone in the know [1].

There is almost certainly more to be gained by working with evidence and solid statistical practice than by gut and guessing. It is hard enough just keeping everyone focused on the measurable evidence and weeding out stats mistakes without pretending that doctors should somehow diagnose based on their sixth sense. That isn't a healthy expectation for any party, including the doctor.

Efforts should be made to figure out what variables they are actually using to make decisions, whether those variables are actually useful predictors, and then how to systematise them. It is tautologically impossible that an experienced clinician is detecting undetectable variables in their decision-making process.

[1] https://academia.stackexchange.com/questions/9602/rediscover...


"the risk of being embroiled in a good game of ... legal football is high."

That's my personal experience as a parent, unless the pediatrician can absolutely rule out all possibility of any theoretical disease then all conversations with the pediatrician's office always end with an official recommendation to CYA by spending a day at the ER.

Two anecdotal heat exhaustion cases:

Me, like a quarter century ago, in the Army, after setting a new personal record on a daytime, hot, windless, humid summer PT test while stationed temporarily in the deep deep south: typical heat exhaustion, no energy, puking, dizzy, gray world, squadmates turned me in for "wobbling on my feet too much". PS and medic look at me: "well, you're still sweating, so..." They dump a canteen of water on my head and torso, force me to drink a canteen of water, wait a couple of minutes to see I don't puke, then have another soldier escort me to sit in front of an air conditioner, with him ordered to watch me drink another canteen of water for an hour. Treatment cost: approximately five minutes of medic labor, one hour of chaperone soldier labor to observe me while I recovered, and three canteens of drinking water.

Preteen daughter gets heat exhaustion on the hottest, most humid day of the year after a long day working hard outdoors despite being well hydrated: no energy, puking, dizzy, but still conscious and sweating. Wife gets nervous, resulting in a call to the pediatrician, resulting in spending the rest of the day and most of the night in the ER running EKG heart tests, blood tests, and an MRI of the skull looking for tumors. She wasn't even dehydrated enough to need an IV, so they actually treated nothing, merely ran immense amounts of diagnostic screening to rule out every possible ailment that could cause dizziness (which seems to be practically everything). Treatment cost in the five digits, paid by everyone else via insurance premiums. But at least the pediatrician was covered from a legal standpoint.

To some extent the real cost of "malpractice insurance" isn't directly paid as "malpractice insurance" but is funded by immense insurance premiums.

I looked it up for fun and the odds of a teen girl being diagnosed with a brain tumor is approximately one in ten thousand per year; the odds of a teen girl getting dizzy on the hottest sweatiest day of the year after working hard outdoors is somewhat higher, but legal CYA is very expensive.
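A back-of-the-envelope version of that comparison, using the one-in-ten-thousand annual figure above; the other three numbers are clearly hypothetical assumptions made up for illustration:

```python
# Annual base rate cited above; everything else is an assumed placeholder.
p_tumor = 1 / 10_000          # teen girl diagnosed with brain tumor, per year
p_heat = 1 / 50               # assumed: heat illness after hard hot-day work
p_dizzy_given_tumor = 0.5     # assumed
p_dizzy_given_heat = 0.9      # assumed

# Ratio of the two joint probabilities for explaining the dizziness.
odds_ratio = (p_heat * p_dizzy_given_heat) / (p_tumor * p_dizzy_given_tumor)
print(f"Heat exhaustion is ~{odds_ratio:.0f}x the more likely explanation")
```

Even with generous numbers for the rare cause, the mundane explanation wins by a couple of orders of magnitude, which is the base-rate argument against reflexive full workups.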


As an anecdote, I recently went to the hospital suspecting a kidney infection. I had all the signs (pain on both sides under the skin, smelly urine, a bit of fever that came and went), so I told the nurse I was pretty certain I had a kidney infection.

When the doctor saw me, she took my urine only and when it came back "perfect" she said I definitely did not have a kidney infection, that my pain was probably mechanical.

Two nights later and I couldn't move or think from the fever, the pain in my kidneys was so severe I could barely breathe, and I was fearing I was on the fast-track to sepsis (which I've had before).

Apparently the tech testing my urine sample fucked up, which is why the results came back perfect. The strange thing is, the doctor herself said she was surprised it was perfect, but took it at face value despite my insistence that I definitely had a kidney infection, especially considering I have a history of such infections. Also, when she said my pain was most likely "mechanical", I told her I hadn't done any intense activity lately, and she said I probably slept wrong, ignoring the fact I said I'd had a slight fever-- when they tested me at the hospital I had no fever.

It was a really weird experience. I understand skepticism against a patient's self-diagnosis, but I was claiming a kidney infection-- this is a common diagnosis for women with my symptoms and I've had many kidney infections before so I know what it feels like. Still, the doctor trusted that her three quick tests were perfect (urine sample, temperature, mechanical inspection) and her mis-diagnosis could have caused me permanent damage (kidney scarring or sepsis).


> Physiological values are complex in paediatrics. They are more than complicated because although there are published ‘normal values’ which will guide the clinician in knowing whether a child is tachycardic or tachypnoeic, these are in fact reference ranges that were initially based on expert opinion. Although meta-analysis shows these reference ranges to be at least somewhat valid, it is probably safest not to think of them as ‘normal ranges’ at all since much of the data includes children in abnormal circumstances. Indeed, the clinician who relies on these values will by definition be unlikely to have a ‘normal’ child in front of them.

Relatedly, I'd imagine that (akin to the story about choosing which parts of a fighter plane to armour) the worst health emergencies, especially in children, are the ones where the child does not make it to the ER alive, or cannot be resuscitated, or deteriorates rapidly and uncontrollably before a diagnosis can be made.


Yes, but I assume that all children who die in e.g. the States will automatically get an autopsy and a cause of death will be determined, no?

(Note: haven't read the article, so I'm not sure if I'm contradicting anything here or just agreeing with it!)


No, generally the parents have to request an autopsy. The provider can recommend or request one, but ultimately that decision lies with the parents.


As far as I know autopsies are never automatic in the US.


The title was too long, so I took off the quote marks.

'So why didn’t you think this baby was ill?' Decision-making in acute paediatrics

I'm submitting this because people on HN are interested in cognitive bias, and this paper discusses biases in the context of healthcare.


There is probably something wrong with the way medicine is being taught. Unlike physics, medicine might still be a largely empirical occupation, devoid of rigorous experimentation and evidence-based science.

It is slowly changing. For many years, the cause of peptic ulcers was thought to be stress, with countless doctors repeating this mantra to patients. It took a heroic act by Barry Marshall [1], who conducted an experiment on himself, to demonstrate that it was, indeed, false, and identify the root cause (Helicobacter pylori infection).

I wish we approached medicine more like physics and less as a closed and secretive (and lucrative) skill.

[1] https://en.wikipedia.org/wiki/Barry_Marshall


Much attention (because Taleb) is given to "having skin in the game" - and rightly so. It's an important and actionable heuristic.

Also of tremendous import is "gut feeling" which is called out specifically in the article:

"Gut feeling has been defined as an intuitive feeling that something was wrong even if the clinician was unsure why."

The disorganized, indescribable, cumulative knowledge of a human practitioner who has seen thousands of cases is so valuable that it should be given its own particular nomenclature and afforded tremendous esteem - even more so than the decades of formal schooling.


I don't deny that Taleb brings some great insights, but for the phenomenon you describe specifically, a more relevant book could be Kahneman's "Thinking, Fast and Slow".

The "gut feeling" is framed as the result of expertise plus many hours of practice with short feedback loops. Intuition is very valuable and esteemed, much like a chess grandmaster can say "mate in 4" just by glancing at a game, where you would need many hours of thinking to arrive at, at best, a similar conclusion.

Thinking slow is where we start when we learn a skill (from learning to count as toddlers, to learning to code, play piano or chess, or practice medicine). It is a conscious, slow, energy-hungry process that leaves us tired. After many hours of practice and, importantly, short feedback loops to feed our internal pattern-matching machinery, we start to "think fast".

Thinking fast is quick and easy, and not easily described in words, as a lot of it happens below consciousness. It is very efficient but also error-prone, as it is based on heuristics. A lot of cognitive biases come from it.


> The disorganized, indescribable, cumulative knowledge of a human practitioner who has seen thousands of cases is so valuable that it should be given very particular nomenclature and afforded tremendous esteem

“Clinician gestalt.” Non-inferior to almost every clinical decision rule we’ve tested, or otherwise incorporated therein (eg, Wells’ “most likely diagnosis” criterion).


I work with a lot of physician colleagues (as well as veterinarians) and one of the things I wish we could capture was what words someone uses to trigger those gut feelings. The way a patient describes their pain, how we try to wrap our language around complex disease concepts...


If you are focusing on words, you are missing the big picture. 'Gut feelings' are what you get when your brain processes a boatload of information in a fuzzy way. This includes subtle smells, colors, movements, timing, shapes, sounds, chains of events, and how they interact with each other.

Words can be a tiny part of it, but usually gut feelings are about all those things you can't process as easily with rational thinking, and so rely on a different, less precise and more general, method of analysis. This does not play well with language, which is very accurate and precise, very intellectual.

It's why we can easily walk, but have a hard time describing how we walk.

A commenter on HN talked about the book "The Inner Game of Tennis" not so long ago. I highly recommend it as a gentle introduction to this part of us, especially on this site, where a lot of us are geeks more used to leveraging their rational thinking than their feelings.

Last year, many commenters talked about meditation. While I do recommend the practice, starting from the sports point of view is much easier to swallow and makes a better starting point for people with a strong affinity for precision and step-by-step logic.


The gut feelings a health care professional gets will vary based on the language the patient uses.

The gut feelings a doctor has to a patient saying "I have crushing chest pain" will be different to the patient saying "I have burning chest pain".


It is interesting to talk with a family member who is a physician mainly for male veterans of the armed forces. He's got roughly the same training as someone who would treat families, but because he's worked with this older/sicker/male/poorer population with particular shared experiences, he's really developed an ear for what these guys are saying and not saying. It's a whole set of para-medical skills that do not transfer to children, women, guys who come from a very different demographic or cultural profile.

So I sort of agree that gut feeling varies based on the language a patient will use, but knowing your patient population well can help you tune in beyond the words.


Your example perhaps doesn’t lend itself to a discussion of “gut feelings”, since it describes different symptoms quite specifically. In the absence of other information, the first sounds like a heart attack, the second, heart-burn (acid reflux).

Gut feeling relies on more abstract concepts, particularly in the context of pediatrics where a child may be unable to verbalise their symptoms.


I agree the specific example is somewhat explicit, but the point being made is valid. Whether it's a subtle choice of words, lack of eye contact, or constant fidgeting, they can all carry some additional meaning (or none!).

One example that I've heard is common is about reading body language that might indicate discomfort in talking about a subject, which might suggest under/mis-reporting of an issue.


> The gut feelings a doctor has to a patient saying "I have crushing chest pain" will be different to the patient saying "I have burning chest pain".

Those are quite direct descriptions of symptoms.


The gut feelings a doctor has will be affected by words used by the patient or their relatives.


I chose words for two reasons:

- First, they're super-hard to capture. "What did a patient say to me?" doesn't end up on medical record systems.

- Second, that's what they often describe to me. That the patient was describing something, and they weren't sure why, but it made them worried.


As a non-medical professional, I can't even imagine the difficulty of trying to diagnose a severely sick child who is too young to speak or articulate symptoms/where it hurts... Scary.


Also, 'fever and a bit of a rash' are the symptoms of just about every childhood illness ever, and perfectly harmless 99% of the time and fatal within 24 hours 0.1% of the time (or something).

Every few years you get some indignant news story banging on about how some awful hospital sent a poor sick child (presenting with fever and a rash) home with some Panadol instead of ordering a full battery of tests, and then the child turned out to have meningitis or something.


I feel like if you want to call yourself a doctor you should be able to tell the difference. If you're only good for diagnosing simple and common maladies then you're likely to be replaced by an algorithm soon.

I may be biased. In my own experience, doctors largely just seem to exist to get you out of their office and billed as quickly as possible, dismissing anything you say that doesn't align with whatever theory they've latched on to.


One of the things my veterinarian colleagues talk about a lot is how do you make a diagnosis with "My chicken is acting funny" as the only information you have.


Being a vet is a tough job; they need to be able to diagnose problems with patients that can't speak and sometimes have very, very different anatomies. In farming communities in particular they can sometimes be critical to whether a livestock farmer succeeds or fails a season.


One of my favorite moments eavesdropping on some vet students taking their boards was listening to them complaining about the question involving ultrasounding a shark.


"My car's making a funny noise."

"My computer's acting weird."


"My chicken is fried."


But you can't take apart or reboot the chicken.


And this is why I write software for a living. Because "Take the chicken to pieces to find out why it's broken" makes sense to me, so I'd be a rubbish chicken doctor.


From what I understand, that actually is common for chickens because they're usually livestock rather than pets. If one chicken might be sick, you care more about whether it's contagious than the outcome to the one chicken, so you do the most thorough diagnostics you can even if that requires an autopsy.

Now dogs on the other hand...


I like working with vets for my stuff (epidemiology) because there are really interesting constraints.

As you mention, one chicken is expendable, but you're worried about tens of thousands of birds.

A sheep is...mostly expendable. You have a low, market driven cap on what you can spend.

A dog or cat, less so - there's an emotional connection, but there's also an upper threshold (with a wide range, and who has what threshold is often surprising).

A race horse...well, let's just say equine ICUs are really pretty nice, all things considered.


> But you can't take apart ... the chicken

Well, you can, it just won't "work" afterwards!


Are you sure you can't boot the chicken?


You forgot: "It's not working!"


If I may ask, what exactly is it that you're doing ?


I think this is directed at me. If not, I apologize. I work on developing computational models for hospital infection control - both in human and veterinary settings.


Yes, it was :)

Interesting, public or private? How closely do you work with practitioners? Is their knowledge of value in your job?

I ask because I happen to be a veterinarian who somehow ended up being a developer.


Public - I work for a state university in the vet. college. I sometimes get to work with practitioners, but I'll admit I have better contacts for that on the human side of the equation, though we're working to build it for vets as well.

I'm faculty, so the notion of value is sort of a funny one, and it's less obvious than it might be at medical schools, but it's getting there. The biggest constraint, as with everything in vetmed, is money.

I believe my email is in my profile - feel free to reach out if you want to chat more.


Cool! It's not actually there, which is why I asked here.


Apologies - I went on vacation right after you posted this. Eric <dot> Lofgren <at> wsu.edu if you're still reading this.


Vets can’t talk to their patients either. I would be curious of how their approach compares to pediatricians’.


I'd say we have 3 advantages over pediatricians.

First, we often deal with adult animals, which are a bit less likely to die quickly than infants, which gives us more time.

Second, we have the owner's wallet as a set limit. Sometimes you make do with what you're allowed, and you can only go so far with a limited panel of tests or when hospitalisation is refused.

Third, an animal's death is more acceptable than a human's, let alone a child's.


I thought that probabilistic decision trees had been around for a long time in other parts of medicine

e.g. http://www.seilevel.com/requirements/visual-models-save-live...

So I'm surprised there isn't a general one for paediatrics.


Medicine seriously needs a data-science approach.



can you elaborate on that? I'd love to hear your thoughts.


Well, it's quite simple.

Right now the quality of a diagnosis depends heavily on the experience of the doctor. One of the problems is that even an experienced doctor has little experience with rare conditions.

In this age of data-science, we can replace the doctor's diagnosis with a number of standard diagnostic tests (observations), which can be performed by a nurse and/or lab-worker. The diagnosis is then determined based on "correlation" with a data-set, collected over an entire population.

The best possible treatment plan can also be computed based on this diagnosis (which could be more refined than a doctor could possibly make, because it can work with probabilities instead of a single outcome).
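To make that concrete, here's a minimal sketch of the idea: each patient is a set of standardized test observations, and the "diagnosis" is a probability distribution computed by comparing against a labelled population dataset (a naive-Bayes-style calculation with Laplace smoothing). All conditions, tests, and records below are made up for illustration.

```python
from collections import Counter

# Hypothetical population dataset: (observations, confirmed diagnosis).
records = [
    ({"fever": 1, "rash": 1, "stiff_neck": 0}, "viral"),
    ({"fever": 1, "rash": 0, "stiff_neck": 0}, "viral"),
    ({"fever": 1, "rash": 1, "stiff_neck": 0}, "viral"),
    ({"fever": 1, "rash": 1, "stiff_neck": 1}, "meningitis"),
    ({"fever": 0, "rash": 1, "stiff_neck": 0}, "allergy"),
]

def diagnose(observations, records, alpha=1.0):
    """Return a probability per diagnosis, not a single verdict.

    Scores each candidate diagnosis by its prevalence in the dataset
    times how often patients with that diagnosis showed each observed
    test result (smoothed so unseen combinations get small, nonzero
    probability), then normalizes the scores to sum to 1.
    """
    priors = Counter(label for _, label in records)
    scores = {}
    for label, count in priors.items():
        subset = [obs for obs, lbl in records if lbl == label]
        score = count / len(records)
        for test, result in observations.items():
            matches = sum(1 for obs in subset if obs.get(test) == result)
            score *= (matches + alpha) / (len(subset) + 2 * alpha)
        scores[label] = score
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

probs = diagnose({"fever": 1, "rash": 1, "stiff_neck": 1}, records)
```

With this toy data, the fever + rash + stiff neck combination ranks "meningitis" highest even though it's rare in the dataset, which is the claimed advantage: rare conditions aren't drowned out the way they can be in an individual doctor's limited experience. Of course, the hard part in practice is the dataset and the feature definitions, not this arithmetic.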



