
An LLM trained on the complete corpus of 19th-century medical knowledge would be pretty useless (they had no model of infectious disease). It's a fair argument that the state of modern psychological knowledge is comparable to early 20th-century infectious disease knowledge (rather poor).

In terms of the training corpus, I wonder how that is managed. Training LLMs on out-of-date scientific papers doesn't seem like a good idea. It might make more sense to exclude primary research reports entirely and stick with current reviews and textbooks.
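That kind of curation could be sketched as a simple filter over document metadata. This is a hypothetical illustration, not how any real training pipeline works; the `Doc` type, the field names, and the cutoff year are all assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    kind: str   # e.g. "primary", "review", "textbook" (hypothetical labels)
    year: int

def curate(corpus, min_year=2015, allowed_kinds=("review", "textbook")):
    """Keep only recent secondary sources; drop primary research reports."""
    return [d for d in corpus
            if d.kind in allowed_kinds and d.year >= min_year]

corpus = [
    Doc("Early antibiotic trial report", "primary", 1952),
    Doc("Infectious disease review", "review", 2021),
    Doc("Clinical microbiology textbook", "textbook", 2019),
]
kept = curate(corpus)  # keeps the review and the textbook only
```

Of course, the hard part in practice is deciding what counts as "current" and "secondary," which is a judgment call this toy filter just takes as given.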




LLMs are great at producing "free undergrads." By that I mean it's now easy to train a model that can produce textbook answers to textbook questions, that is, well-defined solutions to well-defined problems. Modern LLMs will not be able to replace or augment physicians much, because so much of medicine comes down to understanding the patient's context.


LLMs understand context pretty well; that's their magic. One thing I've noticed is that they are much more thorough than a person: they won't forget the context in the next moment or the next month. A human doctor can do better, but they have to really care a lot to do better. Also, a human will only manage some things without attending to all the necessary tasks (for example, one usually overlooked task is communicating with caring words).


> they won't forget the context in the next moment or the next month.

GPT-4 has a context window of 32K tokens.
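A fixed context window means older conversation history has to be dropped once the budget is exceeded. Here is a minimal sketch of that truncation, assuming a crude one-token-per-word count (real tokenizers such as tiktoken count differently); the function name and the heuristic are both illustrative, not any vendor's actual API.

```python
def rough_token_count(text):
    # Crude heuristic: ~1 token per whitespace-separated word.
    # Real tokenizers split subwords and will give different counts.
    return len(text.split())

def fit_to_window(messages, max_tokens=32_000):
    """Keep the newest messages that fit within the token budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        n = rough_token_count(msg)
        if total + n > max_tokens:
            break  # oldest remaining messages are dropped
        kept.append(msg)
        total += n
    return list(reversed(kept))  # restore chronological order

history = ["first visit notes", "follow-up question", "latest symptoms"]
trimmed = fit_to_window(history, max_tokens=5)
```

With a 5-token budget, only the most recent messages survive, which is exactly the "forgetting" the parent comment is pointing at.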

> (for example one usually overlooked is communicating with caring words)

If you mean it keeps saying "Sorry" like GPT-4 does, then no, thanks; that's not caring.




