
LLMs are great at producing "free undergrads." By that I mean it's now easy to train a model that can produce textbook answers to textbook questions — that is, well-defined solutions to well-defined problems. Modern LLMs will not be able to replace or augment physicians much, because so much of medicine comes down to understanding the patient's context.



LLMs understand context pretty well; that's their magic. One thing I've noticed is that they are much more thorough than a person: they won't forget the context in the next moment or the next month. A human doctor can do better, but they have to really care a lot to do better. And even then, a doctor will only manage some things without attending to every necessary task (one usually overlooked example is communicating with caring words).


> they won't forget the context in the next moment or the next month.

GPT-4 has a context window of 32K tokens.
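The point about a finite context window can be made concrete with a minimal sketch. This is not how any real API works internally — `truncate_history` and the whitespace token count are illustrative assumptions — but it shows why a fixed budget forces the oldest conversation turns to be "forgotten":

```python
# Sketch of why a fixed context window implies forgetting: once the
# conversation exceeds the token budget, the oldest turns are dropped.
# Token counts here are a crude whitespace approximation, not a real
# tokenizer; 32K is the window size mentioned above.

def truncate_history(messages, max_tokens=32_000):
    """Keep the most recent messages whose combined (approximate)
    token count fits within max_tokens; older turns are discarded."""
    kept, total = [], 0
    for msg in reversed(messages):       # walk newest -> oldest
        n = len(msg.split())             # rough token estimate
        if total + n > max_tokens:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))          # restore chronological order

history = [f"visit note {i}: " + "word " * 10 for i in range(10)]
window = truncate_history(history, max_tokens=50)
# Only the most recent notes fit; earlier context falls out of the window.
```

So whether the model "remembers" a detail from last month depends entirely on whether it is re-supplied inside the window, not on anything the model retains on its own.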

> (for example one usually overlooked is communicating with caring words)

If you mean it keeps saying "Sorry" the way GPT-4 does — no thanks, that's not caring.




