Hacker News
Benchmarking LLMs against human expert-curated biomedical knowledge graphs (sciencedirect.com)
41 points by Al0neStar 10 months ago | 5 comments



Academic writing 101: The abstract is NOT meant to be written as a cliff-hanger!


You won't believe what turns out to be all you need!


Since the abstract is a cliffhanger, here is a passage from the discussion that may help.

> In our case, the manual curation of a proportion of triples revealed that Sherpa was able to extract more triples categorized as correct or partially correct. However, when compared to the manually curated gold standard, the performance of all automated tools remains subpar.


I didn't see UMLS in the paper, but I've tried some of their human-created biomedical knowledge graphs, and they were too full of errors to be used. I imagine different ones have different levels of accuracy.


I was right: LLMs need two major components added before we can swan dive into the humanistic aspects of medicine/psychology/politics with a form of LLM:

1) a weighting of each statement by its probability of correctness, and

2) a citation for each source (a rough sketch of both follows below).
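A minimal sketch of what that might look like for extracted triples, in Python. The Triple record, its field names, and the filter_reliable helper are hypothetical illustrations of the idea, not anything from the paper or from an existing tool:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Triple:
        subject: str
        predicate: str
        obj: str
        confidence: float  # estimated probability the statement is correct (0.0-1.0)
        citation: str      # identifier of the source the statement was extracted from

    def filter_reliable(triples: List[Triple], threshold: float = 0.9) -> List[Triple]:
        # Keep only statements whose correctness weight clears the threshold.
        return [t for t in triples if t.confidence >= threshold]

    kg = [
        Triple("metformin", "treats", "type 2 diabetes", 0.97, "source-1"),
        Triple("metformin", "causes", "weight gain", 0.31, "source-2"),
    ]
    for t in filter_reliable(kg):
        print(f"{t.subject} -[{t.predicate}]-> {t.obj} (p={t.confidence}, cite={t.citation})")

The point is just that every statement carries its own weight and provenance, so a downstream consumer (or a human curator) can decide what to trust and trace where it came from.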



