
Sure, when we've got something approaching AGI that has in-depth knowledge of a field of research and some "intuition" from experience, then yeah why not.



You don't need AGI. GPT-4's intelligence is good enough. The problem is context window size.

Heck, for a lot of things you don't even need ML at all. Regular expressions are good enough:

https://www.irit.fr/~Guillaume.Cabanac/problematic-paper-scr...
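The screener approach really can be that simple. As a hedged sketch (these four phrases are documented "tortured phrases" — machine-paraphrased standard terms — but the list and function here are illustrative, not the screener's actual implementation):

```python
import re

# A few documented "tortured phrases" and the standard terms they
# garble. Illustrative sample only, not the real screener's list.
TORTURED_PHRASES = [
    r"counterfeit consciousness",  # artificial intelligence
    r"profound learning",          # deep learning
    r"colossal information",       # big data
    r"irregular timberland",       # random forest
]
PATTERN = re.compile("|".join(TORTURED_PHRASES), re.IGNORECASE)

def flag_tortured_phrases(text: str) -> list[str]:
    """Return every suspicious phrase found in the text."""
    return PATTERN.findall(text)
```

Any paper that "applies profound learning to colossal information" was almost certainly machine-paraphrased, and no intelligence beyond pattern matching is needed to flag it.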

Inability to replicate is a very general and vague way to phrase the problem, because it implies that non-replicability is a sort of abstract issue that just randomly emerges. But in practice, papers that don't replicate fail to do so for a reason, and often those reasons can be identified in advance given just the paper.

For a trivial example of this see the GRIM and SPRITE programs. If the numbers in a paper aren't even internally consistent, that's a good sign that something has gone wrong and the paper probably won't replicate.
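GRIM itself reduces to a few lines: a mean of n integer-valued responses (e.g. Likert scores), reported to two decimals, can only take values of the form total/n for some integer total. A minimal sketch (the published tool handles rounding conventions more carefully; Python's `round` uses round-half-even, whereas papers often round half up):

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM test: could `reported_mean`, rounded to `decimals` places,
    arise from n integer-valued data points?"""
    target = round(reported_mean, decimals)
    # Only integer totals near reported_mean * n could produce the mean.
    approx_total = int(reported_mean * n)
    for total in range(approx_total - 1, approx_total + 2):
        if total >= 0 and round(total / n, decimals) == target:
            return True
    return False
```

For example, a reported mean of 3.44 from n = 10 integer responses is impossible (any sum of 10 integers divided by 10 has at most one decimal place), so the check fails without ever seeing the raw data.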

For a less trivial example where you'd benefit from a tool-equipped LLM, consider asking a GPT-4-level AI to cross-check the claims in the abstract, body, and conclusion against the data tables. If the claims aren't consistent with each other, you can already assert the paper won't replicate, because it's not even clear which claim you'd be attempting to replicate in the first place.
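The cross-check is mostly prompt assembly. A hedged sketch, assuming the `openai` client library (the model name and prompt wording are illustrative, not a tested screening pipeline):

```python
def build_crosscheck_prompt(abstract: str, table_text: str) -> str:
    """Assemble a prompt asking a model to verify that numeric claims
    in the abstract are supported by the paper's data table."""
    return (
        "You are checking a paper for internal consistency.\n\n"
        "Abstract:\n" + abstract + "\n\n"
        "Data table:\n" + table_text + "\n\n"
        "List every numeric claim in the abstract and state whether the "
        "table supports it, contradicts it, or does not contain it."
    )

# Sending it (illustrative; requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user",
#                "content": build_crosscheck_prompt(abstract, table)}],
# )
```

The hard part isn't the model call but the context window: feeding in the full body and all tables of a long paper is exactly where current limits bite.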



