I'm sorry, but what the fuck is this product pitch?
Anyone who's done any kind of substantial document research knows that it's a NIGHTMARE of chasing loose ends & citogenesis.
Trusting an LLM to critically evaluate every source and to be deeply suspicious of any unproven claim is a ridiculous thing to do. These are not rigorous reasoning systems, they are probabilistic language models.
o1 and o3 are definitely not your run-of-the-mill LLMs. I've had o1 correct my logic, and it had correct math to back up why I was wrong. I'm very skeptical, but I do think at some point AI is going to be able to do this sort of thing.