Hacker News

I don't just mean that, although that is part of the goal.

I want AIs to automatically find conflicting papers/hypotheses, and propose experiments that resolve the ambiguities.




But wouldn't a different data structure/database be better suited for this approach than LaTeX? I mean, you can still just babble but style it in LaTeX, and the AI would have to figure out that you're saying nothing. That would require true AI.

I mean, I don't know much about LaTeX, but I doubt there are elements for "Hypothesis", "Definition", "exact reference", etc. If you had those, described in a structured, simple language, then I guess it would be much easier for an AI to process that information, since the context is clear.
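For illustration, a minimal sketch of what such structured elements might look like as Python dataclasses; all the names here (Hypothesis, Definition, Reference) are hypothetical, not taken from LaTeX or any existing schema:

```python
from dataclasses import dataclass, field

@dataclass
class Reference:
    target_id: str            # id of the element being cited

@dataclass
class Definition:
    term: str
    text: str

@dataclass
class Hypothesis:
    text: str
    # explicit links to the definitions/results this hypothesis rests on
    references: list = field(default_factory=list)
```

With something like this, a tool never has to guess what is a claim and what is supporting context; the structure carries it.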


True. I work at Google now, and my advice would be to just write standard XHTML and let Google's parsers do their best job at inferring the meaning of the text.


But writing XHTML in plain text can be quite a pain ... you would at least need good tools to be efficient ...

Or something Python-like (also supported by an IDE):

    hypothesis:
        blablabla link:"link_to_Element_in_paper"
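A toy parser for that indented format could be quite small. This is only a sketch under the assumption that an unindented `name:` line opens a block and indented lines may carry `link:"..."` references; none of it is a standardized format:

```python
import re

def parse_paper(source: str) -> dict:
    """Parse the toy format: an unindented 'name:' header line,
    followed by indented body lines that may carry link:"..." refs."""
    elements = {}
    current = None
    for line in source.splitlines():
        if not line.strip():
            continue                               # skip blank lines
        if not line.startswith((" ", "\t")):       # header, e.g. hypothesis:
            current = line.strip().rstrip(":")
            elements.setdefault(current, [])
        elif current is not None:                  # indented body line
            links = re.findall(r'link:"([^"]+)"', line)
            elements[current].append({"text": line.strip(), "links": links})
    return elements

paper = 'hypothesis:\n    blablabla link:"link_to_Element_in_paper"'
```

Here `parse_paper(paper)` returns a dict mapping the element name `hypothesis` to its body lines, with each `link:"..."` target extracted, so cross-references become machine-readable instead of inferred from prose.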




