Hacker News

What he/she is saying is that the pace of research and publishing is such that it's impossible for researchers and academics to stay current the old-fashioned way (reading, writing, and attending conferences). It would be beneficial to have an intelligent bot that could be trained to browse for content of interest, as a mechanism to augment an individual human's capacity to do this manually.



I don't just mean that, although that is part of the goal.

I want AIs to automatically find conflicting papers/hypotheses, and propose experiments that resolve the ambiguities.


But wouldn't a different data structure/database be better suited for this than LaTeX? I mean, you can still just babble and style it in LaTeX ... and the AI would have to figure out that you're saying nothing. That would require true AI.

I mean, I don't know much about LaTeX, but I doubt there are elements for "hypothesis", "definition", "exact reference", etc. If you had those, described in a structured, simple language, then I guess it would be much easier for an AI to process that information, since the context would be clear.
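For what it's worth, standard LaTeX can declare environments like that: the amsthm package provides \newtheorem, and \label/\ref give exact cross-references. A minimal sketch (environment names chosen for this example):

```latex
% amsthm lets a document declare semantic block environments;
% \label/\ref then provide exact, machine-checkable references.
\documentclass{article}
\usepackage{amsthm}
\newtheorem{hypothesis}{Hypothesis}
\newtheorem{definition}{Definition}
\begin{document}
\begin{hypothesis}\label{hyp:main}
Text of the hypothesis goes here.
\end{hypothesis}
See Hypothesis~\ref{hyp:main} for the claim being tested.
\end{document}
```

Whether authors actually use such structure consistently is, of course, a separate problem.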


True. I work at Google now, and my advice would be to just write standard XHTML and let Google's parsers do their best job at inferring the meaning of the text.
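A minimal sketch of that idea (all class names here are invented, nothing Google-specific): if a paper marked its statements with semantic classes in XHTML, even a simple parser from the Python standard library could pull them out without any "true AI".

```python
# Sketch: extract semantically tagged statements from XHTML.
# The class names ("hypothesis", "definition") are hypothetical
# conventions, not an existing standard.
from html.parser import HTMLParser

class StatementExtractor(HTMLParser):
    """Collect the first text chunk inside elements whose class
    attribute is one of the recognized statement kinds."""
    KINDS = {"hypothesis", "definition"}

    def __init__(self):
        super().__init__()
        self.statements = []
        self._kind = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in self.KINDS:
            self._kind = cls

    def handle_data(self, data):
        if self._kind and data.strip():
            self.statements.append((self._kind, data.strip()))
            self._kind = None

page = '<div><p class="hypothesis">X causes Y.</p><p>Background text.</p></div>'
parser = StatementExtractor()
parser.feed(page)
# parser.statements now holds the tagged hypothesis, not the background prose.
```

The point is only that explicit markup moves the hard work from inference to plain parsing.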


But writing XHTML in plain text can be quite a pain ... you would at least need good tools to be efficient ...

Or something Python-like (also supported by an IDE):

  hypothesis:
      blablabla link:"link_to_Element_in_paper"
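A rough sketch of how such a Python-like format could be parsed. The syntax (a `keyword:` header, an indented body, and `link:"..."` cross-references) just follows the comment above; everything here is hypothetical.

```python
# Parse "keyword:" headers with indented bodies into simple dicts,
# pulling out link:"..." cross-references as we go.
def parse_blocks(text):
    blocks = []
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" ") and line.rstrip().endswith(":"):
            # unindented line ending in ":" starts a new block
            current = {"kind": line.strip().rstrip(":"), "body": [], "links": []}
            blocks.append(current)
        elif current is not None:
            stripped = line.strip()
            # extract every link:"..." reference from the body line
            while 'link:"' in stripped:
                pre, _, rest = stripped.partition('link:"')
                ref, _, stripped = rest.partition('"')
                current["links"].append(ref)
                stripped = (pre + stripped).strip()
            if stripped:
                current["body"].append(stripped)
    return blocks

example = '''\
hypothesis:
    blablabla link:"link_to_Element_in_paper"
'''
```

Nothing here requires true AI; the structure carries the semantics.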



