
What if I told you language models are able to generate novel, functional protein sequences based on the purpose you give them? https://www.nature.com/articles/s41587-022-01618-2

A neural network's training objective is often deceptive: it doesn't matter how simple it seems. What matters is how complex fulfilling the task is, because that is what the network will learn. The question you should be asking yourself is: what does it take to generate paragraphs of coherent text that display recursive understanding?




In that case, let’s have an LLM learn how to reverse all SHA-256 hashes? With high accuracy / low loss?

Seems a bit like the “Cargo Cult Science” article by Feynman but done with computers.
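To make the objection concrete: the forward direction of SHA-256 is a one-line library call, but the proposed training objective is the inverse mapping, which is believed to be cryptographically infeasible to learn. A minimal sketch (the helper name `sha256_hex` is just for illustration):

```python
import hashlib

# Forward direction: trivially computable for any input.
def sha256_hex(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

digest = sha256_hex(b"hello")
print(digest)
# The hypothetical "training task" would be learning the inverse map
# digest -> message. Achieving low loss on that task would amount to
# breaking SHA-256's preimage resistance, so no amount of gradient
# descent on (hash, message) pairs should be expected to generalize.
```

The asymmetry is the point: task complexity, not the simplicity of the loss function, determines what a model can actually learn.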




