
I won't pretend to understand half the words uttered in this discussion, but I'm constantly reminded of how much it helps me to articulate things (explain them to others, write them down, etc.) in order to understand them. Maybe that thinking really does happen almost entirely on a linguistic level, and I'm not doing half as much other thinking (visualization, abstract logic, etc.) in the process as I thought. That feels weird.



Or is the real thinking sub-linguistic, with "you" and those you talk to as the target audience of language? Sentences emerge from a pre-linguistic space we do not understand.


I do find it funny that this discussion thread has tried to represent language as a universal form of thought, when it would be messy to encode the inner workings of an LLM (the weights and the relationships between them) as natural language.

You could sort of represent the deterministic contents of an LLM by compiling all the algorithms and training data in some form, or maybe a visual mosaic of the weights and tokens, or what have you... but that still doesn't really explain the outcome when the model is presented with novel strings. The patterns are emergent properties that converge on familiar language; they're something deeper than the individual words that result.
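For what it's worth, here's a minimal sketch of what those "inner workings" look like if you poke at them directly. It assumes the Hugging Face transformers library and the small public gpt2 checkpoint (my choice for illustration, not anything specific from this thread): the parameters come back as named tensors of floats, and no individual number has a natural-language reading.

    # Minimal sketch: peek at the raw parameters of a small pretrained LM.
    # Assumes `pip install torch transformers` and that the public "gpt2"
    # checkpoint can be downloaded; nothing here is specific to larger models.
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Everything the model "knows" is stored in tensors like these.
    total = sum(t.numel() for _, t in model.named_parameters())
    print(f"{total:,} parameters")  # roughly 124 million for gpt2

    # One concrete weight matrix: the token-embedding table.
    wte = model.get_input_embeddings().weight
    print(wte.shape)                     # torch.Size([50257, 768]), one row per token id
    print(wte[0, :5].detach().tolist())  # five raw floats from the row for token id 0
    # None of these numbers means anything on its own; the behaviour described
    # above only emerges from how all of them interact at once.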



