
I think it should be marked as being from 2017. Also, I don't see much point in this article. It just butchers Norvig's article into a bunch of quotes even though his article is quite accessible and not very long.



Not sure about the article itself, but from an epistemological point of view, I believe this debate will remain one of the most famous of the 21st century (provided NNs keep producing fantastic new results and don't stop there).

Never in history have we managed to achieve so much on a given complex problem (say, producing meaningful text) while at the same time understanding so little about it (ChatGPT hasn't yielded a single useful result for linguistics as a science).

If this approach spreads to other fields, it will lead to an immense scientific crisis.


Actually, this explains a lot. Language might be the stuff dreams are made of, but not the stuff consciousness is made of: a specialized form of perception, a layer on top of vision and hearing, with yet another layer of logic on top of it.

We still have only a coarse understanding of brain processes, but if they rely on parallelism, they could be probabilistic in nature, so these language models would be more similar to ours than it seems.


I feel like that point was equally made with Chomsky's "colorless green ideas sleep furiously".

Moreover, in Linguistics 101, students are often introduced to case studies of people with aphasia and similar conditions, which illustrate how humans can produce coherent, grammatical speech without meaning (just like ChatGPT) and how people can lose the understanding of certain classes of words.

Lastly, NNs are often seen as a way to model functions (again, students are often asked to build the different logic gates by hand, to convince themselves that NNs can be Turing complete), so rather than language being inherently probabilistic, ChatGPT might just have reasonably inferred the rules of language.
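A minimal sketch of that classroom exercise, in Python (the weights and names are just one illustrative choice): single threshold neurons with hand-picked weights implement AND, OR, and NAND, and since NAND is functionally complete, networks of such units can in principle compute any Boolean function. XOR, being non-linearly-separable, needs two layers.

```python
def unit(w1, w2, bias):
    """A single two-input threshold neuron: fires (1) iff the weighted sum exceeds 0."""
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + bias > 0)

AND  = unit(1, 1, -1.5)   # fires only when both inputs are 1
OR   = unit(1, 1, -0.5)   # fires when at least one input is 1
NAND = unit(-1, -1, 1.5)  # negation of AND; NAND alone is functionally complete

def XOR(x1, x2):
    # XOR is not linearly separable, so one neuron can't do it:
    # compose two layers instead.
    return AND(NAND(x1, x2), OR(x1, x2))
```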


Thank you, I didn't know about that sentence.

Anyway, my point is not that language is inherently probabilistic, but that the way our brains implement language could be. Or more precisely, one layer of language could work this way, with another "watchdog" layer on top of it that filters things out when the less rigorous layer goes rogue and spits nonsense.

The base layer could be the graphical one, the middle layer language, the top layer logic. Between the layers, the jokes.


I agree. Start with reading:

https://norvig.com/chomsky.html





