
Reminds me of the Chinese room [1] argument: Does a computer really understand Chinese language if it can respond to Chinese inputs with Chinese outputs?

[1] https://en.wikipedia.org/wiki/Chinese_room




> Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

> The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

(Emphasis added)

If we were to make an analogy to contemporary machine learning, we're talking about the difference between an LLM (with context) and a Markov chain. 'Understanding' requires novel reuse of recollections. Recollections require memory (i.e. context), and the novel reuse of those recollections requires a world model with which to make inferences.
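To make the contrast concrete, here's a minimal sketch of the Markov chain side (the corpus and names are purely illustrative): each next token is chosen by looking only at the single previous token, with no memory of anything earlier, whereas an LLM conditions every token on the whole context window.

    import random
    from collections import defaultdict

    def train_bigrams(tokens):
        """Map each token to the list of tokens observed right after it."""
        table = defaultdict(list)
        for prev, nxt in zip(tokens, tokens[1:]):
            table[prev].append(nxt)
        return table

    def generate(table, start, length=8):
        """Each next token depends only on the previous one: no context, no memory."""
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the room takes symbols in and pushes symbols out and the room never knows".split()
    print(generate(train_bigrams(corpus), "the"))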


> The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

I don't know why anybody thinks this is a profound question worth spending time thinking about. All it boils down to is whether we reserve the word "understand" for beings with souls or whether we allow it to describe machines too... and BTW there's no empirical test for souls (not even for people!), so it's really just asking whether "understand" is some sort of holy word reserved for people who have faith and believe in souls.

The question isn't about machines and their empirical capabilities at all, it's just about how humans feel about that particular word.


I am personally in favor of the use of the word 'understanding' with LLMs, but it should be noted that many people strongly disagree with that use of the term.


You're saying that LLMs meet the stronger definition of "understanding"? I disagree: you're confusing necessary with sufficient. [0]

Take the original analogy of a person with an opaque if-then phrase table, and make it a slightly fancier two-step process where they must also compute a score based on recent phrases or words to determine the proper output.

So now the system has "memory" and gives better output... But was that sufficient to create understanding? Nope.
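For illustration, a toy sketch of that fancier two-step process (the phrase table and romanized phrases are entirely made up): step one is the opaque lookup, step two scores candidates against recently seen words. The output gets better targeted, but it's still nothing more than lookup plus counting.

    # Hypothetical two-step "Chinese room" operator. The romanized phrases are
    # placeholders; the operator never knows what any of them mean.
    PHRASE_TABLE = {
        "ni hao ma": ["wo hen hao", "hai keyi"],
        "zaijian": ["zaijian", "mingtian jian"],
    }

    def reply(prompt, recent_phrases):
        # Step 1: opaque if-then lookup.
        candidates = PHRASE_TABLE.get(prompt, ["..."])
        # Step 2: score candidates by overlap with recently seen words ("memory").
        recent_words = set(" ".join(recent_phrases).split())
        return max(candidates, key=lambda c: sum(w in recent_words for w in c.split()))

    history = ["mingtian", "hai keyi"]
    print(reply("ni hao ma", history))  # -> "hai keyi": better targeted, still just lookup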

[0] And perhaps not even necessary, if we consider an operator who does understand Chinese but their short-term memory has been damaged by an injury.


I think the confusion comes from believing that humans have some exceptional mechanism for understanding. Ultimately human intelligence is the product of neural networks. The only things that separate us from ML are scale (in terms of compute and memory) and agency, the latter giving us the ability to train ourselves. We take inputs, we output behaviors (including language). The idea that we are, in all actuality, deterministic computers is really at the heart of the existential panic around AI.


> Ultimately human intelligence is the product of neural networks

Citation needed?

Sounds like "human intelligence is the product of <current tech hype>"


> Citation needed?

"Speech and Language Dysfunctions in Patients with Cerebrocortical Disorders Admitted in a Neurosurgical Unit"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6417298/

> Sounds like "human intelligence is the product of <current tech hype>"

What? "Neural network" is a biological/medical term. Are you confusing it with _artificial_ neural networks?


In a weird way "hype" undersells how history repeats itself: Rene Descartes promoted the idea of the brain as a pump for the spirit, then in Freud's era it was the steam engine, etc.


You can only be trolling if you say brains aren’t made of neural networks. Poe’s law in effect


> if you say brains aren’t made of neural networks.

Except biological brains are not built like hyped-up artificial neural networks. The latter derive from a simplified hypothesis, then get massively twisted and cut down to fit practical electrical-engineering constraints.

> The only thing that's separates us from [machine learning] is both scale [...] and agency

... And the teensy tiny fact you keep refusing to face, which is that they don't work the same way.

Declaring that the problem is basically solved is just an updated version of a perennial conceit, one which in the past would have invoked electrical currents or gears.


> simplified hypothesis

There's no hypothesizing at this point. Neurons have been studied in the lab since – checks notes – 1873. Modern neural nets have largely followed Occam's Razor rather than precise biomimicry, mostly because of 'The Bitter Lesson': basic neural networks show more generalizable emergent behaviors when scaled up than when you get clever about the details. E.g., dendrites themselves have been shown to behave something like a multilayer perceptron on their own. So it's really perceptrons all the way down when it comes to brain circuitry.

https://en.wikipedia.org/wiki/Golgi%27s_method

> electrical current flow or gears

Perceptrons were built to be mathematical models of neurons. When gears or 'electricity' were first created/harnessed, there was no intention to build a model of the mind or to mimic neurons whatsoever. There really is no weight to this argument.
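For reference, a minimal sketch of roughly what that mathematical model was (the weights and example are made up): a single perceptron-style unit takes a weighted sum of its inputs and either fires or doesn't, loosely mirroring a neuron integrating synaptic inputs.

    def perceptron(inputs, weights, bias):
        """Weighted sum of inputs plus a hard threshold: the unit either 'fires' (1)
        or stays silent (0), loosely modeling a neuron integrating synaptic inputs."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0

    # Made-up weights: a unit that fires only when both inputs are active.
    print(perceptron([1, 1], weights=[0.6, 0.6], bias=-1.0))  # -> 1
    print(perceptron([1, 0], weights=[0.6, 0.6], bias=-1.0))  # -> 0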

> Declaring that the problem is basically solved

I'm not making that declaration for whatever you might be terming 'the problem' here. I'm just saying that our notion of 'understanding' is still (incorrectly) rooted in the belief that our means of computation is exceptional. As far as anyone can tell, 'understanding' isn't substrate dependent, and as far as we know, our 'understanding' comes from neuronal computation.


You do realize that artificial neural networks are based on the organic ones? Unless you have some dualist interpretation of intelligence that involves an ethereal, intangible soul that somehow eludes all detection and holds no explanatory power for intelligence?

If you were to obliterate your own neural network by, I dunno, shooting yourself in the head, might that have some effect on your intelligence? Head trauma? Brain cancer? Alzheimer's? Senescence? Do I really need a citation that your brain is where intelligence comes from? *scratches head*


Artificial neural networks have departed from any biological principles.

So comparing ANN to NN no longer makes sense to me.


> Artificial neural networks have departed any biologic principles.

ANNs are perceptrons. Perceptrons mimic biological principles.


Isn't a large company a handier model than a "Chinese room"?

Hordes of these, in practice, own our species. They make the decisions on carbon footprints, on whether to colonize Mars, on where to shift collective attention.

Although these entities are only one achievement of our culture, for now they dominate at the global level.

Big achievement. "Consciousness" not strictly needed. Whatever the definition of "consciousness", a big company is driven by a glorified stack of Excel sheets and definitely couldn't write Chinese (assuming none of its employees does, after Searle).

So, running a planet and overriding any individual human isn't enough; let's focus on entertaining an individual human with an interesting conversation in Chinese? What are you even after with that model? /s



