
What's wrong with "humans/biologists have no evidence of how human language works. AI practitioners have one important piece of evidence: they can demonstrate somewhat-human-like language processing using statistical techniques, therefore they have the best available evidence"? Nobody else can demonstrate it.

But of course, this is simply the non-religious version of "humans have a soul, and machines, by definition, don't". If necessary, people drag quantum physics into that argument ...

The truly forbidden argument is that we don't have any definition of a soul, and in fact plenty of evidence that humans don't have one, such as the large "soul"/character changes that occur with physical damage to the neocortex.

This also means the discussion is moot: people are now using LLMs to pass the Turing test on a large scale for all sorts of purposes. From scamming people to management (let's assume there's a difference there). From programming to teaching ugly bags of mostly water new languages. The point where people discover their wife/kids/... have more of a relationship with an LLM speaking for them than with them personally is now just a matter of time.




AI practitioners have no evidence because they can't relate the thing they produced to how humans work. And it's false that scientists have no evidence: they don't have much, but some of the little evidence they do have demonstrates that humans don't use statistical processes as the core of human language.


ChatGPT, someone gave me this answer. It's about you and the algorithm you run:

"AI practitioners have no evidence because they can't relate the thing they produced to how humans work. And it's false that scientists have no evidence, they don't have much but some of the little evidence they do have demonstrates that humans don't use statistical processes as the core of human language."

Do you think he's right?

> While it is true that the workings of artificial intelligence algorithms and the workings of the human brain are not identical, this does not necessarily mean that AI practitioners have no evidence to support their work. In fact, there is a large body of research that has been done on the effectiveness and limitations of different machine learning algorithms, as well as on their applications in various domains.

> Additionally, while it is true that human language processing is not solely based on statistical processes, it is also not accurate to say that humans do not use statistical processes as part of their language processing. There is evidence to suggest that humans do rely on statistical learning to some extent in order to acquire and use language.

> Overall, it is important to approach these topics with nuance and recognize that the relationship between AI and human cognition is complex and multifaceted.

This blows anything biological researchers can do to reproduce human behavior out of the water by a margin from here to Proxima Centauri and back. Therefore I'll believe the model behind this is a far closer approximation to human behavior than anything ever to come out of any other field of research that doesn't use humans themselves. Hell, I would comfortably declare this algorithm (far) more intelligent than our closest living relatives, the other primates.


Science isn't trying to best mimic human output; it is trying to understand how it works.


Mimicking it is one hell of a way to prove you do understand how it works.

But as I pointed out: this is one of those forbidden arguments for many people. That they, you, and I are "automatons": systems of simple rules grounded in the laws of physics, and nothing more. That the big difference between you, me, and an LLM is one of complexity, not a fundamental difference. Or perhaps they're afraid that they will be replaced, which is always a possibility (which I would argue has many upsides).


> Mimicking it is one hell of a way to prove you do understand how it works.

It isn't; a black box is the opposite of understanding.

> That they, you, and I are "automatons": systems of simple rules grounded in the laws of physics, and nothing more.

We don't know how minds work.

> That the big difference between you, me, and an LLM is one of complexity, not a fundamental difference.

Again, we don't know how minds work, but we are sure they are not complex LLMs.


I'm refuting your arguments below, but this is really sidestepping the real discussion. My main point is actually different: just because we don't understand how something works does not mean it's magic.

We have made incredible advances based on the notion that the human mind can be duplicated. This has now been proven to an extent that would have seemed unbelievable to anyone looking at the field a mere 10 years ago, though of course you're right to say "yes, but not 100%". We have no reason at this time to doubt that advances will keep coming (attention is a great advance, but it's not hard to come up with 100 more things to try out).

> We don't know how minds work.

That would be why producing similar outputs is so impressive when it comes to proving understanding. The "direct approach" to evaluating how minds work is rather unethical.

Think of it like nuclear fusion. We don't "know" the stars, or the sun, are powered by fusion. We have a theory, and the theory matches measurement rather well. It took a lot of experiments demonstrating fusion ("duplicating" the stars) to convince everyone this is what was happening.

> Again, we don't know how minds work, but we are sure they are not complex LLMs.

Brains have big convolutional sections, such as the visual cortex. The retina itself also has at least three convolutional layers. And while there's no consensus, I think the fact that the optic nerve transmits an FFT of the optical signal is not a coincidence: such an architecture makes it really easy to do convolutional transforms inside the nerve itself.
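
For readers who want the concrete version: the claim that a frequency-domain representation makes convolution cheap is just the convolution theorem. A minimal numpy sketch of that property (purely illustrative, not a model of the optic nerve):

  import numpy as np

  # Convolution theorem: circular convolution in the signal domain
  # equals pointwise multiplication in the frequency domain.
  n = 64
  x = np.random.rand(n)  # stand-in for an input signal
  k = np.random.rand(n)  # stand-in for a convolution kernel

  # Frequency-domain route: transform, multiply, transform back.
  via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

  # Direct circular convolution, computed the slow way for comparison.
  direct = np.array([sum(x[j] * k[(i - j) % n] for j in range(n))
                     for i in range(n)])

  assert np.allclose(via_fft, direct)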

Brains have the essential part of "Attention Is All You Need": positional encoding. It is generally referred to as "brain waves". And as for the "masked attention" part, have you looked at children's learning books lately?
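
For reference, the positional encoding in that paper is nothing exotic: a fixed table of sinusoids added to the token embeddings. A minimal numpy sketch of the paper's formula (the dimensions are arbitrary, and this is not a claim about brain waves):

  import numpy as np

  def sinusoidal_positional_encoding(seq_len, d_model):
      # Fixed sinusoid table from "Attention Is All You Need":
      # even embedding dimensions get sin, odd ones get cos, with
      # wavelengths forming a geometric progression up to 10000.
      pos = np.arange(seq_len)[:, None]        # positions 0 .. seq_len-1
      i = np.arange(0, d_model, 2)[None, :]    # even dimension indices
      angles = pos / np.power(10000.0, i / d_model)
      pe = np.zeros((seq_len, d_model))
      pe[:, 0::2] = np.sin(angles)
      pe[:, 1::2] = np.cos(angles)
      return pe

  pe = sinusoidal_positional_encoding(seq_len=16, d_model=8)  # one row per token position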

I mean, sure, the children's books part indicates that some of human intelligence is not even in human brains directly, but in "society", specifically in human educational practices and books. The intelligence is not even in the contents/subject of the books, but in the structure of the exercises.

Plus, normal errors in our sensory perception would also lead to prediction problems similar to the ones masked training attempts to solve.
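
To make the analogy concrete, here is a toy sketch of just the masking step of masked-token training (BERT-style, with names made up for illustration): the model sees the corrupted sequence and must reconstruct the hidden originals.

  import random

  def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
      # Toy BERT-style masking: hide a fraction of the tokens and
      # keep the originals as the targets the model must predict.
      masked, targets = [], {}
      for i, tok in enumerate(tokens):
          if random.random() < mask_rate:
              masked.append(mask_token)
              targets[i] = tok
          else:
              masked.append(tok)
      return masked, targets

  masked, targets = mask_tokens("the cat sat on the mat".split())
  # e.g. masked == ['the', '[MASK]', 'sat', 'on', 'the', 'mat'], targets == {1: 'cat'}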


And you would probably be wrong, as we have underestimated the intelligence of animals.

You are also allowed to believe what you want, but that’s not science.


Neither is probability calculated using heuristics; not to be too serious or anything and ruin the fun.



