Hacker News

I never thought of the I in AI as a comparison to a human of average intelligence. I always understood it to mean intelligence as in "capable of reasoning", regardless of whether it's "kinda dumb" or "super smart" - the same way we speak about animals not being intelligent, and look for "intelligent alien life" in space. The aliens might not be very smart, perhaps even totally dumb, but still intelligent. The same applies to AI: perhaps it doesn't match even the worst-performing humans, but it's still intelligent.



I guess my parent comment was led a bit by the fact that nowadays AI is often conflated with superhuman intelligence. You're certainly correct that even a "dumb" AI could still be intelligent.

The interesting question is of course if that applies to LLMs or not. Are they actually intelligent or do they just look intelligent (and do we even have the means to answer those questions)?


>The interesting question is of course if that applies to LLMs or not. Are they actually intelligent or do they just look intelligent (and do we even have the means to answer those questions)?

It's not an interesting question. It's pretty meaningless.

Are birds really flying, or do they just look like they're flying (from the perspective of a bee)? Are planes really flying, or do they just look like they're flying?

"Mimic Intelligence" is not a real distinction.


This question is essentially thought-terminating in most contexts, as most if not all people can’t answer it given how little we know about how humans work.

Nerds will also get hung up on this because they can’t stand the notion that any aspect of their job doesn’t require their immense intelligence.

For most contexts, “will this tool help me?” is a much more appropriate question. Anyone conflating the two is doing themselves a disservice.


I don’t know if either question is more “appropriate”. One is more scientific and philosophical, the other is practical.

I mean a power drill is also helping me a lot, without being intelligent.


I think you're correct although I'd like to point out that a lot of different animals are capable of reasoning. For example: https://www.cnet.com/tech/watch-a-wild-crow-tackle-a-complex...


A key problem is the many different readings of the word "intelligent". I wouldn't describe what we currently have as "capable of reasoning", for instance, though that might not be the intended meaning; it may instead be a property of "general intelligence". Of course, that has linguistic issues too, as it makes general intelligence (artificial or otherwise) a subset of intelligence - i.e. more specific, despite adding "general" to the name.


Part of the problem is that we have only vague notions of what "reasoning" actually means. If you mean simple deductive logic, LLMs can often perform such operations today, albeit highly inconsistently. If you mean inductive reasoning, or working through a problem from first principles, then they usually fail. At the state of the art, there are tricks to get the system to extract the assumptions and base knowledge and then work deductively.

But every time we have an advance in machine learning, we seem to redefine intelligent activity to be beyond that. At a certain point, what is left?


Humans are the only beings capable of reasoning. AI isn't capable of it, and it's very rare that an animal other than a human is capable of even the most basic reasoning.

Animals act on instinct that is hard-coded based on the probability of survival. AI essentially does the same thing: it follows hard-coded probabilities, not reason.



