Less intelligent than average, given that 100 is calibrated to be average [1]. That assumes your use of the word intelligence treats the concept as a sliding scale rather than a boolean is/isn't, which is implied by quoting IQ results.
The way the “I” in AI is usually used seems to me to imply achieving average or better, so the aim is mediocre & upwards. steve1977 is agreeing with an opinion that results so far are at best “up to average”, maybe not even that, so more like mediocre & downwards. The phrase Artificial Mediocracy does not seem at all unfair in this context.
The opinion that current models can at best achieve average results overall seems logical to me: they are essentially summarizing a large corpus of human output rather than having original thought. While the systems may “notice” links average humans don't, because they can process amounts of data humans can't, bringing the average quality of their output up, they are similarly likely to latch onto bad common practice/understanding, bringing it back down again. Average results are mediocre results, by definition. Not bad, but not outstanding in any way.
--
[1] supposedly; there are strong views in some quarters about IQ being a flawed measure
I never thought of the I in AI as a comparison to a human of average intelligence. I always understood it to mean intelligence as in "capable of reasoning", regardless of whether it's "kinda dumb" or "super smart" - the same way we speak about animals not being intelligent, and are looking for "intelligent alien life" in space - the aliens might not be very smart, perhaps even totally dumb, but still intelligent. The same applies to AI: perhaps it doesn't match even the least performing humans, but it's still intelligent.
I guess my parent comment was led a bit by the fact that nowadays AI is often conflated with superhuman intelligence. You're certainly correct in that even a "dumb" AI could still be intelligent.
The interesting question is of course if that applies to LLMs or not. Are they actually intelligent or do they just look intelligent (and do we even have the means to answer those questions)?
>The interesting question is of course if that applies to LLMs or not. Are they actually intelligent or do they just look intelligent (and do we even have the means to answer those questions)?
It's not an interesting question. It's pretty meaningless.
Are birds really flying, or do they just look like they are flying (from the perspective of a bee)? Are planes really flying, or do they just look like they're flying?
This question is essentially thought-terminating in most contexts, as most if not all people can’t answer it given how little we know about how humans work.
Nerds will also get hung up on this because they can’t stand the notion that any aspect of their job doesn’t require their immense intelligence.
For most contexts, “will this tool help me?” is a much more appropriate question. Anyone conflating the two is doing themselves a disservice.
A key problem is the many different readings of the word intelligent. I wouldn't call what we currently have "capable of reasoning", for instance, though that might not be the intent and is instead a property of "general intelligence". Of course that has linguistic issues too, as it makes general intelligence (artificial or otherwise) a subset of intelligence - i.e. more specific, despite adding "general" to the name.
Part of the problem is that we have very vague notions of what "reasoning" actually means. If you mean simple deductive logic, LLMs can often perform such operations today, albeit highly inconsistently. If you mean inductive reasoning and working through a problem from first principles, then they usually fail. At the state of the art, there are tricks to get the system to extract the assumptions and base knowledge and then work deductively.
But every time we have an advance in machine learning, we seem to redefine intelligent activity to be beyond that. At a certain point, what is left?
Humans are the only thing capable of reasoning. AI isn't capable of it, and it's very rare that an animal other than a human is capable of even the most basic reasoning.
Animals act on instinct that is hard-coded based on the probability of survival. AI essentially does the same thing: it follows hard-coded probabilities, not reason.
Technically they are. My comment was also meant to be a bit tongue-in-cheek of course (and, hopefully, obviously).
I wouldn't use a score like IQ to define a threshold of "intelligence" in absolute terms. By definition, if you can score somewhere on the IQ scale, you have some intelligence. Otherwise your IQ would probably be N/A? (Not sure, never looked that deeply into IQ tests.)
I guess the problem is that the term "intelligent" is ambiguous and overloaded.
So in one usage someone who is kinda dumb would still be intelligent, just less intelligent than others.
In another usage, we use the term to describe someone of above average intelligence (which is technically not really correct and actually not very intelligent).
From the main Wiki entry on IQ tests: “The raw score of the norming sample is usually (rank order) transformed to a normal distribution with mean 100 and standard deviation 15.”
So, you’re not too far off numerically. 80 IQ is only a -1.33 Z-score. So, 9th percentile. 91% of people score higher than 80 IQ.
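A minimal sketch for checking those figures, assuming the normal model with mean 100 and SD 15 quoted above (Python's statistics.NormalDist does the lookup):

    from statistics import NormalDist

    mean, sd = 100, 15   # IQ norming parameters from the Wikipedia quote above
    iq = 80

    z = (iq - mean) / sd                    # (80 - 100) / 15 ≈ -1.33
    pct_below = NormalDist().cdf(z) * 100   # ≈ 9%, i.e. roughly the 9th percentile
    pct_above = 100 - pct_below             # ≈ 91% of people score higher than 80

    print(f"Z = {z:.2f}, percentile ≈ {pct_below:.0f}, above ≈ {pct_above:.0f}%")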
My point is, it's not a comparison at all. Intelligence is a trait, you either have it or not. We also use the same word for "how smart you are", but that measurement doesn't change anything about AI being intelligent or not. It can be dumb, but intelligent.
Intelligence is IMO an inherently comparative measure.
It's not "on/off", it's "smarter" (than a pile of rocks, than a slug, than another human).
So, yeah, you can be a dumb human but you'd be a smart chimpanzee. But we want to be comparing apples with apples in the context of this topic.
When people say "AI", everyone implicitly assumes the comparison with human intelligence. So "AI" needs to be as smart as the average human to be actual AI. Ok, AGI, if you prefer that term.
There's a reason AI is moving farther and farther away and we're creating new, finer, terms like ML, shape recognition, etc.
I don't know anybody who implicitly assumes that the I in AI is a comparison, people around me understand it as a term for a set of traits, regardless of the level of performance. That's how the term AGI came to be, to distinguish between "intelligent but not necessarily like a human" (AI) and "generally intelligent [like a human]" (AGI).
Do you also think that "intelligent alien life in space" means comparable to humans? What if we find something capable of reasoning, abstract thinking, rationality, adaptability, etc - but much, much dumber than humans? That's intelligence, comparison to humans doesn't change anything about that.
https://en.m.wikipedia.org/wiki/Intelligence - how could there be animal intelligence if "intelligence" means comparable to humans? "Crows are intelligent but nowhere near human-level" - this statement wouldn't make any sense if you're right, but it actually does make sense, IMHO.
> There's a reason AI is moving farther and farther away and we're creating new, finer, terms like ML, shape recognition, etc.
That thing with AI is called moving the goalposts. And the finer terms - yeah, of course we need to be able to be specific about our software; that doesn't mean it's not AI. We talked about shape recognition in neuropsychology for much longer than in AI, and the same goes for many more terms that will certainly be reused soon.