The problem of not having a definition is over-emphasized. There is a widespread consensus that there is something that is an important part of human behavior and being human (and exhibited to some degree by many other species, not to mention extinct hominids), and that concept is well-enough constrained that we can study it. Defining it is something we will work towards as we come to understand it better.
I suspect the idea that we need definitions first comes from education, where much of what we know is presented in this manner. This is rarely, however, the way our initial understanding was achieved - just consider how our concepts and definitions of 'energy' and 'matter' have changed over time.
We have perfectly fine definitions for intelligence among creatures that are not closely related to humans, and it mostly centres on problem solving, even for problems that the creature and its ancestors had never grappled with.
In these situations, does the creature try the eager solution despite it obviously not working in the end?
For example, does it take the close bridge that gets it 99% of the way across the river, or does it not even bother and travel to the farther one that actually gets it across?
If it uses small sticks in nature will it employ artificially introduced tools to solve artificially introduced problems?
Etc.
The key here is to apply the test without breeding for it first. That is how you tell the individual is intelligent rather than the breeding mechanism. For example, I don't think the individuals are at all intelligent in OpenAI's[0] Hide and Seek. It's basically burned-in instinct: if you introduced a button that would, say, swap positions with the farthest enemy agent, it wouldn't be utilized for hundreds of rounds of play. The learning is the brain and replication together. Reminds me of a story on LessWrong[1] about baby-eating aliens.
The definition marks the goal. We can reach the goal without a definition, but would we recognize it? And how much time and effort will be wasted by blindly tinkering around in the dark? The definition is a light beacon. Having it will speed up progress significantly.
>We can reach the goal without definition, but would we recognize it?
You can very easily shoot a porn film, even without being able to define what is pornographic and what is not (a proverbially difficult thing to define, oftentimes debated legally in various countries; for example, "movie where sex acts are performed" or "movie meant to arouse the viewer sexually" both don't cut it).
>Because people know how sex works and how filming works.
Not exactly, because both sex and filming could be done in a non-porn setting - including the two together (e.g. a sexual art film). And the question mentioned wasn't what is a "film that has sex", but what is porn - and even more so, what is its definition (the same thing you asked for in the original comment, as necessary to "mark the goal").
The answer here (but how do we know we made a porn if we can't give a strict definition of what porn is) is that we can tell intuitively. Well, the same holds for intelligence.
That's perhaps the case for many things. Famously, we don't need to know what quadratic equations are to catch a baseball, even though formally predicting its trajectory means solving one.
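To make the remark concrete, here is a minimal sketch of the quadratic hidden inside "where does the ball land": under the simplifying assumptions of flat ground and no air drag, the landing time is a root of a quadratic in t (function name and parameters are my own, for illustration).

```python
import math

def landing_time(y0, vy, g=9.81):
    """Time until a ball released at height y0 with vertical speed vy hits the ground.

    Height over time is y(t) = y0 + vy*t - 0.5*g*t^2; setting y(t) = 0
    gives a quadratic, solved here with the quadratic formula.
    """
    a, b, c = -0.5 * g, vy, y0
    disc = b * b - 4 * a * c
    t1 = (-b + math.sqrt(disc)) / (2 * a)
    t2 = (-b - math.sqrt(disc)) / (2 * a)
    return max(t1, t2)  # the physically meaningful (positive) root

# A ball thrown upward at 10 m/s from 2 m up lands after about 2.22 s.
t = landing_time(y0=2.0, vy=10.0)
```

A fielder, of course, computes nothing of the sort explicitly; the point is that the behavior is equivalent to solving this equation without any access to its definition.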
There is no problem with convergent evidence of various different measures correlating positively with general intelligence, which is also measured by IQ tests.
Intelligence doesn't seem that complex a thing to define, to me. Oxford Dictionary says "the ability to acquire and apply knowledge and skills".
I generally go for a slightly wider "The ability to create models of the world around them, and make predictions based on those models" - where "knowledge" would be "models" and "making predictions based on those models" is a sort of proto-skill.
Sure, the kinds of things that different people find easy to model vary. One person might find it easy to model mathematical theorems, others the internal working of car engines. But in both cases there's an underlying ability to make a mental model of the thing you're learning about, and use that to predict how it will function, and work out what you can do with it.
We're only as intelligent as the models we learn. Where we don't have models we tend to learn by simple correlation. Discoveries are rare, even researchers only grind at that 0.01% beyond the known modelling space.
A human in a primitive society, even with the same IQ as a modern man, would be much less intelligent at solving a variety of tasks because they lack the mental "furniture". Most of our intelligence does not come from the brain; it comes from the culture, which is an evolutionary process.
> Oxford Dictionary says "the ability to acquire and apply knowledge and skills".
And what is knowledge? When I say I know p, then p is a proposition. Thus, p is intelligible. Anything intelligible is conceptual. Indeed, if we analyze what a proposition is, we see that it entails predicates and predicates correspond to concepts. However, concepts are abstract and universal, that is, they are not concrete or particular, they are not mere images. Triangularity is not this or that triangle, but that which holds of all triangles, and you will not find triangularity out and about in the world on its own, but only instantiated in particulars, and any particular is not any of the other particulars for which the same predicate holds.
When we implement propositions in computers, we really only simulate the formal via mechanical manipulations. When I create a negation operation on a string of symbols, I am only moving symbols around in a way that corresponds to what negation would produce. But the computer is not strictly speaking negating anything. Furthermore, the symbols that stand for predicates are just placeholders at best. There is no concept in the machine. Deep learning does not somehow magically transcend this limitation.
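The point about symbol shuffling can be made concrete with a toy example (the rewrite rule and function name are my own, purely illustrative): a program that "negates" formulas does nothing but rewrite strings according to a rule, with no concept of negation anywhere in it.

```python
def negate(formula: str) -> str:
    """Rewrite a formula string to its 'negation' by pure symbol manipulation.

    The program only adds or strips a "~" character; the fact that this
    corresponds to logical negation lives entirely in our interpretation.
    """
    if formula.startswith("~"):
        return formula[1:]  # "~~p" collapses to "p": double negation elimination
    return "~" + formula

print(negate("p"))   # "~p"
print(negate("~p"))  # "p"
```

Whether such manipulation can ever amount to more than a placeholder for the concept is exactly the philosophical question the comment raises.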
> "The ability to create models of the world around them, and make predictions based on those models"
Modeling and prediction is not intelligence, but a consequence of it. It is also central to modern science because the purpose of modern science is, to a large degree, less about understanding nature than it is about mastering it for practical purposes (prediction is presupposed by control).
I think the problem is coming up with a definition of intelligence that is measurable across contexts. I'm worse at creating mental models when traveling in a country whose language and culture I don't understand. Does that mean my IQ is 100 when I travel and 120 at home? Functionally, probably - and it would likely measure as such.
Also my oral communication tends to be more around vague ideas, not specifics, and I have trouble only communicating one idea at a time linearly. I’m sure I would score differently on the ability to model something depending on how the test was conducted.
In general I would say communication, focus and executive functioning will all get in the way of measuring raw intelligence.
Does memory count as intelligence? I've seen people with great memories, and people think they are very intelligent. The people I'm thinking about were actually intelligent, but their great memories really set them apart. For example, the guy at work who remembers every project, every shortcoming, and the reasoning behind every technical decision.
You could also say that mathematics is a model of how sets behave. It is a generalized model of reality. We don't create models of mathematical theorems; the theorems are a property of the very generalized and abstract model which is mathematics.
My dog is very intelligent. Each time I stop and let him play, he remembers the place we stopped at. So going back home with my dog is really hard, since he tries to stop at all the previous places he remembers, and usually there are fifty or so. Also he always tries to go as far as possible from home, taking exactly the direction in which global distance increases, so my husky should become a great mathematician if he desired so. I think one day we will discover how much more intelligent dogs are compared to what we think. Some more hints of dogs' intelligence: my dog is a master at dodging other dogs while running. He takes a long time analyzing the pee he smells. Furthermore, to pee over other dogs' pee, he usually spends about 15 seconds trying to find the best position to do it well; it must be a difficult task for him, or perhaps he foresees and enjoys the real pleasure he will obtain doing so.
The belief that an airtight definition is required in order for a concept to be meaningful is just philosophically confused. That isn't how concepts work. Conceptual boundaries are always fuzzy, with confusing edge cases and ambiguities, and that's ok.
I'd challenge people who think definitions are necessary to grapple with coming up with a definition of a chair, in the form of a list of necessary and sufficient conditions. Make sure you don't exclude anything commonly thought of as a chair, or include anything not commonly thought of as a chair. For an extra challenge, come up with a definition most people would agree on. This endeavour will be a struggle.
And yet we manage to discuss chairs without difficulty. We have a generally understood concept of what we mean by a chair, with central examples like dining chairs or lounge chairs commonly held to be chairs. In the case of ambiguities of communication, we can clarify on a case by case basis ("Did you want me to take the ottomans to the other room too?")
The simplest definition of intelligence I use is one's capacity for abstraction. Is a mind capable of generalizing accurately and then refining those generalizations with some good tools for feedback?
Pattern recognition is close, as it's abstracting things into symbols and then comparing the symbols, but it's not sufficient. "Smart" I define as the ability to effect intentions, or to get what you want. In this sense, a lot of intelligent people are not very smart, and a lot of very smart people are unhindered by intelligence. Animals are perfectly smart for their environment without needing much intelligence. Humans are poorly adapted to our physical environment, and require a great deal more intelligence to have survived. Language is useful for many things, but the things it isn't good for are anti-smart (e.g. I think our ego as a refining filter for experience is an artifact of language).
Anyway, the Turing Test as a thought experiment isn't really a measure of intelligence so much as it is an economics model of an indifference curve, which is how much the observer cares (or not) about whether they are dealing with a machine.
I'm actually more bullish on the possibility of AGI for some admittedly very strange reasons, even though I am harsh about people who anthropomorphize code and fall for animism. My view reduces to a kind of theistic argument where if we can create conscious life from rude materials, it is logical evidence that we ourselves may also have been the expression of some similar intention. If AGI is demonstrably impossible within our physics (like incompleteness theorem level proof), then we exist within a hard ontological boundary, and the best we can do is infer what that boundary is made of (probably time/gravity). The reason I think AGI is plausible is because I have theistic axioms that create a kind of circular reference where if we can Create life, then we could also have been Created with the intent to discover and appreciate the meaning of that, and if we cannot Create, we were not meant to experience that Creation. Maybe even if there is something on the other side of death, we may still just be programs or epiphenomena that aren't intended to reflect or apprehend our substrate, sort of like the one-way directional relationship between an instrument and a song played on it. An AGI would make the leap from song, to software, to operating on its environment using abstraction, generalization and feedback. It wouldn't be "us," but I think it could certainly become a them that could eventually exist independently of us.
Interesting article. I personally don't think we'll ever have a definition of intelligence because it is an emergent behaviour. We'll just keep working on it, all the while believing that what we create is not intelligent, until one day it is and kills us all.
Much as how Kolmogorov solved the problem of defining randomness, I feel like Legg and Hutter solved (or at least mostly solved) the problem of defining universal intelligence. Loosely put, it is an agent that makes the best possible decision about the future with the information that it has available. "Best" can be defined however you want; it's less about the particular cost function and more about achieving optimality. And no, the oft-cited yet frequently misunderstood No Free Lunch theorem doesn't rule this out.
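For reference, Legg and Hutter's universal intelligence measure (from their paper "Universal Intelligence: A Definition of Machine Intelligence") is, roughly:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where $\pi$ is the agent's policy, $E$ the set of computable reward-bearing environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ the expected cumulative reward $\pi$ achieves in $\mu$. Simpler environments get exponentially more weight, which is what sidesteps the No Free Lunch objection of averaging uniformly over all possible problems.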
> makes the best possible decision about the future with the information that it has available
So would a good thermostat be considered intelligent, assuming it takes great decisions? Intelligence needs to be defined with respect to a range of tasks and a range of priors. It's skill acquisition efficiency, not just taking good decisions.
I recommend "On the Measure of Intelligence" by François Chollet
>We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience.
> Intelligence needs to be defined with respect to a range of tasks and a range of priors.
Yep, you’re right. I forgot to mention that in my “loosely put” sentence, although the work by Hutter and Legg does mention that and discusses algorithmic information theory heavily.
Humans seem to do a million different things. And all of them were once kids who grew into adults, learning to do those things along the way.
Isn't it good enough, if a machine did just that - ready to learn if taught - to be classified as generally intelligent?
I think approximately nobody is studying AI with the end goal of making a machine that is (say) as smart as a dog. It is a good intermediate goal but the hope has always been to surpass humans. If we want more doglike intelligences we can just breed more dogs.
Generally intelligent is only the start, and not a very interesting part. Animals can obviously learn things and they are "generally intelligent", but most are not all that bright. We are not all that clear on why either. Elephants have larger brains than us, so clearly it's not raw size but rather the complexity of the brain that makes us so much smarter. Which parts of brain complexity matter and which are accidents of evolution is almost entirely unknown.
IQ tests test pattern recognition which is a proxy for intelligence. All possibility of racial and cultural bias has been removed by limiting test questions to geometric patterns that have no cultural significance.
A bench press is simply a test of how much you can lift. But no one will say that it doesn't measure how strong your arms are.
A foot race is simply a test of how fast your legs can move. But no one will say that it doesn't measure how fast you are.
I think the thing that trips people up is that they often confuse knowledge tests with intelligence tests.
Reciting the periodic table is not a sign of intelligence, it is a sign of knowledge. As is knowing geography, languages, etc. If it requires a recitation of knowledge, it's most likely a knowledge test. Like most of those Facebook quizzes "Only the smartest can answer all these questions!!!". And you go to the test and it's just trivia. Trivia tests knowledge. Knowledge tests can be biased, they can be 'gamed' (except the best way to game them is to memorize the knowledge), etc.
Intelligence tests are something different. They test our ability to reason. Which is harder to measure. Because often there is a baseline of knowledge required. People like to dump on the old "brain teaser" and less old "leetcode" style interview questions. But they're probably better than quizzing people on esoteric language features. Language features are trivia. Questions with no real answer or no known answer are better proxies for intelligence. Not great and if you stick with the same questions, you'll eventually turn them into trivia as the answers become codified. But, you know, proxies are never the real thing.
I also happen to agree with the article. We do not have a clear definition of intelligence. Or consciousness. Or sentience. So unclear, we kind of use all three interchangeably. We have a fuzzy feeling. We don't think Lamda is intelligent/sentient because we don't feel it is. We feel that each other is intelligent/sentient because of course we are. We can't prove that to each other. And we can't come up with a definition that both includes everything our fuzzy feeling says is intelligent/sentient and excludes everything we think is not.
That's true -- there are tests of brute strength, for instance, and there are also vocabulary tests. But the kinds of tests life throws at you aren't often so narrow -- they're more general tests. Once in a while you'll be faced with an immediate test of whether you can lift a heavy car off a human being in the next five minutes because there has been an accident and someone is pinned under a car, but the generalized form is "can you move a heavy object", for which the answer is something along the lines of "invent a crane".
And most important: IQ is predictive. If all you were allowed to know about a set of people were their IQs, then you could make money all day using that information to bet on a variety of life outcomes.
"For example, we often say that dogs are intelligent. But what do we mean by that? Some dogs, like sheep dogs, are very good at performing certain tasks. Most dogs can be trained to sit, fetch, and do other things. And they can disobey. The same is true of children, though we’d never compare a child’s intelligence to a dog’s. And cats won’t do any of those things, though we never refer to cats as unintelligent."
I, for one, definitely think of cats as unintelligent. A cat is like a potted plant compared to the companionship and relationship a dog offers.
Having grown up with both, I strongly disagree. You can build very strong bonds with cats, but you have to earn their trust first. Just because they don't feel that the purpose of their existence is to please their owner doesn't mean they're not intelligent.
A spermwhale's brain is about the size of a basketball. An infographic for all brains and how their neurons are divided up in special domains, the ratio of brain size to body size, etc, would be interesting to see.
Yes, but African Greys also have among the largest brains of all birds.
You can't necessarily compare bird brains to whale brains, but within birds (or really any similar class of animal) neurons seems to directly correlate with perceived intelligence.
Yeah, my brain is a NetBurst. Not even joking. Context switches make me have to completely flush my pipeline. But once I get going on something, my pipeline is pretty deep and my clock rate can get pretty high.
The inability of a cat to communicate with a human being is what makes cats unintelligent. People like to think that cats are above communicating with humans when the fact is that they're just too stupid. It's not like cats could follow instructions if they wanted to but they just don't want to -- it's that cats are dumb and they can't follow instructions.
> It's not like cats could follow instructions if they wanted to but they just don't want to
You can teach your cat to use the toilet, come when you call its name, roll and jump. They can even open doors which requires understanding how a door knob/rail system behaves. We had a cat that would follow us everywhere even if we went for a walk outside the house for several miles.
Additionally, cats are not nurtured like puppies are. There are considerable amounts of information and know-how from breeders on how to socialize puppies to develop their ability to follow instructions and respect their master. Kittens on the other hand are mostly raised in small cages in pet shops and treated as glorified hamsters that can jump high and purr. Going further, canines are pack animals which are naturally predisposed to work with others and understand complex hierarchies. Felines are mostly solitary and only really get in groups during mating season.
Another counter-example (unrelated to cats) are birds. Birds are incredibly intelligent, they can learn to solve puzzles, talk, and answer to their name. Yet if you ever get a budgie (or similar) you'll notice that your animal seems dumb as a rock. That's because birds require a lot of work in order to learn those tricks because they are naturally distrustful of humans. If you put ~1hr a day on training your budgie as you would with a young pup, you would see incredible improvements.
As a final note, I would concede that a cat's intellect is no match for a dog's, but using "follow instructions" as a metric to define intelligence and concluding "cats are dumb" is an oversimplification.
Cats will let you know if they want to be petted, brushed, for you to turn on the sink so they can drink from it, or of course to be fed. They will also make it clear if they are excited or if they feel threatened. They can communicate just fine when they want to.
There are animal handlers and zookeepers who have trained cats to perform complex tasks on obstacle courses repeatedly & reliably, just like dogs and other animals. I saw this first hand at a zoo myself (I think it was San Diego but I'm not positive).
On top of all this, if you have interacted with a lot of cats, it becomes obvious that there is a ton of diversity in their personalities & behavioral patterns, even between siblings. There is no point in discussing 'smarter or dumber' here, their intelligence is just different-- it's unique to their species, as with pretty much every other animal capable of being trained.
Forgot to mention, in a house with 10 cats where I once lived, each would respond individually & separately to their name. So they can understand verbal direction.
That's not what that article says. It says that dogs have been objectively measured to have twice the number of neurons as cats. And that is speculated to mean they are roughly twice as intelligent.
With that in mind, dolphins have 2x and elephants have 3x the number of neurons as humans.
I think it's reasonable to go back to the original inquiry: How are we defining intelligence here? Is it reasonable to define intelligent as the ability to communicate with human beings?
This is just wrong. My cats can operate light switches, open doors, recognize patterns, etc. They know the difference between me going to bed and me going to the bathroom. They'll know when I'm in my office. They'll demonstrate awareness of basic commands, etc. They have preferred people. One of our cats absolutely adores me. There is a very clear difference in how he treats each member of the family.
The thing I've heard is that cats are more like humans, in that cats operate on consent. Cats have to want the thing you are offering and they will make their desires known. They are conditional.
If you want worship, get a dog. If you want a low maintenance roommate/pal, get a cat.
Dogs can herd sheep, navigate subway systems with blind people, serve as scouts in the military, recognize seizures and call for help, and so on. It's just night and day how much more intelligent dogs are than cats.
Doing tricks is neat, but I think dogs get too much credit for doing tricks and following commands.
A dog (or cat!) can show impressive signs of intelligence without having much inclination for tricks. Our dog knows who we are talking to on the phone and our daily routines, and understands many "intelligent" things, like spoken instructions on how to behave the next morning, but hardly knows any "standard" dog commands at all except "sit" and "stay".
The way cats work is that they're much more intelligent than they ever let on, apparently. A cat is just too smart to stoop to the level of a human being!
Nah, they let on just fine. It's just that cats communicate in their own terms and tend to think we're kind of stupid, which from their perspective is fair enough. They don't look to us for validation the way dogs do, or go out of their way to make themselves legible; they don't much care what we think of them. Some people have trouble with that.
I never have, but I've also known cats most of my life, starting when my mom decided the best way for me to learn their manners was to let them teach me themselves. It worked well enough to make me one of those people who gets along perfectly well even with somebody's cat who "doesn't like anybody!" - I can't count how many times I've heard that or variations on it. So maybe I'm biased, or maybe I'm speaking from greater knowledge. Make of it what you like, I suppose.
I can respect a cockroach, though I don't welcome their company. You could say the same about wasps, and I get along fine with them too. Granted, like cats, wasps don't play on easy mode for humans. Nor should they; it's good for us to be reminded every now and again that we don't actually own this planet, no matter how much we like to think we do.
But really, we could say the same about almost any other animal on Earth, because almost no animal on Earth has been as extensively modified to suit human preferences and purposes as has C. familiaris. Absent the most extreme provocation, and sometimes even in its presence, they like us no matter what we do. Why not? We've spent several thousand years seeing to it that should be so.
So I'm not really sure what you're expecting you're going to prove with this whole line of discussion beyond that you like dogs and have some very strange notions about cats, neither of which points I think by now requires any further elaboration.
> these discussions tend not to go anywhere, largely because we don’t really know what intelligence is.
40 years of academic cognitive science would beg to differ... This sounds like it suffers from the common tech Dunning-Kruger approach to pre-existing literature.