Rats have an imagination, new research suggests (sciencedaily.com)



It's important to do the research, but this is one of those headlines that any rat owner would respond to with "Yeah, obviously," without a second thought.

They strategize, they're mischievous, they test you, and ultimately they're quite cunning. These kinds of behaviors are predicated on having an imagination, to a certain degree.


These headlines only work if you are stuck in a behaviourist view of biology where all animals behave essentially like dumb robots. It was a common view in academia until the 1950s, and even later in psychiatry. These days things are much better. No serious animal cognition researcher today should be surprised by rats or mice having imaginations.


Everyone who owns a dog knows they have intelligence and self-awareness.


I find it interesting how we can be so aware that dogs and cats are living beings who experience life and should be protected, but at the same time we ignore that cows, chickens, pigs, etc. all have those same experiences.

We end up caring about dogs and cats, while paying for the other animals to be killed and torn apart. If the same happened to dogs, people would become violent.

It would be nice if people held consistent moral beliefs.


"I care about what I like" is a consistent moral belief. The issue is that people don't realize that that is thier belief and create anlternative beliefs which sound better to them which then become inconsistent.


The universe recycles itself and has no feelings.


I am (at a minimum an aspect of) the universe and I have feelings, ergo the universe has feelings.


You are still being recycled in spite of your feelings.


I've met a few who don't... I feel sorry for their dogs.


LoL


> Everyone who owns a dog knows they have intelligence and self-awareness

To be fair to them, I've occasionally anthropomorphized fish and my Roomba this way.


I wouldn’t sell the fish short either. The more we learn about them the more multitudes we learn they contain.


Funny that this should come up 30 minutes after I put our mischief of 3 young girl rats back in their habitat, but yeah, a few concrete examples:

  * Pudding rat plans escapes to the top of the habitat whenever the door is open.

  * Rats are super trainable, and will speculatively do tricks for treats (ours get Gerber baby puffs, or Rice Krispies, for spinning, going through a hoop, shaking hands, etc.).

  * Pie rat likes loose clothing and will run into sleeves and pants, and generally enjoy herself.

  * Rats also love certain kinds of places, depending on the rat, and will happily just chill on your shoulder, or against your tummy, or under an arm and nibble a treat.

  * Crumble rat is the baby: more timid, less curious, and she prefers to hide, but will run around and climb on the humans if her sisters are doing it.

Personality-wise, rats are all across the spectrum, and that personality is visible at weaning, when most rats are sold by breeders. So far it's inherent to the rat, and not something that really changes much.

I've never noticed them dreaming, as ours prefer to snuggle in a hammock with some rags pulled down, but they clearly have goals and imagination, not just 'behavior'.

Rats aren't dogs, but a few rats can make a wonderful replacement if you're somewhere larger pets aren't permitted.


They also have shorter lifespans, so they're not a 10+ year decision and commitment the way a dog is, but that's also a negative :(


This is so interesting. So do these domesticated pet rats have a litter box or designated area for relieving themselves, or are they like other rodents and just go where they please?


They're just like rabbits and many other rodents. They will pick a preferred bathroom spot. You can either just put a litter box there, or train them to use a specific spot. Like rabbits, they do leave the occasional poo around as a territorial marker, and males sometimes dribble urine for the same reason. But for the most part, they're pretty good about litter boxes. I've never seen one of my rats actually relieve themselves out of their cage. Just the territorial markers.

Rats are probably my favorite animal that I've kept. They really are as intelligent, personable, and expressive as dogs. But they're very clean animals; one of their main activities is social grooming. Males tend to be lazy and want to cuddle, while females are more driven to explore their environment. They're also not strictly nocturnal or diurnal, but will adjust their sleep schedule to match your activity. They sleep while you're at work, but want to play first thing in the morning and when you come home.

The only downside is that their maximum lifespan is about three years, and they need a specialty vet. Otherwise, excellent pets.


They do indeed have one (or more depending on cage size) designated areas for doing that. The specific place you put the litter box is something you teach them, but they don't need much training to learn as they do seem to naturally do this with spots they choose themselves too.

Of course it might happen occasionally outside of those areas too, but if that starts happening frequently it can be a sign that something is wrong with them health wise.


This is generally good pet-owner advice: if a pet stops eating or pooping (or starts doing it differently), see a vet!


Thanks for the reply. It seems like it is similar to how a cat is.


> Pie rat likes loose clothing and will run into sleeves, pants, and general enjoy herself

Eeek!


Yeah, and they can be very scratchy. :D

  me: "(apologetic) A small amount of urine escaped. The rat was not to blame."
  wife:  "(sternly) The rat was *totally* to blame!"

Just woken up and seen this after picking up my laptop, but only after feeding the cat, taking the dog out to poop, and feeding the rats a few puffs.

They are such little characters, full of so much love and joy.

Just the short lifespan is the biggest downside. :(


You don't even have to be a rat owner.

I'd say any creature that dreams has an imagination, and dreaming is clearly visible when they're twitching in their sleep.

Also just watch Mark Rober's squirrel obstacle course to see how rodents can imagine.


Something like dreaming is present very far back in evolutionary history. The last common ancestor of ray-finned fish and terrestrial mammals (450 million years ago?) presumably had it, as both branches of the tree of life show REM-like sleep and brain activity.


Twitching isn't proof of dreaming.


Disclaimer: I'm not a scientist in any biological field. But OP was talking about the common perception of the article, so I chimed in as a pet owner.


Also if you've ever watched a rat sleep, they very obviously dream.


I think our own brains have fooled us into thinking that cognitive processes are more clever/complex than they really are. The recent successes from LLMs/Generative AI give a hint that things don't need to be so complex and mysteriously difficult to be quite like human intelligence.

"Imagination" seems like just an indirection mechanism: simple animal can do "go towards the food you can sense". A more complex animal can "remember where the food was sensed, then go towards that place later". An even more complex animal can: "From other sensed information (e.g. habitat type), construct the same food-memory object that the more primitive animal used, then go towards that". By the time you get to that third level of sophistication we're calling the mechanism imagination.


I think you severely underestimate how sophisticated life is.

Boston Dynamics has been trying to make a robot "go towards some stuff you can sense" for more than a decade, and what these robots can do after all this time doesn't even compare to what a rat can do. It is still several orders of magnitude off.

Textual AIs can do text really well. But text is not biology, it is a simple, linear system that _humans_ invented. Identifying food, planning to get it, mapping a habitat, etc... these are very different tasks.


> Textual AIs can do text really well.

I would argue that they can't even do that well at all. Look at the training set for a human: humans learn speech, reading, and writing with a training set so small it'd be a rounding error from zero compared to what they feed into LLMs.

Chomsky called this the “poverty of the stimulus” in his argument against behaviourism. Yet LLMs are fundamentally the product of Skinner-style reinforcement learning. And it shows. They have succeeded to a wonderful degree at producing a facsimile of language understanding. But possession of language? Absolutely not!


Humans have a huge base model trained over billions of years of evolution. It’s impressive how quickly we learn, but it’s arguably comparable to fine tuning an existing model rather than starting from scratch.

Conversely, I find it insanely impressive how human-like LLMs have become by doing little more than feeding a vast quantity of reading material into a very basic neural architecture. I wouldn’t have ever believed how successful that approach would be.


The human model isn’t just trained in language though, it’s trained on the world. There’s such a huge amount of tacit knowledge we have about how the world works and how other people work. We have models for recognizing faces and reading body language, emotions, and intent. We can tell when a person is looking at us from across the room and tell that apart from someone looking just over our shoulder at someone else.

I think most of what we ascribe to intelligence in LLMs is a kind of textual pareidolia. We infer a mind behind the text because our brains are built to recognize minds. That’s how our theory of mind works.

But these LLMs know nothing about the world. They might capture some features of the world as patterns that help them better predict what text to generate for us. But that is not the same as knowledge of the world. They have no sensory contact with the world so they are unable to synthesize true knowledge about it. This is why they are so prone to hallucination.


> The human model isn’t just trained in language though, it’s trained on the world.

It's not trained on "the world". It's trained on a small slice of it: a few senses that are themselves slimmed down and then partly fabricated.

To a bird that can feel and intuitively sense electromagnetic waves to guide its travels, you're not trained on "the world" either.

> This is why they are so prone to hallucination.

No it's not. LLMs can distinguish between truth and hallucination. They just don't care to communicate that.

GPT-4 logits calibration pre RLHF - https://imgur.com/a/3gYel9r

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback - https://arxiv.org/abs/2305.14975

Teaching Models to Express Their Uncertainty in Words - https://arxiv.org/abs/2205.14334

Language Models (Mostly) Know What They Know - https://arxiv.org/abs/2207.05221

The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets - https://arxiv.org/abs/2310.06824


Text might be indirect, but it's crucially still about the world. I could say that being able to read is one of the senses, and that no sense reflects the world in itself, but rather is an indirection.

But I won't do it. Instead, I'll just replace a few words in your last paragraph to reveal that it doesn't discredit LLMs as much as it seems.

* * *

But these historians know nothing about the distant past. They might capture some features of the past as patterns that help them better predict where to start an archaeological dig. But that is not the same as knowledge of the past. They have no sensory contact with the past, so they are unable to synthesize true knowledge about it. This is why they are prone to hallucination.


> Text might be indirect, but it's crucially still about the world. I could say that being able to read is one of the senses, and that no sense reflects the world in itself, but rather is an indirection.

Two things:

Justification for belief isn’t a binary state, it’s a spectrum. When we have a small amount of evidence supporting a belief, we have a small justification. When we have a large amount of evidence, we have a large justification. We describe this in terms of our confidence in our beliefs.

Without getting too far off into the weeds, this extends into all areas of science. We can’t see, touch, hear, smell, or taste electrons (9V battery tests notwithstanding), but we can gain confidence by replicating experiments with them. Working with a cathode ray tube, for example, we can manipulate the voltage on the plates and directly observe the deflection of the point of light on the screen. We can use the equations from a physics textbook, combined with voltage measurements on a multimeter, to make predictions about the electron beam’s deflection and then test them directly on the device.
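
As a concrete illustration of that kind of prediction (the numbers here are my own, assumed for the example): the standard textbook result for an idealised CRT is that the spot deflection is roughly D = L * l * V_d / (2 * d * V_a), so a quick sanity check against the multimeter readings might look like this:

  # Back-of-the-envelope CRT prediction (illustrative values):
  # D = L * l * V_d / (2 * d * V_a), where V_a is the accelerating voltage
  # and V_d is the voltage across the deflection plates.
  def crt_deflection(V_d, V_a, plate_length, plate_gap, screen_distance):
      """Predicted spot deflection (metres) for an idealised cathode ray tube."""
      return screen_distance * plate_length * V_d / (2 * plate_gap * V_a)

  # 2 cm plates, 5 mm gap, screen 30 cm from the plates, 2 kV acceleration.
  for V_d in (0, 50, 100, 200):
      D = crt_deflection(V_d, V_a=2000, plate_length=0.02,
                         plate_gap=0.005, screen_distance=0.30)
      print(f"V_d = {V_d:>3} V  ->  predicted deflection ~ {100 * D:.1f} cm")

Turn the plate voltage knob and check whether the spot moves by roughly the predicted amount: that is the kind of direct test of belief against the world I mean.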

Secondly (and crucially), the text consumed by LLMs isn’t about the world. It’s text of unknown provenance. Some of it may be about the world, some of it may be complete fiction. The LLM has no basis whatsoever for fact-checking the text. All it can do is perform a statistical test on the commonality of a text and then infer from that. Unfortunately that does not get you any closer to reality. A falsehood repeated many times does not become the truth.

As for historians? They have way more than text to go on. They have physical evidence (artifacts), they have radioisotope testing, they have DNA testing. They have geographic locations where texts were found. None of this real world sense data is available to an LLM.


Regarding text and other kinds of sensory data: human senses are of unknown provenance as well. Hallucinations happen, and can be complete fiction. Does that mean that humans have no basis whatsoever for fact-checking? Other senses would correspond to other pieces of text, so I answer this with "no". If it were true, humans would be detached from reality.

The other problem with discounting LLMs based on the form of sensory input available to them is that there is no qualitative comparison of the validity of different senses. Sure, getting more visual input counts as more evidence about the world (bar hallucinations). But can part of it be substituted with text without losing the quantitative benefit? Can all of it? You seem to answer "no", but I think that's an oversimplification. Not that I have a definite answer.

The one thing that LLMs cannot currently do is design and perform experiments - or if they can, then they can't integrate the results. I'd look for ways LLMs cannot reach some level of justification for beliefs there. But I would not presume that this level is automatically lower than what humans can reach. I have not compared the size of a human training set to that of an LLM.


Again, justification is not a binary state. When you walk around your house, do you say "I might be hallucinating, so therefore I have 0 confidence that any of this is real"? No, of course not.

On the other hand, if you're talking to someone you've never met over the internet and they're telling you the shed in their back yard is painted red, what confidence do you have in that? How do you even know you're talking to a person?

LLMs are trained on large amounts of text that's just been scraped off the internet. Some of it may be high quality (if you want to call Wikipedia high quality) but much of it (Reddit) is not. When you ask an LLM chatbot a question, how do you know you're getting the Wikipedia answer and not the Reddit one? You don't. It's all mashed up. Until LLMs are able to reliably provide citations for their answers, they ought to be relegated to the epistemic dustbin.

Yet I see high schoolers relying on them all the time. They will ask ChatGPT a question and write down its answer verbatim. No second-guessing, no fact-checking, nothing. Blind trust. It's appalling!

My professors have even told me they've had students hand in papers full of citations. Turns out, all the citations were completely fabricated by ChatGPT. One professor even said the paper a student submitted cited HIM, the professor receiving the assignment, for a completely made up paper that never existed. When confronted, the students admitted all of the papers were ChatGPT-generated nonsense.

Even worse than the Reddit problem above is the problem of LLM-generated text proliferating throughout the internet. The next generation of LLMs are going to be polluted by that nonsense. Garbage-in, garbage-out. They'll be no more trustworthy than a psychiatric patient rambling to themselves in a padded cell.


> epistemic dustbin

I think this would be a good idea if LLMs' justification were 0. It's not, and it's not clear what it is. That people talk to a storyteller thinking they're talking to a scholar is on the people, or on whoever misrepresented the interlocutor, not on the interlocutor.

It's marvelous already that we can't rule out whatever nonzero amount of justification in the case of LLMs.


> rule out whatever nonzero amount of justification

We absolutely can rule it out. The justification is zero. The LLM has not and cannot verify any of the factual claims in its text. It can't even tell a claim apart from a non-claim. In legal parlance, the entire training set of an LLM is hearsay. LLMs do not produce any first party information. It's 100% third party.


If you ask an LLM "did Alice on the X forum write Y?", you can get a justifiable answer. That's the text that the LLM got directly. Remember that speech is sensory input for an LLM, and it's as direct as your human senses.

Asking an LLM about the physical world is not unlike asking a historian about the past. All information they have past a certain point is 3rd party, mediated by tens/hundreds of years. Can historians say anything justifiable about the past? So can LLMs about the physical world.

If you want to rule out historians being able to justify their knowledge, I'd like to hear that.


> If you ask an LLM "did Alice on the X forum write Y?", you can get a justifiable answer.

Sure, but that’s not a very interesting question and search engines tend to be much better at answering it anyway.

Historians have access to more than just texts about the past. They have access to physical evidence. Radiometric dating, artifacts, geographic data about where things were found.

At the same time, if you ask historians about (say) the Peloponnesian War they will tell you about the account of Thucydides. What they won’t say is that they know about it personally.

That’s a mistake a lot of non-scientists make about science. They’ll hear scientists say things like “we know this” or “we know that” and think that they are committing to some privileged knowledge of the world. They aren’t. They’re using the words “we know X” as shorthand for “our best theory about X.” In fact, many scientists identify as scientific anti-realists [1] which means they don’t believe science has any true facts about the nature of reality at all and that the only job of science is to make better predictions about the outcome of experiments.

Of course, if you take anti-realism to its logical conclusion then you have to admit that fossils are only evidence that you may find more fossils in the future, and that you have no basis to talk about the existence of dinosaurs.

Personally, I think our actual knowledge of things directly outside our personal experience is very limited. But then I want to encourage people not just to believe everything they read. People should be exploring and trying their own science experiments and trying to build things to understand how the world works.

[1] https://plato.stanford.edu/entries/scientific-realism/#AntiF...


That a question is not interesting does not rule out the justifiability of the answer, which you seemed to assert.

> Historians have access to more than just texts about the past. They have access to physical evidence. Radiometric dating, artifacts, geographic data about where things were found.

Correct. That's still all indirect, "third-party" evidence. Does that make answers based on it impossible to justify?

To make this analogy even closer to LLMs, let's take historian scholars (linguists? undergrads?) who have nothing to study but books. No experiments, no carbon dating, just texts. Is their entire knowledge about the past impossible to justify?


Again, you're thinking in binaries. Justification is a spectrum. Sometimes you have no evidence, sometimes a little, sometimes a lot.

Historians who only have books to go on have frequently said the accounts they're using should be treated as only that: accounts, not facts about what happened.

For example, even a figure as popular as Socrates is "known" only through the accounts of Plato and Xenophon. What do we know about him? The two accounts are in conflict about many details of his personality. So we can attribute very little in the writings of Plato to actual opinions of Socrates. Scholars treat these books as the opinions of Plato, expressed through dialogues of Socrates.


I'm questioning your binary. As far as I can tell, you're saying that LLMs cannot make justified statements because all evidence they have access to is indirect text. Therefore, you're stating that no amount of text can justify a belief.

Am I misrepresenting your position?

But when I present an analogous situation, except with historians playing the main role, you seem to accept the justifiability of their statements on a spectrum, rather than rejecting it based on the evidence being textual and indirect.

This is the crux of the discussion, without explaining which we won't come to a mutual understanding (much less acceptance).


Historians who don’t have access to anything other than text (which is your hypothetical situation) cannot justify any claim about real world events in the past.

However, this does not happen in real life. Text does not fall from the heavens into the minds of historians. It comes in books, manuscripts, stone tablets, etc. Unlike digital text (which is the only thing LLMs have access to), physical texts have physical properties which can be examined experimentally. They can be subject to radiometric dating, they typically are found in a particular place (which is usually recorded), and they have version and copyright information which enables them to be traced back to the original. In the case where the original manuscript is lost (such as Thucydides’ history of the Peloponnesian War), we have to rely on copied manuscripts by other scribes, often made centuries later.

All of this is to say that it can be very difficult to trace a book back to its original author, especially without the original manuscript, but the evidence historians use to piece all this together is generally not in the text itself. It’s physical evidence. Exactly the thing that LLMs have no access to.


Thanks for the clear answer. If I understand you correctly, text alone cannot provide any justifiability, but text in addition to other sensory evidence (location, seeing the radiometric device's results, etc.) can provide justified knowledge.

Let me protest against this broader point.

If I'm a historian, do I have to find the physical text in the original place myself, or can this information be supplied to me in the form of text? Do I have to actually see the radiocarbon dating machine work myself, or is it enough that the operator sends me an email with the results? What you describe doesn't really strike me as evidence that cannot be supplied through text - you mentioned that it's recorded yourself. It's still text about the past, maybe even with an extra level of indirection (it describes the ~present which has some information about the past).

It follows that some of an LLM's statements about historical artifacts should be justifiable: those coming from the written word that our hypothetical historian would study. Is that not correct?


I don't think this is correct. It could (at least theoretically) be able to verify chains of pure reasoning bounded by a fixed length.


If I ask ChatGPT "is the sky blue?" it replies "Yes, the sky is often seen as blue during the day, particularly on clear days." How does it verify this? It cannot. There is no chain of reasoning by which the LLM can verify this statement. All it can do is defer to others. In other words, the utterances of ChatGPT are entirely hearsay.


"Working with a cathode ray tube, for example, we can manipulate the voltage on the plates and directly observe the deflection of the point of light on the screen. We can use the equations from a physics textbook, combined with voltage measurements on a multimeter, to make predictions about the electron beam’s deflection and then test them directly on the device."

I would argue that only a small proportion of humans have verified the physics they've learned as you describe. The rest learn from textbooks, and ultimately there is an element of trust that most of the books are not false. From a Bayesian perspective, I think it is possible to obtain very high confidence about certain attributes of the world even if your sources are heavily contaminated with fictions.
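
A toy sketch of that Bayesian point (entirely my own numbers, and it leans on the strong assumption that sources err independently): even moderately unreliable sources, taken together, can push confidence very high.

  # Posterior that a claim is true after n roughly independent sources assert
  # it, each source being right with probability r, starting from a 50/50 prior.
  # Illustrative numbers only; real sources are correlated, which weakens this.
  def posterior_true(n_sources, r, prior=0.5):
      p_true = prior * r ** n_sources                # P(claim true & all assert it)
      p_false = (1 - prior) * (1 - r) ** n_sources   # P(claim false & all assert it)
      return p_true / (p_true + p_false)

  for n in (1, 3, 5, 10):
      print(n, round(posterior_true(n, r=0.7), 4))
  # prints roughly 0.7, 0.927, 0.9858, 0.9998 for n = 1, 3, 5, 10

The catch, of course, is the independence assumption: sources that merely copy each other add far less evidence than this suggests.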


Anyone who majors in physics will have performed lots of experiments in the lab, including this one. The average person may believe what they read in a high school science textbook but their justification rests on the word of others and the general institution of education. Occasionally, we have seen situations where this institution is under attack [1]. Nevertheless, I would not consider something to be knowledge if you only read it in a book. Heck, even experiments can lead people astray [2] as later experiments may contradict earlier ones.

All of this is to say that it is very hard to know anything with a high degree of confidence. The confidence many people have is a false one. However, in the case of LLMs, there is no basis for any confidence whatsoever. The text is the claim and LLMs have no means of independently verifying that claim. All they can do is statistically evaluate the internal consistency of the text, and even that is fraught with difficulty. "A Lie Can Travel Halfway Around the World While the Truth Is Putting On Its Shoes."

[1] https://en.wikipedia.org/wiki/Kansas_evolution_hearings

[2] https://en.wikipedia.org/wiki/Geocentric_model


So we are model ensembles, probably capable of real-time training of short term memory.

LLMs are maybe a type of mind. They may carry and operate on basic thoughts, ideas, books, pages, sentences, words, letters….

They can probably problem solve and write code that runs sometimes. What they know about the world was told to them through stories, so whatever way we see it, they will too, maybe.

I don’t have to imagine anything in mind to process thoughts. All I need is an attention loop.


> What they know about the world was told to them through stories, so whatever way we see it, they will too, maybe.

Knowledge is justified true belief. If you park your car on 5th avenue, you know it is parked there because you witnessed yourself parking it there. You have strong justification for your belief that you parked it on 5th avenue.

But if you then go to work and your car is out of sight, your justification fades. If you look out the window and see your car parked there, you reaffirm your belief. However, if you look and your car is not there, you realize you don’t know where your car is. For a period of time you were mistaken in your belief about the whereabouts of your car. During the time when your car was out of sight, you went from knowing to not knowing where your car was, but nothing changed inside your mind. To update your belief, you needed new information about the world.

An LLM has no justification for any of the information in its training set. It has no experiences with which to connect words and facts to real world objects and events. Therefore it cannot be said to have knowledge of any of it.


I think this is an incredibly strict definition of knowledge that would preclude many experts from many fields, and it's very different from how the word is normally used.


It's not my opinion, it's the standard philosophical account of knowledge [1]. Do you have any examples of "experts" whose basic knowledge would be precluded by this definition?

[1] https://plato.stanford.edu/entries/knowledge-analysis/#KnowJ...


> I think most of what we ascribe to intelligence in LLMs is a kind of textual pareidolia

This is a great way of expressing a thought I’ve been unable to fully articulate.


I think it's somewhat of a stretch to say our "base model" had billions of years of evolution. Billions of years ago, mammals didn't exist and the only things around were more like plankton or algae and had nothing like a "base model" we could say we somehow inherited. The earliest ape-like creatures appeared around 10M years ago.

The first mammals appeared around 225M years ago, so you could potentially argue that our "base model" first started evolving around then, but I still think it's something of a stretch to compare this kind of "training" to the ways we are training modern neural networks. The "base model" at that time was simply survival: eat, reproduce, survive, plus enough circuitry to manage your base biological functions.

We are essentially running the entire volume of human knowledge through a neural network over billions of iterations, and the model itself has 175 billion plus parameters. Neither humans nor any of our evolutionary ancestors ever received this kind of "training"; it's simply not comparable at all. Our mammalian ancestors were exposed to "basic" natural environments; they were not "pushed" into artificial situations to learn tool usage or language.

If we look at when apes first came about (10M years ago), say the average ape or humanoid since then lived to 30-40 years, and estimate the average generation length for apes at 20 years (which is roughly accurate according to the latest research), then since the first recorded apes there have been about 500'000 generations of apes and humans (12'000 generations for humans only).
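
(For reference, the rough arithmetic behind those generation counts at a 20-year generation length; the human figure implies roughly 240'000 years for our species:)

  # Rough arithmetic behind the generation counts above (20-year generations).
  print(10_000_000 // 20)  # 500_000 generations since the first apes
  print(240_000 // 20)     # 12_000 generations for humans only (~240k years)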

So now compare how we are training our models: GPT-3 at 175B parameters and billions of iterations of training (for GPT-4 we don't know). And again, extremely focused and specific training, feeding the entire human-generated corpus of language, mathematics, logic, etc. into it, and we get something that does pretty well at human language.

Humans have a "base model", as you put it, which really hasn't been trained for many generations and has mostly been exposed at random to external stimuli in an ad-hoc, unfocused way; no single individual has ever been exposed to even a fraction of a fraction of a percent of as much stimuli as a GPT model. So there is something different going on with our brains versus neural networks, and I think they can't really be compared at all: the mechanisms, the numbers, and crucially, the results do not match up in the slightest.


I was being slightly facetious in parts. The point I was driving at is that a human mind has had a very long time to do something perhaps akin to hyperparameter tuning, and we know that even imperceptibly minor changes in brain architecture can be the difference between struggling to put your socks on and being a genius that’ll be remembered through the ages. So those 500M years since the first neuron can’t just be written off.

Ultimately I agree with your final conclusion. You can’t really compare a LLM and evolved human directly. Even just a neuron in an ANN is nothing remotely comparable to a biological neuron. Of course it isn’t surprising that humans and LLMs are different given that they are built to do completely different things on fundamentally different hardware.

It just seems like many people are keen to write off the significance of GPT just because it's not yet quite as good at everything as the world's most marvellous example of engineering we all have in our skulls. We didn't even have transistors 75 years ago, but now we have a pretty believable facsimile (until you really interrogate it) of human intelligence that's improving a million times faster than evolution was ever capable of. But now the criticism is that it learns in a fundamentally different way to humans and doesn't generalise fast enough. It's true, but... really?


Using AI behavior in an argument about psychology seems like a slippery slope though.


Chomsky didn't talk about AI at all in his argument about human language acquisition. Poverty of the stimulus refers to the very limited language exposure children are given when learning to speak. It's an extremely powerful argument in favour of an innate capacity for language, and more specifically an innate knowledge of grammar, in the human brain.


I see now, thanks!


I often think about how something as simple as a cockroach is miles ahead of any robot that exists with regard to self-sufficiency. It can explore, find food, avoid predators, mate, never get stuck on stuff and tons of other behaviors with a brain and nervous system of only about 1,000,000 neurons. A fruit fly does it with 250k neurons.


I suspect that a neuron is far more complex than a perceptron.


Why does a cockroach have 4 times more neurons than a fruit fly? What tasks does the cockroach need to achieve that require more "intelligence" than that of a fruit fly?


Cockroaches are pretty impressive and live a long time, at least as far as insects go. Fruit flies measure life on the order of hours, but roaches can live for a year or more. I suspect you need a more complex nervous system to assess and survive in an environment as a prey animal for long periods of time, whereas fruit flies can likely just brute-force it.


Fruit flies can live for well over a month. The hours belief is a myth.


That’s not how evolution works. A cockroach might have 4x neurons because it just does.


This question makes me wonder if larger bodies have more complex physical/mechanical dynamics to solve for. Do physical laws become harder to get along with as your body plan expands? I wonder if that’s a factor.


Brain size usually correlates with body size, and cockroaches are much bigger than fruit flies.

I believe it’s something to do with operating the larger body.


> The recent successes from LLMs/Generative AI give a hint that things don't need to be so complex and mysteriously difficult to be quite like human intelligence

"The recent successes from LLMs/Generative AI give a hint that human "intelligence" may too frequently be culpably not so complex".


This American Life has an interesting episode on rats.

https://www.thisamericanlife.org/801/must-be-rats-on-the-bra...

Act One, “Fifty First Rats” (22 minutes), is particularly interesting in that it showcases the inventiveness and awareness of rats.


St Thomas Aquinas noticed that animals have these powers:

"Now we must observe that for the life of a perfect animal, the animal should apprehend a thing not only at the actual time of sensation, but also when it is absent. Otherwise, since animal motion and action follow apprehension, an animal would not be moved to seek something absent: the contrary of which we may observe specially in perfect animals, which are moved by progression, for they are moved towards something apprehended and absent. Therefore an animal through the sensitive soul must not only receive the species of sensible things, when it is actually affected by them, but it must also retain and preserve them."



I can picture it now.

An imagination of humans locked in cages while they experiment.



How else could they plot to take over the world every night?


The results of this research are not in mice, but in a life form much closer to humans.


The Cat was always plotting my demise. I could see it in her eyes.


* in mice



