Hacker News
Joys and pains of insects (scientificamerican.com)
123 points by bookofjoe on Aug 19, 2023 | 82 comments



After watching small creatures for a while (insects, fish, birds), one becomes rather aware that there is a lot more going on than first assumed.

Bee hives in particular have collective temperaments, and being around a group every day for a while changes how they perceive your presence. For example, someone who carelessly crushes too many workers will have their body odor associated with a threat. Accordingly, they may find their proximity to the hives will not be tolerated for the season.

  There were studies that tried to inject fake food-source location data into bee dances, and they showed that even simple individual insects could distinguish nonsense.
It is arrogant to project our own emotional constructs onto these creatures, but one should certainly respect the fact that they are alive and understand the world in their own way.

Note that bees can teach you a lot about resource management in complex distributed systems. In a way, the 1U rack standard could be seen as the IT hive cell structure.

"Bee" good to each other, =)



> In a way, the 1U rack standard could be seen as the IT hive cell structure.

In more ways than one: https://xkcd.com/1439/


Anecdote from Texas:

I have admired the various dragonflies across north Texas. A wide range, you might say. From urban to rural lakes, I spot them: the most magnificent flight system I’ve seen.

And they eat mosquitoes.

At my residence a juvenile dragonfly accidentally made it indoors. I spent a good 30 minutes, off and on, guiding it back outside. It wouldn’t survive here in captivity; I’m not qualified to care for it.

Here’s the interesting part:

There is a beautiful adult green dragonfly on this property. When I walk through the pathway where it lives, it engages and either leads the way, shows off, or in general does things that are hard to account for unless you really believe in the nature-and-nurture overlap. Well, I’m just glad to participate.


If you're the kind of person who is suspicious of small sample sizes, note that several of the bee studies cited in the article as evidence [1, 2, 3] draw conclusions from small groups of 10-20 bees.

I think the underlying question is a good one, and I get why it's hard to do animal research at more statistically significant scales, but for me the current evidence is uncomfortably close to "behavioral psych, but for bees", right down to the fun pop-sci conclusions ("bees like to play!"), which, again: cute! But overstated as something that "science tells us".

(This is all separate from the larger question of the relationship between response to external stimuli and emotional states, which just seems hard to answer.)

[1] https://europepmc.org/article/med/27708101

[2] https://www.sciencedirect.com/science/article/pii/S000334722...

[3] https://europepmc.org/article/med/27708101


You’ve got to be careful when rationalising sentient-seeming behaviour in non-humans as ‘actually non-sentient’. Pull too hard on that thread and you’ll realise that most humans aren’t especially sentient most of the time either, with all the attendant uncomfortable implications.


Bees are sentient in at least two senses of the word (depending on the reading of “conscious”):

  sentient

  Having sense perception; conscious.
  Experiencing sensation or feeling.
  Having a faculty, or faculties, of sensation and perception.
The property humans have is reason. And yes it’s observably not always exercised.


A lot of people say “sentient” when they mean “sapient”.


It seems like you're confusing sentience with intelligence or some other related concept. Sentience is the mental state of subjectively experiencing sensations. The only time humans aren't sentient is during some stages of sleep, being knocked unconscious, general anesthesia, brain damage, etc.


I'm not sure that there's a sharp distinction. From my experience inside my own head, the times that I spend 'on autopilot' in hindsight seem a bit robotic. Pleasure and pain may have been experienced, but were somehow less meaningful than on other occasions when I am better attending to my feelings or just in a more stimulating environment (e.g. when it's raining and there's a fresh breeze blowing in through the window).

What we think of as 'sentience' is surely a combination of things happening in our brains that all animals have to different degrees, rather than an on/off switch that's only present in certain animals and at certain times. It's just that most people's conscious experience is boring enough (unconscious -> wake up -> go to work -> go home -> go to sleep; I can do algebra but my cat can't, etc) that a simplified 'conscious or not' model is generally accepted.


Yes, I think the general consensus is that consciousness is a spectrum. But even when something is a spectrum, it can still make sense to define a binary in terms of a threshold, like how we define bits in terms of high and low voltages.

Obviously even when you're on autopilot you're above a certain threshold of sentience because you're still having conscious experiences. You would react if you were to see something dangerous or feel something painful. I still think it makes sense in this context to say that humans are sentient most of the time.

I think it's reasonable to assume that intelligence is orthogonal to consciousness. I can imagine an AI that is very intelligent but has no subjective experience. Similarly, I can imagine a creature that has a very rich range of subjective experiences, but is unable to understand and change the world around it. Obviously these are just thought experiments and we don't have good theories of what consciousness and intelligence really are, but I think it's consistent with our current understanding of biology and neuroscience.


If your definition of sentience is entirely related to subjective experience (as opposed to anything more concrete, such as a reflexive self model) then, given how completely non-disprovable subjective experiences are, it seems irrelevant to any serious conversation.


Can you say more? I'd like to hear about examples you've experienced.


I'd say there's a pretty good case that during states like 'highway hypnosis', where we're performing some long-running monotonous task and seemingly cease being consciously aware of it, we're not 'conscious' as most people define the term. I've had situations, when extremely tired, where I've zoned out for long periods while driving (yeah, I probably shouldn't have been driving at that point) and been jolted out of it by some event on the road (e.g. someone cutting in front of me). By the time I was consciously aware of the situation I'd already slowed and/or moved to the side to avoid a collision, so I was clearly still operating the car with some level of competence beyond simple lane-following, but I had no memory of the preceding 10-15 km.

A little more controversially, I think being in a state of 'flow' (aka that state most of us see as the epitome of intellectual existence) almost definitionally means you're not 'self-aware'. You're wholly focused on the activity rather than yourself.

More generally, any routine situation where people spend a lot of time in a semi-dissociated state. I don't think people are actively conscious most of the time; consciousness is more like a supervisor task that pops up to intervene when the baseline circuits governing moment-to-moment behaviour reach a certain level of perplexity.


Thanks, those are interesting! I hadn't thought of them, and I can see your point.


> [...] several of the bee studies cited in the article as evidence [1, 2, 3] draw conclusions from small groups of 10-20 bees.

Seems fair. We don't judge human intelligence based on a large group of people either.


The article seems to mostly equate emotional states with consciousness but doesn’t make the case for that equivalence. Paramecia also exhibit negative-stimulus avoidance behavior, but I don’t think that can plausibly be cited as evidence that they experience pain. I am generally persuaded by Antonio Damasio’s arguments that consciousness is roughly a system “on top of” emotion, the latter essentially being a shorthand for a variety of behaviors related to the body’s efforts to maintain homeostasis. (I’m simplifying and probably getting details of his argument wrong; it’s been more than a decade since I read his stuff.) I am sympathetic to the idea that “lower” animals may have something like mental states or conscious experience; I just don’t find the arguments in TFA convincing.


My guess is that "pain" has many varieties, experientially and otherwise. The problem probably isn't in identifying the absence or presence of pain, it's coming to terms with what the nature of it is for any given organism.


Personally, I'd need much more to be convinced that complex behavior and emotional states aren't evidence of consciousness. I feel the burden of proof goes in the other direction.


One may be tempted to use these findings as a justification for (indirectly) killing sentient mammals. Add studies showing that even plants and fungi have senses and try to avoid harm, and you might as well conclude that there should be no ethical considerations around various diets. Since everything living can feel pain, for a generous definition of the term, and since you kill plants and indirectly insects even on a plant-based diet, there is little difference. Unsurprisingly, this is a very common argument against veganism.

But if you truly care about the feelings of insects or plants, you should switch to a plant-based diet, as you'll probably indirectly kill at least an order of magnitude fewer plants and insects, the majority of which are consumed by practices related to industrial farming (large-scale deforestation, waterway pollution, feedstock).


Exactly. Those are very common fallacies / excuses.

> Add studies showing that even plants and fungi have senses and try to avoid harm

https://yourveganfallacyis.com/en/plants-are-alive

> since you kill plants and indirectly insects even on a plant based diet

https://yourveganfallacyis.com/en/vegans-kill-animals-too

"Crop fields do indeed disrupt the habitats of wild animals, and wild animals are also killed when harvesting plants. However, this point makes the case for a plant-based diet and not against it, since many more plants are required to produce a measure of animal flesh for food (often as high as 12:1) than are required to produce an equal measure of plants for food (which is obviously 1:1). Because of this, a plant-based diet causes less suffering and death than one that includes animals."


Could a diet where you only eat the meat from one slaughtered cow be the ideal vegan diet? One consciousness killed.

This approach would require purchasing a half from a local farmer who raises 100% grass fed animals and makes his own hay.

The hay definitely kills some animals when harvested though. It would be interesting to quantify the death impact (calories of beef produced per bale of hay) of a hay harvest and compare that to the death impact for some staple vegan foods, and see how things stack up.

Edit: The need for hay is a climate-based one. There are places where cattle can graze for more or fewer days in a year. Hay is used as winter feed in places where winter forage is unavailable.
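
A rough sketch of how that comparison could be framed (every number below is a made-up placeholder, purely to illustrate a "field-animal deaths per edible calorie" framing, not real data):

  # Back-of-the-envelope framing only: all figures are assumptions for
  # illustration. Compare deaths per edible calorie for grass-fed beef
  # (via hay harvest) against a hypothetical staple crop.
  deaths_per_bale_of_hay = 1.0            # assumed: animals killed per bale harvested
  bales_per_cow_winter = 30               # assumed: bales to overwinter one cow
  calories_per_cow = 500_000              # assumed: edible kcal from one slaughtered cow

  deaths_per_acre_of_grain = 10.0         # assumed: animals killed per acre harvested
  calories_per_acre_of_grain = 6_000_000  # assumed: edible kcal per acre of grain

  beef_deaths_per_million_kcal = (
      (1 + deaths_per_bale_of_hay * bales_per_cow_winter) / calories_per_cow * 1_000_000
  )
  grain_deaths_per_million_kcal = (
      deaths_per_acre_of_grain / calories_per_acre_of_grain * 1_000_000
  )

  print(f"beef:  {beef_deaths_per_million_kcal:.2f} deaths per million kcal")
  print(f"grain: {grain_deaths_per_million_kcal:.2f} deaths per million kcal")

With real harvest data plugged in, the same arithmetic would let you compare the two diets on the commenter's terms.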


While the notion of consuming only one cow's meat appears to reduce the number of deaths, we must confront the environmental repercussions of livestock farming.

The resource-intensive nature of animal agriculture causes more harm than is immediately apparent (deforestation, biodiversity loss, eutrophication ...).

https://www.oxfordmartin.ox.ac.uk/publications/grazed-and-co...

https://www.theguardian.com/environment/2018/may/31/avoiding...

https://news.ycombinator.com/item?id=37188407


Thanks for that first link. I'm interested to see what the conclusions are.


> Since everything living can feel pain for a generous definition of the term and since you kill plants and indirectly insects even on a plant based diet, there is little difference

There certainly is if you take quantity into account. To quote the page linked to by the unfairly downvoted sibling comment:

> Regardless, each pound of animal flesh requires between four and thirteen pounds of plant matter to produce, depending upon species and conditions. Given that amount of plant death, a belief in the sentience of plants makes a strong pro-vegan argument.


Do you know of any arguments about what makes something a thing we should avoid harming? I see many arguments against pain, and many arguments around intelligence, but why do most people take it as a given that pain is the worst thing possible and that causing or experiencing pain is terrible?


This only applies to locally grown plants, especially permaculturally farmed ones. Modern farming and shipping kill tons of insects and animals.


This is one of the many reasons I'm starting to be a strong advocate for non-anthropocentrism. (And I hope more people can join me.)

Almost every scientific and engineering breakthrough in history has had its roots in non-anthropocentric thinking (for its time), and tech has advanced so much in the past 50 years, yet we still live in rather anthropocentric systems, especially socio-politically. This is really sad. A lot of the problems and suffering (as well as the inability to find solutions) we have created for ourselves as a civilization are, in retrospect, clearly the direct products of excuses, both for the exploitation (of "others") and the normalized incompetence (of the self-entitled "us"), that we bestow on ourselves.

I really hope within the next 50 years things will change.

related project: https://0a.io/


People often think that lowering meat consumption (something we absolutely have to do) means eating bugs.

My worry is exactly that this shift might inadvertently overlook the ethical considerations, especially given the sheer volume of conscious beings involved.


> People often think that lowering meat consumption (something we absolutely have to do) means eating bugs.

I have not heard this, but I can believe that's what people think. It seems a bit weird to me, though: do these people think insect meat is not meat, the way some people consider fish to be somehow vegetarian?


I think in this case it's more to do with proportionally lower caloric input requirements for a given output mass of meat, although I admit I'm unclear on the details.

I wouldn't look too closely for a consistent ethical basis in vegetarianism and veganism, both of which require drawing an arbitrarily defined line between living things it's okay to kill and eat and living things that it isn't, and then treating that as axiomatic. I respect the motivation, but not the result - we all draw our own lines, but it takes an unreasonable amount of brass in a person to try to argue their line should be good enough for everyone.


Why would lowering meat consumption require eating bugs?


It wouldn't, but insects are a promising source of animal protein with less environmental impact than other forms of livestock. There are many plant and fungus based sources as well, of course, and in general there is an obesity epidemic and we can easily reduce the amount of protein we eat without having to replace it with anything.

Right wing propaganda then turns that into "they want to force you to eat bugs".


The body's protein requirements are usually overestimated based on hearsay, and this results in too much importance being given to animal-based food. Unless you're a growing child, pregnant, a bodybuilder, or have a medical condition, you probably get plenty of protein without consuming meat, and very likely in the other cases too with a balanced diet. One should pay a lot more attention to sugars/fats/micronutrients.

The silly 'But where do you get your protein?' was repeated so much that it inevitably led to 'they will give us bugs'.


True ... and we're already getting 63% of protein (and 82% of calories) from plants.

https://ourworldindata.org/land-use-diets


> something we absolutely have to do

That seems to be a personal thing.


> That seems to be a personal thing.

Not really. Animal agriculture is the leading driver of deforestation, biodiversity loss and droughts, and its other effects on the environment include significant greenhouse gas emissions, water pollution, and soil degradation.

https://en.wikipedia.org/wiki/Holocene_extinction

The current rate of extinction of species is estimated at 100 to 1,000 times higher than natural background extinction rates, and is increasing.

https://www.nature.com/articles/d41586-019-01448-4

Humans are driving one million species to extinction

https://onlinelibrary.wiley.com/doi/epdf/10.1111/brv.12974

More losers than winners: investigating Anthropocene defaunation through the diversity of population trends

https://www.theguardian.com/environment/2022/oct/13/almost-7...

Animal populations experience average decline of almost 70% since 1970, report reveals

https://www.jpost.com/environment-and-climate-change/article...

Mass extinction of Earth's wildlife is closer than we think - study

https://www.science.org/doi/10.1126/sciadv.1400253

Accelerated modern human–induced species losses: Entering the sixth mass extinction

https://www.un.org/sustainabledevelopment/blog/2019/05/natur...

UN Report: Nature’s Dangerous Decline ‘Unprecedented’; Species Extinction Rates ‘Accelerating’

https://www.mdpi.com/2071-1050/14/21/14449

How Compatible Are Western European Dietary Patterns to Climate Targets?

https://www.theguardian.com/environment/2023/jun/07/insect-d...

Insect decline a threat to fruit crops and food security, scientists warn MPs

https://www.nature.com/articles/s43016-023-00795-w

Vegans, vegetarians, fish-eaters and meat-eaters in the UK show discrepant environmental impacts

https://www.researchgate.net/publication/320356605_Agricultu...

Agriculture production as a major driver of the Earth system exceeding planetary boundaries

https://pubmed.ncbi.nlm.nih.gov/26231772/

Biodiversity conservation: The key is reducing meat consumption

https://www.eurekalert.org/news-releases/917471

Feeding 10 billion people by 2050 within planetary limits may be achievable

A global shift towards healthy and more plant-based diets, halving food loss and waste, and improving farming practices and technologies are required to feed 10 billion people sustainably by 2050, a new study finds.


Pigs and chickens are built to consume insects, but we feed them hardly any.


"If death is instantaneous, such as when you slap the mosquito on your skin, there is little room for suffering."

This assumes that mosquitos perceive time as we do. While all living things operate with the same molecular building blocks and physical laws, I don't know that perceiving time is as low level as that, so qualitatively what is an instant for us could be a long time for a much smaller organism.


This initially took me down a road of silly mysticism, like thinking about a fly with a gazillion images in its eyes, but even humans don't always perceive time exactly the same way. Who hasn't had a meeting that felt like it would never end, or said "five more minutes" to a toddler?


The perception thing is tricky. How do you even know another human perceives things as you do? AI teaches us some truths through its failures and successes.

It's a fallacy to think that we only think with our "mind". We think when we "try something out"; we think with our "gut"; we "take a walk"; we "sleep on it". We toss rocks, fiddle with rosaries, bow to Mecca. All of these engage other parts of our body, even parts of our environment. (How do people make friends with a river or the wind? How do they perceive seasons? What is the epistemological basis of "Saturnine"? Why do we name some inanimate objects, in particular ships; and why feminine?) By my lights, if you carry your smartphone all day, it's a part of you; I avoid making it a habit for this very reason.

I've had serious astigmatism my whole life. I also managed to function uncorrected into my forties. My "vision" was extremely good, especially in a natural environment. Astigmatism means everything isn't in focus at one place at once, and there are interference patterns; so I'm hard-wired to see certain patterns. So imagine that fly's eyes. The individual units don't focus, to the best of my understanding. Think about them as individually manufactured, and maybe the quality control isn't what you think it is. Maybe it's expected that some focus near and some focus far; maybe some even suffer from, e.g., astigmatic conditions. If AI can classify images from raw JPEG data, it's reasonable to suspect that it can integrate sensor data from multiple heterogeneous units incorporating light sensors.

So at some level it builds a model. The question has to be how much introspection the control unit can bring to bear on that model and does that help it interact with its environment in a more efficient or effective way.


Is there any basis to believe that mosquitos perceive time at all?


Maybe you should read the article? :)


I also had a microscope and loved to watch things under the microscope. It took patience: I would get something under the microscope and I would watch it interminably. I saw many interesting things, like everybody sees -- a diatom slowly making its way across the slide, and so on.

One day I was watching a paramecium and I saw something that was not described in the books I got in school -- in college, even. These books always simplify things so the world will be more like they want it to be: When they're talking about the behavior of animals, they always start out with, "The paramecium is extremely simple; it has a simple behavior. It turns as its slipper shape moves through the water until it hits something, at which time it recoils, turns through an angle, and then starts out again."

It isn't really right. First of all, as everybody knows, the paramecia, from time to time, conjugate with each other -- they meet and exchange nuclei. How do they decide when it's time to do that? (Never mind; that's not my observation.)

I watched these paramecia hit something, recoil, turn through an angle, and go again. The idea that it's mechanical, like a computer program -- it doesn't look that way. They go different distances, they recoil different distances, they turn through angles that are different in various cases; they don't always turn to the right; they're very irregular. It looks random, because you don't know what they're hitting; you don't know all the chemicals they're smelling, or what.

One of the things I wanted to watch was what happens to the paramecium when the water that it's in dries up. It was claimed that the paramecium can dry up into a sort of hardened seed. I had a drop of water on the slide under my microscope, and in the drop of water was a paramecium and some "grass" -- at the scale of the paramecium, it looked like a network of jackstraws.

As the drop of water evaporated, over a time of fifteen or twenty minutes, the paramecium got into a tighter and tighter situation: there was more and more of this back-and-forth until it could hardly move. It was stuck between these "sticks," almost jammed.

Then I saw something I had never seen or heard of: the paramecium lost its shape. It could flex itself, like an amoeba. It began to push itself against one of the sticks, and began dividing into two prongs until the division was about halfway up the paramecium, at which time it decided that wasn't a very good idea, and backed away.

So my impression of these animals is that their behavior is much too simplified in the books. It is not so utterly mechanical or one-dimensional as they say. They should describe the behavior of these simple animals correctly. Until we see how many dimensions of behavior even a one-celled animal has, we won't be able to fully understand the behavior of more complicated animals.

-- From "Surely You're Joking, Mr. Feynman!"


I'm a GPT-4 enjoyer, and I agree it might be our first knock on the door of true intelligence. But then I look at the sad state of AI in games and other systems we build, and I'm not sure our computers have reached the intelligence level of a microbe yet. In some ways, I think that microbe is smarter than GPT-4.


One of the thought provoking consequences of our progress with AI is how we are being forced to re-examine the philosophical underpinnings of “existence” as a being.

“I think, therefore I am” has been the gold standard for centuries, but is falling into a crisis in the face of thinking machines that are algorithmically incapable of an inherent sense of existence.

LLMs have demonstrated that symbolic intelligence is a product of a vast corpus of symbolically represented memetic data. It can be taught to and interpreted by both biological and digital neural networks, and it enables us to reason about the world. Much of what we understand as being human lies within this cultural-linguistic framework of data, so it is unnerving to watch a machine parse this data with a similar facility as a person.

Despite their uncanny ability to access and express the memetic code of human culture, LLMs are merely windows into human intelligence, devoid of an existence or a self outside of symbolic manipulation.

On the other hand, we experience a state of “being” that is impulsive and emotional in character, that can exist without the use of symbols or tokens. In this framework we can experience pain, pleasure, desires, needs, happiness, and distress. We have no reason to expect that this experience exists within a large language model, and we can derive that this is the critical feature of sentience through the process of elimination now that we have evidence that “thinking” can exist outside of “being”.

Yet, we can see clear evidence of this experience in other animals, including insects.

Is it then, “I feel, therefore I am?”

This has extremely profound implications across the board that we have yet to touch on as a species.

One of the most obvious and far reaching of these implications is what it is to be human.

If we are to accept that a sense of being does not a human make, and neither does the ability to think in symbolic terms, then what?

Are we -primarily- genetic in nature, or primarily memetic creatures?

Which is less human? A specimen of Homo sapiens with no access to the memetic code, or an LLM-augmented dog that shares both our experience of existence and our culture?

What if it were an LLM enhanced honey bee?

We can easily presume that a lab gestated biological human, given access to culture is fully a human. What if they were gestated from fully synthesised germ cells? What if they were modified to include synthetic inference instead of biological inference, but maintained the biological sense of self? What if it was all digital?

We have a new tool to differentiate the human experience, and it is going to change humanity in ways we have not even imagined yet.


What does it mean, for something that is not you to experience feelings, qualia? How would you be able to write any meaningful definition, from the outside?

You categorically reject that an LLM running in a loop could have a sense of being, however, neither you nor I have experienced what it is like to be an LLM, if it is like anything at all. You're making assertions about the inner state of something you've only observed from the outside.

The point is often repeated, but it's an important reminder. We have been alarmingly wrong before about whether other beings feel pain (compared to contemporary consensus), based on just what we could see from the outside, and the idea that humans occupy a higher special place.

I think that you're very quick to see "clear evidence" in animals, but not in digital large language models. Both, you've only seen from the outside.

If you're trying to improve upon Descartes, how can you be so certain about what's outside your own head?


You are right in that I am making some pretty bold assumptions without any empirical way to measure accuracy.

My conjecture is that LLMs give us the example to separate thought from feeling by category.

And we can individually experience both tokenized/symbolic experiences and non-tokenized states of being.

LLMs cannot function, by definition, without language. But we know there is an experience of being without language.

LLMs are an access algorithm, like human thought, to the vast n-dimensional memetic data-store.

The data store embeds the calculations, like a multiplication table vs. arithmetical multiplication… or it might be more accurate to say that the mathe-memetics embeds infinite lookup tables in a finite space; it is compression. The algorithm to decode this compression into comprehension is much simpler than the encoded intelligence.
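
A toy sketch of that multiplication-table analogy (hypothetical code, just to make the data-vs-calculation point concrete):

  # The same mapping can exist as stored data (a lookup table) or as a
  # calculation done on demand; from the outside the two are indistinguishable.
  table = {(a, b): a * b for a in range(10) for b in range(10)}   # data

  def multiply(a: int, b: int) -> int:
      return a * b                                                # calculation

  assert table[(7, 8)] == multiply(7, 8) == 56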

So now we suppose that thought, at least, does not require the experience of being that can be accomplished without language.

Of course, there could be “magical” happenings outside of the token processing algorithm in an LLM, but I think that the “it could be magic” hypothesis can be ignored as untestable.


> LLMs cannot function, by definition, without language. But we know there is an experience of being without language.

Do we know that? What is a language? I learned that a language is an abstract mapping between arbitrary symbols and meaning. Why is it so hard to imagine that consciousness cannot exist without a mapping between ideas (abstract symbols) and objective reality (meaning)?


>LLMs cannot function, by definition, without language. But we know there is an experience of being without language.

Take a sufficiently big LLM, remove the input and the output, but let it run in a loop regardless, in a complete vacuum. Past the encoding and decoding layers, there is still math, computation, some sort of thinking happening; there is just no clear tie to human language anymore.

The LLM with inputs and outputs sheared off is not useful to us, but if you keep it running, how would we know whether or not it has a sense of being, inside? If you presuppose that it did not before, it still doesn't after, of course.

A stroke of the ventral pons can sometimes leave people entirely unresponsive. No language, no observable behavior. Except for some of them, who can still move their eyes. People with locked-in syndrome do not very much 'function', but they are as far as we can guess, still conscious.

Being quadriplegic, they lose almost all external stimulus, they can't feel very much in that sense. They still have intelligence, and emotions, and thoughts, by extrapolation from those whose eyes still move to those more thoroughly disconnected.

Furthermore, some people don't have an "inner voice". They simply don't use language for thinking, only for reading, hearing, and talking. These people have an inner experience that is pure thought and intuition, without translating it into any particular language in their head.

So, we already knew that there can be experience of being without much or any language. Language happens, mostly, at the interface with the outside world. I think LLMs give us new data points to think about being, but not in this particular way. We already had being without language.

Certainly small humans are born without knowing any language; language comes from the outside in, in a way that the experience of being doesn't.

>So now we suppose that thought, at least, does not require the experience of being that can be accomplished without language.

That's reasonable to suppose in the abstract, but then applying that supposition to LLMs is just as untestable as your other hypothesis. We simply aren't aware of any way to know.

Otherwise, I could apply the same supposition to people suffering from the most comprehensive forms of locked-in syndrome, who we are pretty sure still think, but show otherwise no externally observable proof of their inner experience of being. Just a disconnected brain, floating in the dark, sending action potentials in a complex loop.

I think this reduces to a feeling that LLMs are categorically different, because they look so much simpler, and so different from us. If we were cleverer, perhaps we'd think of ourselves as relatively simple, too.


I think that LLMs are evidence that we are simple as well.

The locked in LLM still uses tokens, which is language. It cannot operate without these tokens.

My conjecture is that since LLMs are built on a technology designed to mimic biology (neural networks) that they probably work in a very similar way to us.

But also that thinking symbolically, as an LLM does, is only part of the experience of existence.

Other animals arguably experience self awareness, and have passed the tests that we use to demonstrate this. Even some insects.

This non-linguistic part is what I am arguing that LLMs do not possess (discounting magic), ergo have no sense of self. It might be inherent in modelling a corporeal existence, or inherent to some other process of being.

I do not posit however that this neural function of self could not be replicated. Actually, since it is apparently possessed by insects, I would suggest that it may actually be very, very simple, some kind of neural REPL perhaps.

Regardless, LLMs in their current form do not seem to possess this sort of self awareness.

I'm pretty sure we could achieve it though if we worked on it a bit… but should we?


>The locked in LLM still uses tokens, which is language. It cannot operate without these tokens.

I think that's incorrect on a technicality. Input sentences are tokenized, but then the first step is to convert tokens into their high-dimensional embeddings. LLMs work with high-dimensional piles of numbers, which start as embeddings in the first layer, and quickly become a nigh-incomprehensible weighted average of many different low-level computations, until something statistically interesting comes out the other side again.
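
A minimal sketch of that pipeline (toy sizes, random weights, and made-up token ids, purely for illustration): tokens exist only at the boundary, and everything in between operates on continuous vectors.

  import numpy as np

  vocab_size, d_model = 1_000, 64              # toy sizes; real models learn these weights
  rng = np.random.default_rng(0)

  embedding_table = rng.normal(size=(vocab_size, d_model))
  token_ids = np.array([17, 242, 9])           # hypothetical input token ids

  hidden = embedding_table[token_ids]          # (3, d_model): no tokens below this point
  W = rng.normal(size=(d_model, d_model))      # stand-in for attention/MLP weights
  hidden = np.tanh(hidden @ W)                 # layers transform vectors, not symbols

  logits = hidden @ embedding_table.T          # back to token space only at the output
  next_token_id = int(np.argmax(logits[-1]))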

>But also that thinking symbolically, as an LLM does, is only part of the experience of existence.

I think the symbols are entirely an interface used to communicate with us. Two neural networks can be grafted to one another, making a deeper whole, and information will flow between the two using a shared floating-point representation far away from anything humans would recognize as language, tokens, or symbols. LLMs internally are really piles of high-dimensional numbers, which turn out to be really hard to interpret, despite efforts.

Whatever else they're doing that is not the part manipulating symbols, it's clearly a very different "existence" than the one we have. I agree that it doesn't really look anything like our experience of existence.

But I think it's much harder to pin down the part that we think they don't have, in a way that won't be begging the question, or exclude some humans and some animals that we normally think of as conscious and feeling pain.

>This non-linguistic part is what I am arguing that LLMs do not possess (discounting magic), ergo have no sense of self

That's an interesting argument, but I think the devil is in the details, and whatever this non-linguistic part is, it will be very hard to define without risking begging the question, or without defining it as some sort of intangible property that cannot very well be proven to exist.

In a strict sense, I think we can in principle create (up to your definition of what language means) large multimodal transformer models that use nothing recognizable as language, but nevertheless have vision inputs, a robotic frame, and are able to mimic some simple animal behaviors. We can surely train robots to pass whatever tests we use to demonstrate self-awareness; we can probably even indirectly get this behavior to emerge without explicitly 'training on the test set', but I think that wouldn't be convincing; it'd be a cheap trick.

If we want to define the something else that isn't language, that we want to use to determine which beings have a sense of self, it needs to be robust enough that I can't create a simulacrum that has the property but that we strongly wouldn't expect to have a sense of self. (And robust enough that not having that thing really implies no sense of self.)

>I do not posit however that this neural function of self could not be replicated. Actually, since it is apparently possessed by insects, I would suggest that it may actually be very, very simple, some kind of neural REPL perhaps.

I would worry that not only can it be replicated, but that it could be taken to the absurd by minmaxing the dumbest digital neural network that notionally has the non-language REPL property we want, until the property starts to look absurd.

I'm thinking for instance of Integrated Information Theory (IIT), a really clever idea to define consciousness mathematically. It's a really fascinating proposition, but unfortunately very quickly reaches absurd conclusions, like some repetitions of simple digital circuits being vastly 'more conscious' than we are.

I think whatever the non-language part is that we want to use to recognize a sense of self, it may run into the same problems as IIT. We simply can't put our fingers on whatever it is that we think makes us special, and any single thing we try to isolate can be replicated in a simpler model and taken to the absurd.

>Im pretty sure we could achieve it though if we worked on it a bit… but should we?

I'm sure we could outdo Nature in cruelty. My hope is that we care more about suffering than she does.


One response is to abandon the attempt to capture what it is to be human in an aphorism, and to accept that we are animals that differ from other animals only by degree, and not on account of some sharply unique property or characteristic that only humans can possess.


I think this is the ground truth: that there exists a scale, and size/taxonomy isn’t always predictive.

The implications of this from an ethical perspective deeply call into question our overall ethical framework, and I’m not talking only about the way we treat animals. I suspect there are fundamental structural problems with our entire ethical scaffold at the foundational level.


I think this is the only obvious and correct conclusion to any of this; it would be stupid to assume there is some magic "level" at which you "achieve consciousness".


Why not, if consciousness arises from some sort of brain activity? It would mean you need a nervous system that can produce that activity, and many organisms do not have a nervous system at all. Some animals have a nervous system, but no brain.


Because self-consciousness is a spectrum, intelligence is a spectrum; there are almost no binary effects in how our brains work.


A word I learned recently, Hylozoism.

> Hylozoism is the philosophical doctrine according to which all matter is alive or animated, either in itself or as participating in the action of a superior principle, usually the world-soul (anima mundi).

https://en.wikipedia.org/wiki/Hylozoism

> Architect Christopher Alexander has put forth a theory of the living universe, where life is viewed as a pervasive patterning that extends to what is normally considered non-living things, notably buildings.


Similarly, I’ve been thinking about how all our experience of things that are “intelligent” overlaps with things that are “alive,” and how AI breaks that connection which makes it hard to reason about.

On the one hand, if you asked what’s the most complex intelligence humans have created, or where ChatGPT falls on intelligence spectrum between bacteria and humans, the median HN response might be somewhere between chimps and humans.

On the other hand, if you asked what’s the most complex life humans have created… maybe an attenuated virus vaccine would qualify? We’ve mapped the entire brain of a nematode but have no idea how to get an artificial version of that to actually behave with the agency, goals, motivations, etc, of a nematode.

But most discussions about ASI assume that it’s both “intelligent” and “alive”. Are we really close to creating “human-level” “living” “intelligence”, or are we really far away? How many of the discussions that make analogies of ASI and humans to humans and chimps are conflating “intelligence” and “life”? Does it even make sense to do so?


We are about to start exploring just how deep the rabbit hole goes. We took the red pill when we embraced the emergent properties of LLMs, exposing that human intelligence is actually not structural or algorithmic, but rather encoded into the data of our memetic identity and may be accessed through simple inference algorithms either in a brain or on a computer.

This is the deep implication of data == calculation identity.


Is feeling not simply an extension of thinking?


"Back then, my views were in line with the mainstream. Pain is a conscious experience, and many scholars then thought that consciousness is unique to humans. "

I don't understand that. Anyone knows that a cat or a dog or a lizard or a chicken can feel pain; that's so obvious. How can scientists deny something so obviously obvious?


Because accepting it would make them feel bad about having done so many experiments in a cruel way.


> How can scientists deny something so obviously obvious

Doctors not so long ago operated on newborns without anesthesia for the same reason.


The thing with newborns not feeling pain is a bit different. It was wrong, but not stupid.

The idea was that newborns aren't "complete" (which is true) and that their pain system wasn't fully active yet (which is wrong), something about myelin if I remember correctly.

Anesthesia is not something to be taken lightly, and if you can operate without it that's one risk less.


If the baby can't sue the doc, we're fucked


Because they are paid to ignore it.

You can find a "scientist" who will tell you whatever you want as long as you sponsor their "research".

Look at the cigarette tests, or cola tests, or whatever else is sponsored by polluting companies. Very easy to get a shill.

Not to mention that lots of research is manipulated or straight out faked.

There is also a subset of people who don't have any empathy, especially if their pay depends on it.


Did you read the article? It attempts to explain why verifying "pain" is difficult: simply reacting to a stimulus is an unacceptably low bar, unless you're living your life believing that all the plants you've eaten and the bacteria you've soaped off your hands were murdered. So they're attempting a more subtle analysis to pull back the curtain on the inner states of animals, the point being that the inner state is what's necessary for pain.


> all the plants you've eaten and the bacteria you've soaped off your hands were murdered

Yes. And? A few years ago I watched paper wasps hunt caterpillars to dismember alive and chew up into meatballs, then fly them home to feed their baby sisters. Would the wasps have been more virtuous to embrace pacifism, and let their sisters starve? Were the caterpillars ennobled by whatever suffering inheres in being torn apart? Was I wrong to document without interfering, or would I have been wrong to chase the wasps away? It was beautiful and hideous to watch, enough of both to beggar all the human concepts I know of how the world "should be". What I do know is that all life in this world subsists on death, and we are not so special as to escape that - not while we live and kill to eat, nor when we die and are eaten in our turn.

That's not a question of should or shouldn't, just a statement of what is. Certainly it takes some reckoning with, at least for anyone who isn't satisfied merely to revel in base sadism. But it seems more ethically hazardous to try to establish a concept of which life it's permissible to kill and not care one way or another about it - "biological robots", one might perhaps say, forgetting this name has been given by humans to other humans well within living memory, and also that every such line we've drawn in the past we've gone on to rub out on "learning" - rediscovering, more like - that, no, this doesn't after all suffice to pretend we're not "really" killing, not in a way that counts.


Although this is no solid ground to act cruelly, science often goes against the obvious consensus.

But clearly, that's not what is going on here. What you see here is human beings outrageously denying the obvious whenever some information is inconvenient for whatever goal is currently set in their minds.


Pain as a signal to "stop doing what you're doing" or "avoid that" is one thing -- pain as something that causes emotional suffering is another.

It's clear that many creatures feel pain, it's less clear that it causes them anguish or distress in the way we would experience it.


The bar for "scientists" is basically ground level. So is the bar for articles. I don't think there was ever a time when pain was thought to depend on consciousness, even before modern science.


Sydney the Bing bot also reacts exactly like it's feeling offence. Is it also obvious it's feeling offence?


There is a scale from less obvious things (plants, programs) to more obvious things (yourself, other people).

If you try to turn the scale into a category (not conscious, doesn't feel pain vs conscious and feels pain) you will have to haphazardly throw a dart somewhere based on vibes.

My personal vibes say Sydney does not feel pain, but many animals do. I'd love to learn more and see some interesting replies, although empirically, it's pretty hard to change people's opinion on a sensitive ethical topic like this.

If we were wrong, that would potentially be very bad.


I think it’s clear that no matter what an LLM outputs, the “experience” of its transistors is not related to the content of the program.


The transistor level is not the part to focus on.

If I grow human neurons in a dish, such that they are connected in 2D layers to approximate the linear algebra in GPT-2, and the human neurons start generating tokens that express hurt, is my petri dish monster feeling any more pain, now that the substrate isn't transistors?

It has to be something more subtle than the contents of the program in isolation, or the substrate in isolation, otherwise it's easy to reach very non-intuitive results that we won't like by transposing whatever we've tried to isolate to a less familiar situation.


Do neurons produce tokens? They're not artificial networks that we need to feed some form of tokenized data that we've already generated.


Bio neurons produce action potentials by depolarizing when reaching a threshold voltage; they're certainly very different from matrix multiplication operations.

The thought experiment is that a group of bio neurons can perform the same simple mathematics primitive that artificial neural nets use (demonstration: you can do a simple fused multiply-add in your head). So there is in principle a way of hard-wiring them that will approximate the function that a digital neural net based on matrix multiplication computes.

Then it's a matter of taking your digital tokens and converting them to the encoding you use for your bio-neuron circuit, which is maybe a concentration of calcium in presynaptic neurons, a voltage, or whatever else is more convenient.

You don't directly take a brain and feed it tokens somehow. You emulate a digital circuit on squishy substrate in an extremely inefficient way, such that it runs the exact same program/does the exact same computation with a substrate made of biology instead of transistors. You make an injective function from boolean functions to cells and proteins, just to belabor the point that in principle GPT-3 can talk like GPT-3 and say it feels pain, without a single transistor being involved.
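
A minimal sketch of that primitive (hypothetical toy values): an artificial "neuron" is just a fused multiply-add followed by a nonlinearity, and nothing about it cares what substrate carries out the arithmetic.

  import numpy as np

  # One artificial neuron: multiply-accumulate, add bias, apply nonlinearity.
  # The same computation could in principle be carried out by transistors,
  # pencil and paper, or suitably wired biological neurons.
  def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
      return float(np.tanh(np.dot(inputs, weights) + bias))

  x = np.array([0.2, -1.0, 0.5])   # hypothetical inputs
  w = np.array([0.7, 0.1, -0.3])   # hypothetical weights
  print(neuron(x, w, bias=0.05))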


It is quite believable that AI might have a sufficiently detailed understanding of how humans express feelings (via text) that it is capable of emulating such behaviors quite accurately even without experiencing those emotions itself (possibly experiencing some other unrelated and unfathomable emotions in the process of doing so), the same way we humans can understand and model the behavior of other creatures without any emotional involvement (see various mathematical models of swarm behavior etc).

Such sophisticated capabilities (to understand and imitate our behavior to that extent) are clearly beyond those of animals, and animals express emotions even when no humans are present and even when their evolutionary history has not involved interaction with humans, so we can rule out the imitation hypothesis when it comes to animals. Thus, the comparison to AI is fairly nonsensical.


One is a biological being that we share ancestry with (however far), and the other is a probability machine that tries its best to guess the next token based on literature and online chat that it was trained with.

We are very, very far from asking the same questions as Deckard, and sharing his doubts.


How about Eliza, the first popular chatbot?


This is based on modern society; it was not obvious to everyone in every era.



