My CS teacher in high school gave us a lesson about the horseshoe crab's eye, which apparently does not see anything larger than its field of view, allowing the crab to consume smaller organisms while ignoring larger ones. We used a neural network model in Excel for this.
He was (and I hope he still is!) a great person, who taught me to be curious and insightful in the computer science field. He taught me about Donald Knuth, TeX, and many more things.
Does not see anything that is larger than the field of view?
I can’t conceptualise this. How? Surely this is the same as humans where if you had a big enough uniformly lit object in front of you, you couldn’t “see” it?
At least the model we used behaved like this. It rendered objects larger than the eye's field as just a line, while smaller objects were visible as a whole (like the fill tool in MS Paint). Objects that fit in the eye were colored; larger ones were not. However, this was a neural network model (shown in Excel as a matrix), and I wonder where he got it from. When I wrote the above, I tried to find it but couldn't.
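For what it's worth, the Limulus (horseshoe crab) eye is the textbook example of lateral inhibition (Hartline's work), and a minimal version of that mechanism reproduces the behaviour described: an object that fits inside the receptor array keeps an interior response, while a uniform field larger than the array is suppressed entirely (only its edges, which fall outside the array, would respond). This is a sketch of that idea, not a reconstruction of the Excel model, and the weights are arbitrary:

```python
import numpy as np

def lateral_inhibition(stimulus, inhibition=0.5):
    """Each receptor's output = its own excitation minus a fraction of its
    two neighbours' excitation (Hartline-style lateral inhibition)."""
    padded = np.pad(stimulus, 1, mode="edge")
    neighbours = padded[:-2] + padded[2:]
    return np.clip(stimulus - inhibition * neighbours, 0, None)

small = np.array([0, 0, 1, 1, 0, 0], dtype=float)  # object inside the field
large = np.ones(6)                                  # object filling the field

print(lateral_inhibition(small))  # interior still responds: object "seen" whole
print(lateral_inhibition(large))  # all zeros: a field-filling object vanishes
```

With `inhibition=0.5` a uniform field cancels itself out exactly, while anything with contrast inside the array survives, which is at least qualitatively the behaviour the Excel model showed.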
He starts his lecture assuming: "suppose we start with an ancestor who didn't really have an eye at all but just a single sheet of light sensitive cells . . ."
This seems like a very basic condition, and that going from 0 to 'just a single sheet of light sensitive cells' is almost nothing. But of course that is not the case. Before that you would need (+++):
1) Photoreceptor proteins 2) A functional nervous system or signal processing pathways 3) Machinery to translate the absorption of light into an electrical signal 4) A system for coordinating these signals with other parts of the organism . . .
I'm not sure that's correct for very basic multicell organisms. I'm no biologist, but I believe primitive systems wouldn't need these types of advanced pathways when light sensitive cells could just produce signaling compounds to drive the behaviour of the rest of the organism without necessarily requiring a centralised nervous system, akin to how single cell organisms orient themselves in their environment.
This is correct. Rhodopsin, for example, is a G-protein coupled receptor, which means that activation of the receptor triggers G-proteins inside the cell which can activate or inhibit any number of cellular processes.
I'm having trouble figuring out what your objection is.
If we're going to study an evolutionary process, we have to pick a starting point. "How an eye evolves from a single sheet of light sensitive cells" is particularly relevant because creationists like to claim that "half an eye isn't useful, so it couldn't have evolved".
"How does a single cell evolve light sensitivity" is not a particularly hot button topic. Creationists don't drone on about plants.
I'm not sure about that. At least in my experience there are two types of creationists. The first have integrated disbelief in evolution into a core component of their religious beliefs. This type isn't going to listen to any argument for evolution, so I don't know why time should be spent making arguments for them.
The other flavor are the ones saying "wait a second microbiology is really complex, I'm not sure the standard evolutionary process cuts it here."
Saying, "well let's just assume microbiology isn't a problem", isn't going to be compelling to them.
It is an interesting endeavor, but with respect to the whole creationism vs evolution discourse it feels like a complete waste of time.
Creationists don't argue things are generically "too complicated", they argue that there are structures in nature that aren't discoverable by progressive mutations. As Dawkins explains in the video, eyes are an extremely commonly cited example. I've never heard anyone argue that cells couldn't evolve photosensitivity, but I have heard plenty of people argue that eyes only work when they're fully functional, which is what Dawkins is explaining as untrue here (as is TFA). I don't find the idea of debating creationism especially likely to be fruitful overall, but answering the question "How does evolution explain the structure of eyes?", e.g., for an adolescent who grew up in a fundamentalist household, seems entirely reasonable to address.
Aha! But you have neglected steps as well, fetid ignoramus!
You portray such a +++ animal as simple and obvious, but in fact you would need (++++++):
1) adherins and cadherins to bind this sheet together 2) an organelle to produce and regulate the cellular proteins 3) a method to encode those proteins 4) a method to replicate and propagate that encoding so that new proteins can be encoded 5) start and stop codons and a method of duplication or recombination errors to allow for new copies of protein encodings to be created separately from old copies...
You could have 2 - 4 for other sensory cells, like heat or touch, and then be a single mutation away from sensing light.
IIRC, some snakes have heat-sensing cells arranged in pits to give them a directional heat sense. If you did that with visible light instead of IR, those pits would be what halfway to an eye looks like.
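The "pit as half an eye" idea can be sketched directly: receptors lining a cup each face a different direction, the walls block everything else, so which receptor fires encodes the bearing of the source. A toy model (the geometry and acceptance angle here are made up for illustration):

```python
import numpy as np

def pit_response(source_angle, n_receptors=8, acceptance=np.pi / 6):
    """Receptors line a pit, each facing a different direction; the pit
    walls block light arriving more than `acceptance` off a receptor's axis."""
    facing = np.linspace(-np.pi / 2, np.pi / 2, n_receptors)
    lit = np.abs(facing - source_angle) < acceptance
    return facing, lit

def estimate_direction(source_angle):
    # Bearing estimate = mean facing direction of the lit receptors.
    facing, lit = pit_response(source_angle)
    return facing[lit].mean()

est = estimate_direction(np.deg2rad(30))  # light from 30 degrees to the right
print(round(np.rad2deg(est), 1))          # ~25.7: coarse, but directional
```

With no pit at all (acceptance of pi), every receptor is lit regardless of where the light comes from and the mean is always zero: a flat sheet detects light but carries no directional information. The deeper the pit, the narrower each receptor's view and the finer the estimate, which is exactly the usefulness gradient the "half an eye" argument relies on.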
Not at all... If the question is "how could an eye evolve" it's reasonable to start from a baseline of a complex organism that has sensory cells but lack eyes or even light sensitive sensory cells.
A cell sensing light can use the same type of nerves etc as other sensory cells, so there is no need to explain how a cell sensing light + a nerve + a central nervous system evolves in one step.
If you wonder how a nerve could evolve for example, that is a different question. If you have two cells and they can exchange chemical signals, which is useful in itself, you have the start of a nerve.
In the book that talk is based on (I think it might be The Blind Watchmaker?) he says (paraphrasing) that he's using the example to illustrate a process, and the exact start and end points don't really matter. If you prefer an earlier starting point, that's fine. Start with a single cell that can respond to light, or even earlier. The process still applies.
If you're going to make that objection, you might as well head straight up the chain and aim for the biggest example of all: how does the first self-replicating molecule appear?
For example, for #3, you say electrical signal. But cells normally use chemical, not electrical, signals.
For #1- nope, just need an enzyme or other thing that makes a pigment molecule, and something that can detect a chemical (re-use of an existing protein)
For #2- no nervous system, and cells already have many internal signal processing systems.
For #4- those already existed.
Biology mostly just copies and re-uses systems that already exist. It's still incredible there was a path of mostly random events that led to fully formed eyes, though.
This is probably more a physics/CS question than a biological one, but how far off are we from having physics simulators good enough to try the types of experiments MIT did with the eye but for entire organisms? Presuming that will one day be possible (i.e. the simulation takes earth's physics as inputs, and spits out organisms vaguely similar to those found on earth), then we could presumably enter the physics of exoplanets and find out what their creatures might be like.
I think we're quite far off if you want to model full organism complexity. But if you want to answer a research question you can model simpler versions today. Like this research team did around how vision evolves.
By the way, as of recently, we were able to model C. elegans (a nematode) in 3D with all neurons and neurotransmitters. It reacted to virtual stimuli just like a real worm (https://www.nature.com/articles/s43588-024-00738-w). So single organisms are already possible. But the evolution of these entities in 3D will take us a bit more time, is my guess.
Can the virtual flatworm’s virtual actions be mapped to the physical sensors of a flatworm in the real world? Can we get the real world’s flatworm to control any part of the virtual one?
Can these two things interface yet is the question (virtual to real bidirectional interface), especially since you are suggesting we have a clone of it.
If we do this experiment, what does that say about something doing an experiment on humans (humans control humans inside video games).
Sounds insane, right? Why would we ever do this? Well … we have this perfect digital clone of a flatworm, what else are we going to do? There's a lot of evidence that humans would absolutely go down this rabbit hole to its logical conclusion.
One word: Teledildonics
Anyway, the flatworm that is born into such an experiment would never know, or it would just be useless to know. :shrugs:
That's quite a cool project, but a thought: are the simple rules you started from really simple enough? Take the rule that offspring carry the traits of their parents: exactly which traits do they carry, and how are they carried? Are those rules simple enough?
So they carry neural nets and evolve using the NEAT algorithm, which introduces slight variations: new connections and new nodes.
Over time this allows the fish to develop basic behavior such as searching for food, navigating a maze, etc.
The only other 'useful' gene right now is around herbivore/carnivore digestion (0 to 1), which allows them to extract more energy from either meat or plant-food. Most of the time they actually develop specific behavior according to this gene.
I don't really code in what offspring need to do beyond having slight variations to both factors described above, it kind of evolves randomly into more complexity (neural net + behaviors).
Also important: I needed to program an energy decay system and death if they run out. So basically: an energy source, energy decay, and evolving neural nets that give an organism the possibility to survive and evolve if it gets more energy. And voilà, life emerges.
Working on plants now, and again simple rules: Neural nets in the plants to mimic evolution of complex biological systems that evolve from generation to generation. And a light-based energy source and light-based energy capture system (leaves). My current (preliminary) experiments show that the plants start to look like trees over time to maximize energy capturing compared to competing plants.
Looking to publish this once I have it a bit more refined.
So the special thing about NEAT is that it allows the shape of the neural net to change. It can also do some sort of mix between two neural nets (like sexual reproduction), but I haven't implemented that.
Natural selection in this sim just happens by itself: there is a limited amount of food and only the best adapted ones survive. So the best performing neural networks duplicate themselves and create small variations of themselves. This part is not connected to the NEAT algorithm; I've just seen that NEAT performs particularly well versus more fixed-structure neural networks.
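For reference, NEAT (Stanley and Miikkulainen's NeuroEvolution of Augmenting Topologies) complexifies networks through exactly the two structural mutations described above. A stripped-down sketch of those two operations, omitting NEAT's innovation numbers, crossover, and speciation:

```python
import random

def mutate_add_connection(nodes, connections):
    """Structural mutation 1: connect two previously unconnected nodes."""
    src, dst = random.sample(nodes, 2)
    if (src, dst) not in connections:
        connections[(src, dst)] = random.uniform(-1, 1)  # random initial weight

def mutate_add_node(nodes, connections):
    """Structural mutation 2: split an existing connection with a new node.
    NEAT sets the incoming weight to 1 and the outgoing weight to the old
    weight, so the network's behaviour is initially almost unchanged."""
    (src, dst), weight = random.choice(list(connections.items()))
    new = max(nodes) + 1
    nodes.append(new)
    del connections[(src, dst)]
    connections[(src, new)] = 1.0
    connections[(new, dst)] = weight

nodes = [0, 1, 2]                          # e.g. two inputs, one output
connections = {(0, 2): 0.5, (1, 2): -0.3}  # weights keyed by (src, dst)
mutate_add_node(nodes, connections)
print(nodes, connections)                  # one edge replaced by two via a new node
```

The weight-preserving trick in `mutate_add_node` is what lets topology grow without immediately wrecking a lineage's fitness, which is why complexity can accumulate gradually.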
This is exactly it, _many_ such simulations and evolutions have been done already, all with incomplete/inaccurate world models, leading to predictions that _might_ be meaningful. I recall a coworker used reinforcement learning to teach a robot to walk — but instead it learned to abuse an error in the physics simulation and shuffle around in a ridiculous manner.
10 years ago when I was still in academia, modelling more than 8 water molecules solvating an ion started to get computationally very expensive lol
Depends on how accurate you want your model to be. Pie-in-the-sky thinking: we need quantum computing before I can imagine these kinds of simulations making sense.
More generally, ML is good at approximating many NP-hard problems efficiently, so I wonder if it will be a more practical alternative to quantum computing for things like molecular simulation.
Well, from a philosophical point of view you might say that it's impossible. The idea of laws that can apply to e.g. mathematics might not apply to organisms because organisms are described in terms of teleology. The heart doesn't just happen to circulate blood, it beats in order to circulate blood to oxygenate the body and so on.
And just because, for example, organisms that live in cold climates generally have thick hair, having thick hair doesn't imply cold climates. Some mammals fill niches filled by birds in other ecosystems, or by fish in others. Likewise, in New Zealand birds fill niches that in other ecosystems are filled by mammals.
I'm not sure biology can be a purely inductive science.
Simulating physics of muscles/skeleton structures has been done many times over, and I feel like the focus on eyes and vision is what makes this one interesting. Anyway it's quite a fun category of youtube video: https://www.google.com/search?q=simulating+evolution+muscles They tend to be two-dimensional or have other big simplifications. I guess trying to evolve a "real" organism explodes in complexity very quickly even if you ignore the "brain".
So I guess it will be very hard, in a pure Darwinian meaning, to evolve an organism from scratch. But relax that meaning a little by fixing some things and letting only some things evolve, and I suspect it's already possible.
As far as any of Greg Egan's stuff can be called accessible... I guess in a way you could call a thrilling story with the theme "What if human observation DIDN'T cause quantum wave function collapse" (Greg Egan: Quarantine) the same as making quantum mechanics "accessible" :D
Fun fact about eye evolution, the dark spot you see on a dragonfly’s eye is not a pupil following you, or even a change in the eye itself. You’re actually just looking directly down into the columns of the dragonfly’s eye. They capture light so efficiently that they appear to be black. They’re close to the absolute physical limit for efficiency.
The interesting bit (to me) isn't so much that they're black, but that you can see the compound eye at work. Each lens has a very narrow field of view and only appears black when the observer is within it, while a human pupil appears black from pretty much any angle because it's always using the one lens and aperture.
Human eyes are a lot bigger. It's much easier to have a shadow in a thing that is a few centimeters big, than it is in something half a millimeter deep.
Would be interesting to know if dragonflies get red eyes when you take a picture of them with flash. Human eyes reflect some of the red light from the white flash, making it look like the pupil is red instead of black.
Went camping recently, and used one of those head mounted flashlights.
Walking around there were glowing spots in the grasses. Thought at first it was some sort of firefly.
Upon inspection, it was the light reflecting off the eyes of spiders. Dozens and dozens of spiders everywhere in the grasses. Never could get a picture.
So light does reflect from compound eyes, but in ‘silver’.
I guess you're being downvoted because people think this is obvious, but dragonflies won't get red eyes because they don't have red blood to reflect the light.
Red eyes are also caused by a particularity of vertebrate eyes, which have their blood supply in front of the retina.
This is also the reason why you can sometimes see moving dots when looking at a bright, blue-coloured thing like the sky: your eye sees the whole capillaries filled with red blood cells, but the brain processes them out because they're always there. The large white blood cells then appear as "less red" dots in the processed-out streams of red, and the brain interprets them as bright dots.
Insects have none of this, they don't have blood or blood vessels but a transparent haemolymph that doesn't really circulate through a complete circulatory system like us.
So there really is nothing in front of the insects' light sensors to reflect light.
As another commenter mentioned, this is more like a physical limit on the efficiency of light capture. However, compound eyes also follow a scaling law trading angular resolution against diffraction: smaller lenses mean more blur from diffraction, but they also allow more facets and hence finer angular sampling. Many insects with compound eyes follow this scaling law, having the same (provably optimal) ratio of eye radius to lens size despite having very different sizes of eyes. William Bialek is a biophysicist who frequently uses this as an example.
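The scaling law balances two error sources: the geometric sampling angle of a facet of diameter d on an eye of radius R goes as d/R and shrinks with smaller facets, while the diffraction blur goes as lambda/d and grows. Setting the two equal gives an optimal facet diameter of roughly sqrt(lambda * R), so facet size should scale with the square root of eye radius. A quick check of the numbers (the wavelength and eye size here are illustrative):

```python
import math

def optimal_facet_diameter(eye_radius_m, wavelength_m=500e-9):
    """Facet size at which geometric sampling (d/R) equals diffraction blur (lambda/d)."""
    return math.sqrt(wavelength_m * eye_radius_m)

# Eye radius ~1 mm (a small insect), green light at 500 nm:
d = optimal_facet_diameter(1e-3)
print(f"{d * 1e6:.0f} um")  # 22 um, roughly the facet size seen in many insects
```

The square-root dependence also shows why camera eyes win at large sizes: doubling a compound eye's radius only improves its resolution by a factor of sqrt(2), whereas a single-lens eye improves linearly with aperture.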
Hmmm... From my experience writing Artificial Life simulators in the 90s, there is so much that is dependent on the parameters that the simulation coders write into the models. Yes, if you build a Genetic Algorithm where you start with a photo-sensitive cell, and then you reward agents that are able to navigate a maze, you're going to end up "evolving" agents that use a bunch of photo-sensitive cells like an eye. Has that really told you something about evolution, about life, or about the simulation that you set up?
I do think it's the third, _but_ I do also think it's different than the kitchen-sink organism simulations that really were just watching a lot of shallowly-programmed behaviors interact with each other. I think of it more of a model than a simiulation - a model that tries to explain a specific real-life phenomenon in as focused and parsimonious way as possible.
Here, the takeaway is that the emergence of two different types of eye – compound and camera-like eyes – can be modelled by a set of 3 specific tasks, in combination with a minimal set of anatomical knobs and switches. Then it might actually be _informative_ to compare and contrast the clear evidence from the model, and see how these explanations compare to the less conclusive ones we can draw from the methods of evo-bio.
(A good analogy would be to look at how gates, latches, and clocks can alone account for the "emergence" of modern superscalar microarchitectures, without having to resort to modelling the analog madness of pushing high frequencies through physical circuits, for example.)
About the latter, obviously. You can't just test it as in physics, so that's all you can have. The proper question (I guess) is, is that knowledge completely useless or may it help with something down the line.
I must say that I was expecting that a single light receptor would also have been evolved from something else so that the title would have earned the moniker of 'from scratch'.
We discuss it at the end of this blog (https://eyes.mit.edu/what-if/), but the main future goal would be to discover new types of eyes or vision systems. The results we show on the website generally agree with the conclusions from biologists, but what if we put the animal in an environment not actually feasible in the real world? Like what if we put an animal on a mars-like world, what kind of eyes would it evolve? And if we constrain that animal to only evolve eyes that we can manufacture (like a camera), would it discover a new type of camera or algorithm that's better than what humans have engineered? So here we show an attempt to recreate biological vision, but we're interested in applying it to artificial vision (i.e., computer vision, robotics, etc.).
Maybe I just can't see the big picture (pun++) but why limit the simulations to the eye? Many other senses are simultaneously working for survival. I'm not aware of anything that just uses the eye for survival.
https://en.wikipedia.org/wiki/Sense
That's very true and is also an area of interest. The number of possible animals that can be evolved increases substantially with more parameters, so adding more sensors/types of sensors makes the problem even more difficult. Also our animals are simple balls, where most intelligent animals have arms, legs, so on. So modeling more would definitely be interesting, too!
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
In order for this to work in real life, you'd have to prove a lot of other invariants:
- The mechanism to interpret the light data signal has to be in-step with the evolution of the eye. Getting light data without a brain evolving at the same time to interpret it is evolutionary recessive, i.e. a useless function. I.e. a real evolution would be more like "cat /dev/urandom > output.html", not a controlled ecosystem with a clear penalty-reward system.
- In nature, there is no 1:1 "reward / selection function" like in this simulation. In the computer, this "motivation factor" is externally given, so that the next generation is rewarded and selected out, in reality, there is no rule as to what is and isn't "better" or "fitter" or "more attractive to the other gender" (not like CS nerds would know). Sure, an organism can consume food, but beyond a certain point that wouldn't make the organism just "fat", not stronger. So there also need to be environmental mutations happening at the same time, that reinforce "more food = better evolved".
- There has to be a way for the animal to be so dominant, that the connection between light data and food can be genetically passed on and will not be associated with bad artifacts (see ChatGPT hallucinations for examples of "accidental bad artifacts in evolution" - and that "evolution" has millions of man-hours, money and R&D behind it).
- By the rule of "survival of the fittest", the next generation mutation has to be (in one single step) such a significant improvement over the last one that it won't be selected out again by recessive selection or dilution inside of the gene pool.
- The gene has to be active within 150 subsequent generations, without fail, cancer, or recession, and provide a dominant advantage 150 times, just to get a basic "eye" for 2D navigation with 10 light sensors. The minimum snail eye (pre-Cambrian) has 14,000 cells [1] (and a snail cannot see color).
- The real world is a 3D environment, which adds a monumental amount of complexity. Add to it the complexity of depth, color, shape, ...
- The mutation(s) have to happen either "at once" or be widespread (otherwise it's going to be like an Albino animal, i.e. some rare neutral mutation).
- All of this has to be done in an environment hostile to life in general (i.e. the edge of underwater volcanoes, some primordial soup burning at several hundred degrees), all elements have to be at the right place, at the same time, etc. And be created out of nothing, of course.
While I do agree that it can be helpful for computer vision, computerized "evolution" is just adaptive statistical pattern matching; it's absolutely nothing like real biology. It would be more realistic to just run "cat /dev/random > kernel-gen-xxx.iso" and then boot it bare-metal, with no lab environment, no operating system, no programming language, no goal function, no selection / reward process, no debugging, etc.
Even Darwin had his problems with the eye. The reason I believe in God is not necessarily because I want to, but because evolution (not survival-of-the-fittest, but the "mutation creates information" aspect) requires far more faith and far more dogmas, which cannot be questioned for the sake of science. When I was in 8th grade biology, I took a stone from the schoolyard, put it on the teacher's desk and said "alright, so this is a human if we wait 4 billion years". The teacher ignored me, but never told me I was wrong.
It would be interesting to read if there are anything in particular that you disagree with? Which objections did Darwin handle and where?
Darwin's understanding of e.g. heredity via pangenesis turned out to be wrong, so it is not like just holding up a copy of 'On the Origin of Species' as the final judge of "origin reasoning" will take us very far.
He spends a long time dismantling objections I've heard creationists use to this day. I've often wondered if the creationist influencers who came up with these "objections" actually read On the Origin of Species.
The mechanism wasn't understood at the time yea. That's not really relevant, and not really covered in Origin of Species anyway.
This is interesting. If nothing else to just have the visualization of the idea of how eyes evolved from a single light receptor. I've often heard of that in biology but this really makes the idea more tangible.
Eyes are good at generating the signal or we wouldn't have much use for the vision they do provide us.
On a somewhat related note, due to a head injury I suffered now many years ago, I started developing small-ish blind spots "once in a while" that remain anywhere from a few minutes up to a few months.
The spots are very noticeable when they first appear, grabbing my attention all the time. The ones that persist longer tend to "disappear" as my brain filters out the broken (?) signal from where the blind spot appeared, and the spot mainly becomes noticeable again if it hides or interrupts a known pattern that I'm looking at.
It's as if the brain fills the spot with the average color around it (blue sky - blue spot, white wall - white spot etc), which works well for single color surfaces and such but not great for repeating and predictable patterns. And the hiding "lags" which means if I quickly shift between colors then the spot will momentarily be visible as the old color shows up on the new color.
So the brain does "imagine" things, but when it does, it isn't perfect.
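The fill-in described behaves a lot like interpolation from the surround. Here is a toy version of that idea (an assumption about the effect, not a claim about how the visual cortex actually implements it):

```python
import numpy as np

def fill_blind_spot(image, mask):
    """Replace masked pixels with the mean of the unmasked surround --
    works on uniform surfaces, fails on repeating patterns."""
    filled = image.copy()
    filled[mask] = image[~mask].mean()
    return filled

spot = np.zeros((5, 5), dtype=bool)
spot[2, 2] = True                      # the blind spot

sky = np.full((5, 5), 0.8)             # uniform "blue sky" patch
print(fill_blind_spot(sky, spot)[2, 2])     # matches the surround: spot invisible

stripes = np.tile([0.0, 1.0], (5, 3))[:, :5]  # repeating pattern
print(fill_blind_spot(stripes, spot)[2, 2])   # surround mean, not the stripe value: spot visible
```

On the uniform patch the filled value is indistinguishable from its surround, while on the striped patch the mean fill breaks the pattern, matching the observation that the spot reappears over predictable textures.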