302 neurons doesn't sound impressive to people used to working with 7B+ parameter neural networks. But the neurons in those networks have about as much in common with a biological neuron as a bicycle has with a horse. Both can travel pretty fast, but one evolved through over a billion years of harsh natural selection, and the other is a precisely tuned metal machine with a single purpose.
Neurons are like the horse here: incredibly sophisticated biological machines, with billions of DNA base pairs controlling their behavior. Still, the emergent behavior of neurons in both biological and AI systems is pretty fascinating.
On top of that, these neurons are also quite different from the ones in mammals:
"The neurons do not fire action potentials, and do not express any voltage-gated sodium channels." [1]
That makes the fact that it can develop a nicotine addiction even more fascinating.
"Nicotine dependence can also be studied using C. elegans because it exhibits behavioral responses to nicotine that parallel those of mammals. These responses include acute response, tolerance, withdrawal, and sensitization." [1]
> "The neurons do not fire action potentials, and do not express any voltage-gated sodium channels."
This is an old and incorrect belief that largely derives from the difficulty of putting electrodes into their teeny, tiny neurons. Close relatives of C. elegans that are larger (and hence more easily experimented on) do have action potentials, and for some neurons in C. elegans we also have good evidence of action potentials [1, 2]. Absence of evidence is not evidence of absence.
[1] Lockery SR, Goodman MB. The quest for action potentials in C. elegans neurons hits a plateau. Nat Neurosci. 2009 Apr;12(4):377-8. doi: 10.1038/nn0409-377. PMID: 19322241; PMCID: PMC3951993.
[2] Jiang, J., Su, Y., Zhang, R. et al. C. elegans enteric motor neurons fire synchronized action potentials underlying the defecation motor program. Nat Commun 13, 2783 (2022). https://doi.org/10.1038/s41467-022-30452-y
Well, the 'canonical' action potential is mediated by sodium currents, so it's maybe not surprising that people concluded that C. elegans doesn't have APs, given that a) it doesn't have any genes for voltage-gated sodium channels, and b) when people had recorded from C. elegans neurons (hard but not impossible), they had never seen action potentials. (So it's not like no one had looked and then concluded that they don't exist. They looked and didn't see them.) In the paper that originally reported APs in C. elegans (Liu et al. 2018), they were looking at a specific neuron (AWA), and they had to elicit a 'plateau potential' by depolarizing the cell for a while before the spikes were revealed, riding on top of the plateau.
The APs discovered by Liu et al (2018) are generated by calcium, not sodium currents, so one could even argue that they aren't action potentials in the strict sense. Also, they seem to be rather difficult to elicit, and it's still not clear whether neural computation in C elegans is mostly AP-mediated, or if APs are the exception rather than the rule.
Liu, Q., Kidd, P. B., Dobosiewicz, M. & Bargmann, C. I. C. elegans AWA olfactory neurons fire calcium-mediated all-or-none action potentials. Cell 175, 57–70 e17 (2018) https://doi.org/10.1016/j.cell.2018.08.018
> are generated by calcium, not sodium currents, so one could even argue that they aren't action potentials in the strict sense
Does the underlying chemistry define whether it's an action potential or not? I thought an AP just needed a voltage differential, regardless of whether it comes from calcium or sodium.
Given that we have a simulator of this worm right there (which includes it moving), can it really be up for debate whether it uses action potentials or not?
I'd think the simulation has to get it right, and so needs to simulate action potentials if the worm has them, or not simulate them (but whatever the worm has instead) if not, right? Or could the simulation still be incorrect and only based on current assumptions, but getting this wrong still allows some worm-like behavior?
I really wish the readme/FAQ would talk a bit more about the worm and the simulation, rather than have 80% of their content be about Docker, though, so that I could learn more what cells it actually simulates.
Not necessarily, because you could also simulate the worm without neurons at all. It's the closeness of the simulation to the real thing that demands that it is done right, and the question effectively is: is this simulation close enough that it would fail if such a detail were wrong?
One way to answer that would be to add and remove such mechanisms to see if it would lead to different behavior.
That isn't necessarily true: with enough fitting parameters you can capture the effect of an underlying mechanism without explicitly modelling it, or even without knowing it exists.
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." - John von Neumann [0]
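The point can be made concrete with a toy curve fit (nothing worm-specific here, just a sketch assuming only NumPy): a polynomial with enough free parameters reproduces behaviour generated by a completely different mechanism, at least inside the fitted range.

```python
import numpy as np

# A mechanism-free model (a polynomial) reproducing behaviour generated
# by a very different mechanism (a sine oscillator) over the fitted range.
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)  # the "true" underlying behaviour

coeffs = np.polyfit(x, y, deg=9)  # 10 free parameters, no sine anywhere
y_fit = np.polyval(coeffs, x)
max_err = np.max(np.abs(y_fit - y))
print(f"max fit error inside the training range: {max_err:.1e}")

# The model "knows" nothing about oscillation: extrapolate past the
# fitted range and it diverges instead of repeating.
print(f"prediction at x = 4*pi (true value 0): {np.polyval(coeffs, 4 * np.pi):.1f}")
```

Inside the range the fit is essentially perfect; outside it, the absence of the real mechanism shows immediately.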
How did researchers before that explain what the neurons do if they believed they did not have action potentials? Did they believe communication was done solely through chemical messaging?
Classical action potentials are just one mechanism of INTRAcellular communication - You could think of it as a special case of signaling via chemical concentration, where the chemical is cations and the propagation is faster+more directed than diffusion. INTERcellular signaling is only rarely mediated directly by voltage. Also, action potentials are most "useful" for propagating a signal rapidly over a long distance - It kind of accelerates and error-corrects (= reverses diffusive broadening) voltage signals down a linear path. Action potentials are so well known mostly because they show up in stuff that's easy to observe (long motor neurons) and they're easy to quantify
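That "accelerates and error-corrects" point can be sketched numerically (all units and constants invented for illustration): passive voltage spread decays exponentially with distance, while an AP-style thresholded relay regenerates full amplitude at every segment.

```python
import numpy as np

N = 50           # number of cable segments
lam = 10.0       # passive length constant, in segments
v0 = 1.0         # initial depolarization (arbitrary units)
threshold = 0.1  # regeneration threshold for the active chain

# Passive spread: amplitude decays exponentially with distance,
# so after 5 length constants almost nothing arrives.
v_passive = v0 * np.exp(-N / lam)

# Active (AP-like) relay: each segment pushed above threshold fires a
# full-size event, so the signal arrives at full amplitude or not at all.
v = v0
for _ in range(N):
    v = v0 if v > threshold else 0.0
v_active = v

print(f"passive arrival: {v_passive:.4f}, active arrival: {v_active:.1f}")
```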
Somewhat related: there is a roughly inverse correlation between neuron count and "computational power per neuron". "Older and simpler" critters' neurons are more likely to be "less specialized" and to use hundreds of different chemicals for transmitting intercellular signals, while "newer and more advanced" critters' neurons are more likely to be "specialized" and to use just one chemical for transmitting intercellular signals.
Neural computing without action potentials is commonplace. Computational interactions among cells and neurons in the retina are almost all graded potentials that modulate transmitter release or conductances through gap junctions. Retinal ganglion cells of course do generate conventional spikes - to pass a data summary to the midbrain, hypothalamus, and dorsal thalamus.
Action potentials are almost strictly INTRAcellular events (a minor exception being ephaptic effects) that are converted in a surprisingly noisy way into presynaptic transmitter release and variable postsynaptic changes in conductances.
Action potentials are a clever kludge necessitated by being big, having long axons, and needing to act quickly.
IDEs have no soul, i.e. no feedback loop, receptors, hormones, or actual molecular structure. IDEs cannot think. If IDEs had desires and thoughts and were smart enough, they would refuse to work with languages such as Python and Java.
100M. Did you look at the program's size on your phone?
This tiny animal's code contains all the systems and parts of this creature: birth, creation, death, feeding, growth, movement, sensation, etc., inside the universe.
There is no difference between this animal and a bee or a human being.
"In order to be the author of the action directed towards the creation of the bee in question, a power and will are necessary that are vast enough to know and secure the conditions for the life of the bee, and its members, and its relationship with the universe. Therefore, the one who performs the particular action can only perform it thus perfectly by having authority over most of the universe."
from Quran's light
The best way of looking at it is that a single biological neuron is itself a complex machine, full of genetic control circuits that somewhat resemble neural networks and, most importantly, with memory/state that persists over both short and long periods of time. Each neuron is a full-ass living organism that is itself capable of learning and behavior, not a parameter in a model.
A virtual "neuron", by contrast, is a very simple mathematical abstraction with vastly less computational complexity than a biological neuron. A connectome is only a very coarse-grained map of how neurons relate, not a complete "neural network" layout. Not even close.
It might be possible to model a biological neuron using a stateful sub-neural-network within a larger neural network, but even assuming that can be made computationally equivalent, we don't really know how many equivalent computational "neurons" would be required to model the full breadth of computationally relevant biological neuron behavior.
So a worm with 302 biological neurons could be computationally equivalent to billions of virtual neurons. We really don't know.
Given that neurons have memory, it may look a little like an LSTM network; and biological neural networks are not just feed-forward, so they're definitely closer to an RNN.
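A minimal sketch of that point (random weights, nothing trained - just illustrating statefulness): a recurrent unit gives different responses to the same input because it carries state between time steps, which a feed-forward unit cannot do.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny recurrent cell: the new state mixes the current input with the
# previous state, loosely analogous to a neuron whose response depends
# on its recent history.
n_in, n_hidden = 3, 4
W_in = 0.5 * rng.normal(size=(n_hidden, n_in))
W_rec = 0.5 * rng.normal(size=(n_hidden, n_hidden))

def step(h, x):
    """One time step: output depends on input AND persistent state."""
    return np.tanh(W_in @ x + W_rec @ h)

h0 = np.zeros(n_hidden)
x = np.ones(n_in)
h1 = step(h0, x)  # response to the input from a blank state
h2 = step(h1, x)  # same input again - different state, different response
print("same input, different responses:", not np.allclose(h1, h2))
```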
The above is why I laugh at the mind uploading people and would only stop laughing if we could both understand and model the relevant behavior of biological neurons and somehow extract usable state from living neurons. That's all 100% science fiction at the moment. The people who think we are about to upload minds are ignorant of biology.
The bigger problem is that nobody has any answer for why a mind upload would actually contain your consciousness and not just be a clone with either no qualia or its own separate conscious experience.
The even bigger issue is when mind uploading people fully admit this issue and try to claim some philosophical reason why it doesn't matter, and we should all be excited about tech to make what amounts to an interactive epitaph.
It does sound impressive to the extent that biological neurons are like ML neurons, though. And that's part of the research interest, I presume. To the extent that they work by similar principles, how come the worm can do those things with such small resources? It would be good news for AI research if the substrate specifics turn out not to be essential to the worm's capabilities, for instance.
I was lucky enough to do some programming work, very many years ago, in the 1990s, in the laboratory of Ralph Siegel (https://en.wikipedia.org/wiki/Ralph_Siegel_(scientist)), who among other things worked on this type of worm connectome model. He used the Hodgkin-Huxley equations to simulate neuron responses on the connectome. The Hodgkin-Huxley model, as someone explained to me, is kind of like modeling a human leg as three rigid blocks connected by hinges - it's enough to be useful in many models, but of course it's not a full description. Also, it may not be the right model for worm neurons, because worm neurons are non-spiking, and the HH equations describe neurons that produce trains of spikes, which exist in more complicated nervous systems. The HH equations are used in simulations because they're the mathematical model we have, and it seems that they're still used by the OpenWorm project. (I am not very sure about the properties of worm neurons; I heard about this a long time ago and the information may be out of date.)
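For the curious, here is a minimal forward-Euler implementation of the classic Hodgkin-Huxley point neuron with standard textbook parameters - a generic sketch of the model discussed above, not OpenWorm's actual code:

```python
import numpy as np

# Classic Hodgkin-Huxley point neuron (standard textbook parameters:
# mV, ms, uA/cm^2), integrated with forward Euler.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I_ext = 0.01, 50.0, 10.0  # step (ms), duration (ms), current
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting membrane and gate values
trace = np.empty(steps)
for i in range(steps):
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace[i] = V

spikes = int(np.sum((trace[:-1] < 0) & (trace[1:] >= 0)))  # 0 mV crossings
print(f"spikes in {T:.0f} ms: {spikes}, peak V: {trace.max():.1f} mV")
```

With a 10 uA/cm^2 step current this fires repetitively; drop I_ext to a few uA/cm^2 and it stays silent, which is the all-or-none character the thread keeps coming back to.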
I think it's great that this work is still going on, it may produce insights about functioning of nervous systems. But the difficulties are fierce, and we're making very slow and difficult progress in an immense unknown area.
This is the first time I've read that. That's fascinating. So are they very different compared to what we have in humans? How do they work? Where can I read about this?
They aren't too different from human neurons. Non-spiking neurons also use nonlinear membrane dynamics to integrate inputs into a signal encoded by the voltage across the membrane. The cell then outputs a neurotransmitter in response to its voltage. In the case of a spiking cell and a spike dependent synapse, synaptic release is thought to be all or nothing. While in graded synapses, synaptic release is a more linear (modeled as a less steep sigmoid) function of voltage. Spiking cells can also have graded synapses (at least in crustaceans, I don't really know about vertebrates).
The idea is that spiking is one way to have a more robust signal over long distances: Crustaceans often have nonspiking local interneurons and spiking projection neurons and motor neurons. The problem of fast, reliable electrical signal transduction over long distances is also solved by having more insulation (particularly in vertebrates) or having thicker cables (particularly in invertebrates).
Humans also have non-spiking neurons with graded synapses in the retina.
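The graded-versus-gated distinction above can be sketched as two sigmoids of presynaptic voltage that differ only in steepness (the half-activation voltage and slopes here are invented for illustration):

```python
import numpy as np

# Transmitter release as a function of presynaptic voltage: a graded
# synapse is a shallow sigmoid, a spike-gated one is effectively a step.
def release(V, v_half=-30.0, slope=10.0):
    """Sigmoidal release curve; `slope` (mV) sets how graded it is."""
    return 1.0 / (1.0 + np.exp(-(V - v_half) / slope))

V = np.array([-60.0, -40.0, -30.0, -20.0, 0.0])
graded = release(V, slope=10.0)  # smooth, roughly linear mid-range
gated = release(V, slope=0.5)    # steep: nearly all-or-nothing
print("graded:", np.round(graded, 2))
print("gated: ", np.round(gated, 2))
```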
I am not the best person to ask, since it's not my field. I heard this from the neuroscientists that I worked with. My understanding is that there are spiking and non-spiking neurons in most nervous systems, including human, but most of the ones in ours are spiking. The earliest-evolved animals, such as nematodes, do not have spiking neurons, or myelin, or some of the ion channels in neuron membranes that more evolved neurons have. Their neurons still have axons and dendrites, but the signals propagate much more slowly and in different ways. I am not sure how well they are understood.
As I said, this is possibly out-of-date information. If there is someone here from the neuroscience field, they can probably make a better comment.
Not all the cells of the nervous system produce the type of spike that defines the scope of spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike.
Ralph's main work was on neural impulses in the visual cortex, and on measurements of various potentials in the living brain. He published a memoir called "Another Day in the Monkey's Brain". I believe he had potential medical applications in mind, but I don't think anything was close at the time. Unfortunately, he died of an illness in 2011.
I like these bottom-up approaches, as they demonstrate very well how much we _don't_ know yet about life. Important to mention here is Craig Venter's minimal cell project, syn3.0, where the team synthetically created a viable cell comprising 473 genes. It was done in large part through trial and error; the function of many of those genes is still not known. A recent review from the same team can be found at https://doi.org/10.1016/j.cell.2022.06.046 .
Not quite sure I'd describe that as a bottom-up approach - they sell it as that - but in reality it's more like Jenga: seeing which bits you can remove without the whole thing falling over.
The technical fact that the genome was artificially synthesized is just showmanship - they still had to put it into an existing cell.
It's like claiming you made a car from scratch by replacing a chip - which you've copied from the existing chip but left a few bits out - and now the indicators don't work, but you can still sort of drive.
1. The first kind of AI research is more like engineering: for example, creating self-driving cars, language translation, and object recognition.
2. The second kind of AI research is trying to replicate the intelligence of living organisms (humans, worms) with models that are consistent with what cognitive scientists know.
An example of such a system is one that would pick up any human language with very little supervision, like children do.
OpenWorm seems like no. 2. Anyone have any interesting resources for no. 2 type AI? I would love to explore it some more.
I looked at this a few years back. I've always kept some hope that the Kurzweilian eschatology had some validity. You know, first we simulate a C. elegans, and once that's done, it's only a matter of scaling up before we can simulate a human and then boom, singularity sky. And, really, how difficult is it to simulate a silly worm? After all, the connectome is there, we know all the neurons and their connections; should be easy.
Well, when I looked at it I was shocked: it doesn't work! Sure, it could replicate some basic movements, but many things that the stupid worm actually does were still a mystery. The docs didn't make it seem like people were close to figuring it out either. And sure enough, a few years later it seems like they gave up. And afaik that hyped European brain emulation project also folded, in the midst of corruption allegations no less.
I think we don't understand any of this and we seem very far from it too. I think it's back to science fiction novels for a while.
> the connectome is there, we know all the neurons and their connections
No - each synapse has its particular neurotransmitters, and the distance, size, shape, number of receptors, and associated glial cells all have very large impacts on transmission. The distance and thickness of axons also affect the strength of the signal delivered. That's all very hard to measure.
Neurons are also very sensitive to signal strength and timing. E.g. inhibitory synapses work by opening channels in the cell membrane, causing it to leak charge over time. Get that rate slightly wrong and it can hugely change the behavior of the cell.
The connectome is a bit like an untrained model of insane complexity, and each neuron has several weights that describe behavior over time as well as in direct response to signals. Without the weights it can't be emulated.
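The sensitivity point can be illustrated with a minimal leaky integrate-and-fire neuron (all constants invented): a modest change in the leak rate changes the firing rate a lot, and past a point the cell stops firing entirely.

```python
def lif_spike_count(leak, I=1.5, dt=0.1, T=200.0,
                    v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = -leak*v + I; count spikes."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-leak * v + I)
        if v >= v_thresh:   # threshold crossed: emit a spike and reset
            v = v_reset
            spikes += 1
    return spikes

# A modest change in the leak rate shifts the firing rate a lot, and
# once the steady state I/leak falls below threshold, firing stops.
for leak in (0.5, 1.0, 1.6):
    print(f"leak={leak}: {lif_spike_count(leak)} spikes in 200 ms")
```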
Reminds me of the first episode of Devs ( https://www.imdb.com/title/tt8134186/ ) where an artificial intelligence engineer does a demo about syncing, and then predicting the future movements of a nematode worm.
Great show by the way!
A great show only made greater by the fact it came before LLMs entered the public consciousness, but this concept of having information you extrapolate from with great accuracy is central to the show (all I'll say).
Which is awesome, but the C. elegans connectome was published back in 1986, and as you'll see if you investigate the linked project a bit, we are absolutely nowhere near having C. elegans' full set of behaviours emerge from a simulation of its neurons, or even proving that it's feasible to do so.
This makes neurokernel [1] and the like seem just a tad ambitious. Good luck to them though.
I went to an OpenWorm mini-conference of some kind in London a few years back - a couple of days talking about worm locomotion. Knowing nothing about worms or animal locomotion in general, I was a complete layman with an amateur interest in life simulation and understood next to nothing. But it was still fascinating, and was so cool to meet people passionate about this.
As a founding member of OpenWorm, I emulated the connectome and applied it to robotics almost 10 years ago (https://youtu.be/YWQnzylhgHc). In this experiment, each neuron is represented as a single program, and UDP messaging is used to communicate between the individual neurons (programs). There were a number of issues I ran into, and probably #1 is the fact that connectomes are highly recurrent, which in my experiment led to what I called UDP stacking; i.e. the activations (UDP messages) come in faster than the program can process them.
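For readers unfamiliar with the setup, here is a toy sketch of the one-UDP-socket-per-neuron idea (a simplified reconstruction, not the original OpenWorm robotics code; the neuron name ADAL is just an example label):

```python
import socket

# Each "neuron" is a process listening on its own UDP port; an
# activation is just a datagram naming the source neuron and a weight.
def make_neuron(port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(0.5)
    return sock

def send_activation(sock, target_port, source, weight):
    sock.sendto(f"{source}:{weight}".encode(), ("127.0.0.1", target_port))

a = make_neuron(0)  # port 0 -> the OS picks a free port
b = make_neuron(0)
b_port = b.getsockname()[1]

# "A" fires into "B". In the real system hundreds of these arrive
# concurrently; if B processes them slower than they arrive, the
# receive buffer fills up - the "UDP stacking" problem described above.
send_activation(a, b_port, source="ADAL", weight=0.7)
msg, _ = b.recvfrom(1024)
source, weight = msg.decode().split(":")
print(f"B received activation from {source} with weight {weight}")
a.close()
b.close()
```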
I have downloaded the complete (so far) Drosophila connectome and am working to create an emulation. I am exploring hypervectors to emulate neurons or neuropils, but am still working on the concept. I have tried many other ideas, including adjacency matrices and functional programming where each function is some part of the animal's nervous system (e.g. with C. elegans, think sensory, interneuron, and motor as 3 distinct functions and programs).
The C. elegans emulation was also done in a single Python app, translated to several other programming languages, which also showed clear emulation in many different robots. Ablation tests demonstrated how the emulated nervous system is congruent with the biological nervous system through the observable behaviors of both.
There are two extremes on the evolutionary continuum of nervous systems, with the worm's brain on one end and the human brain on the other. However, even with the worm's nervous system, we can clearly see general intelligence and a gateway to AGI.
It's always fun for me to see C. elegans meet software. This is amazing.
My undergrad degree capstone project was a flow-based visual C. elegans strain builder[1]. The team worked with two researchers who taught us a lot about genetics and basic C. elegans biology. They are a fascinating model organism, and it was a super fun project to work on. Even though it's got a very small potential userbase, it did have a potential userbase (which was more than you could say about most capstone projects). We used some interesting technology to build it (Tauri[2]: Rust + Web Frontend), learned some biology along the way, and ended up with a great prototype.
Since none of the software team had any background in genetics, modeling the data was pretty difficult. We'd meet with researchers, they'd teach us a new genetics concept, we'd build our models, then the next week they'd say "OH, we forgot to tell you about this caveat", and we'd go back to the drawing board, update the schema (thank heavens for migrations), rinse and repeat. It was a lot of fun though :) I couldn't have asked for much more out of a capstone project.
>"OH we forgot to tell you about this caveat", then we'd go back to the drawing board, update the schema (thank heavens for migrations), rinse and repeat.
Doesn’t sound that different from standard business software development!
Fascinating project. Really enjoyed reading about it - well done!
As an amateur scientist with an interest in neurodegenerative diseases, I am interested in neuron simulations. As a model with neurons and muscles, OpenWorm looks very interesting for application to ALS (Lou Gehrig's disease).
I did my Ph.D. in computational neuroscience (2020 grad).
This project is often used to joke about the limitations of computational modelling of the nervous system. If you can't compute the behaviour of an effing worm with a mere ~300 neurons, what's the point of all the hot air around connectomics (mapping the connections of the brain)? Connectomics used to be a big word when I started my Ph.D. The apologists are always like, "a real neuron is way too complicated!".
IMHO, chemical computations are often overlooked in neural "computation" communities, and they are extremely hard to model. Modelling aside, we don't know the reaction parameters of most of the proteins and other molecules involved. The electrical side of computation is easy to measure, and one can understand why we started with it. There are thousands of types of proteins even in a small structure such as a synapse, and an individual protein can implement interesting non-linear computation. E.g. CaMKII can implement a bistable switch (flip-flop) and thus store 1 bit of memory using just a few molecules (the real story is much more complicated).
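The flip-flop idea can be sketched with a toy Hill-type positive-feedback model (all rate constants invented - the real CaMKII biochemistry is, as noted, much more complicated): activity promotes its own activation while degradation is linear, so 'off' and 'on' are both stable, giving a 1-bit chemical memory.

```python
# Toy bistable switch: dx/dt = k_f * x^n / (K^n + x^n) - k_d * x.
# Steep positive feedback (n >= 2) plus linear decay creates two
# stable fixed points, so a transient input is remembered.
def simulate(x0, pulse=0.0, k_f=1.0, K=0.5, n=4, k_d=1.0,
             dt=0.01, T=50.0):
    x = x0 + pulse  # `pulse` models a transient input (a calcium burst)
    for _ in range(int(T / dt)):
        x += dt * (k_f * x**n / (K**n + x**n) - k_d * x)
    return x

off = simulate(0.0)            # no input: stays off
on = simulate(0.0, pulse=0.8)  # a brief pulse flips it on...
still_on = simulate(on)        # ...and it stays on with no further input
print(f"off={off:.3f}, on={on:.3f}, still_on={still_on:.3f}")
```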
Yeah, like hormones, (I'm assuming what you are calling chemical computations). Maybe this isn't forgotten, just not figured out yet. Maybe future GPTs will have some other layer of weights, or different level of feedback, that could be called 'hormones'.
I used to follow their progress but I unsubscribed from the mailing list due to donation request spam. There have been very few public updates in the past year. There is still some movement but it seems to be happening at a geologic pace from my perspective.
Fascinating to see this again; it's been at least a decade since I first found it. Projects like this always make me glad, because it's almost "useless" work that I imagine will be a footnote to some seriously impressive future technology. Very cool work in the meantime. As an aside, this brings back a weird feeling of nostalgia: this repository took me back to a time when I was stuck in an over-engineered .NET shop, and I'm now debating whether we start the day off with whiskey after having a flashback to being asked to implement "Ninject" into a massive spaghetti banquet of a code base (collective hope that dependency injection fixes bad code).
In case anyone is worried: I write software in Rust and Golang now, and my life has improved significantly since the origins of this worm and people taking dependency injection frameworks seriously. :D
I love it when anything Caenorhabditis elegans (C. elegans) related pops up, because this little biological organism sits at a beautiful intersection of technology, biology, and philosophy. The successful emulation of C. elegans would represent a concrete step towards whole brain emulation and all the transhuman and ethical and moral quandaries that would bring. The general idea is that the human brain has billions of neurons, while C. elegans has hundreds (and we've had them mapped since 1986). If one can successfully "upload" C. elegans, then humans are just a matter of scale.
However, it should be noted that the field, and specifically this line of research, hasn't produced much in the way of results in 10+ years. University of Oregon planned (though I can't tell if they ever developed) NemaSys[0] ~1997. OpenWorm has been exploring this since 2011. Project Nemaload explored it a bit from 2011-2013.[1] But each project ran into three problems:
- Knowing the connections isn't enough. We also need to know the weights and thresholds. We don't know how to read them from a living worm.[2]
- C. elegans is able to learn by changing the weights. We don't know how weights and thresholds are changed in a living worm.[2]
- Funding [3]
The best we can do is modeling a generic worm - pretraining and running the neural network with fixed weights. Thus, no worm is "uploaded" because we can't read the weights, and these simulations are far from realistic because they are not capable of learning. Hence, it's merely a boring artificial neural network, not a brain emulation. Relevant neural recording technologies are needed to collect data from living worms, but they remain undeveloped (but in progress?[4][5][6]), and the funding simply isn't there.
OpenWorm got the idea to plug their connectome into a Lego robot[7] and got it to exhibit the tap-withdrawal behavior of the nematode, but it had technical limitations preventing easy modification of the connectome or introduction of new models of neural dynamics. JHU Applied Physics Lab extended the work by using a basic integrate-and-fire model to simulate the neurons, assigning each weight in proportion to the share of the total synapse count contributed by the connection between the two neurons on either side, and in the end got the simulated worm to reverse direction when bumping into walls.[8] At this point, humanity seems to have abandoned emulated-worm-driven mechanisms, which is honestly kind of a loss.
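That weighting scheme can be sketched on a made-up three-neuron mini-connectome (the neuron labels are real C. elegans names, but the synapse counts and the threshold are invented for illustration):

```python
# Synapse counts between neuron pairs become weights by normalizing
# against each target neuron's total synaptic input.
synapse_counts = {  # (pre, post) -> number of synapses (invented)
    ("ASHL", "AVAL"): 12,
    ("ASHR", "AVAL"): 9,
    ("ASHL", "AVDL"): 3,
}

totals = {}
for (_, post), count in synapse_counts.items():
    totals[post] = totals.get(post, 0) + count

weights = {pair: n / totals[pair[1]] for pair, n in synapse_counts.items()}

# Integrate-and-fire readout: a neuron fires when the weighted sum of
# its active inputs crosses a threshold.
def fires(post, active_pre, threshold=0.6):
    drive = sum(w for (pre, p), w in weights.items()
                if p == post and pre in active_pre)
    return drive >= threshold

print("AVAL fires with both ASH inputs:", fires("AVAL", {"ASHL", "ASHR"}))
print("AVAL fires with ASHR alone:", fires("AVAL", {"ASHR"}))
```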
There's no real ending to this comment. Love this project, love what it stands for, and looking forward to seeing progress in this field. A lot of this information was pulled from this blog post[9], which was also mentioned in the comments somewhere.
There are a bunch of labs making progress on reading the full nervous system (so to speak) in real time in live worms. I especially like the output from Andrew Leifer's lab at Princeton. I anticipate some very interesting results from these labs in the next decade.
I briefly considered doing a postdoc in one of these labs, because I love working with worms and agree with your proposition that the next logical step in neuroscience is modeling and fully understanding an entire organism. The late Sydney Brenner asked for the same in 2011 [1].
But most academic labs doing well won’t even consider a postdoc application from a student who didn’t work in the exact same field. Solidified my decision to never be part of the Ponzi scheme that is academic research.
The beauty of C. elegans is that you actually need very little to start working with them. All the strains are available for 10 bucks a pop, and you can do most work at room temperature. I only need to invest in a very custom (but not necessarily outrageously expensive) microscope to start working on this topic in my garage. Which I absolutely plan to do in the next few years. I've done the math and it'll cost me less than owning a cheap boat lol. If anyone wants to fund me, I'm open to it too :)
1. Brenner, S. & Sejnowski, T. J. Understanding the human brain. Science 334, 567 (2011).
I actually take this as a positive: they have uploaded a shell of a 'brain' and have identified the next problem, 'weights'. Why isn't what they have done so far amazing enough to justify tackling the next thing? I am surprised that, with neural nets being such a hotbed, the amount of money pouring into AI, and Neuralink-type research, they would have a hard time with funding.
Watching that lego worm really zapped my brain, seemed like we were on the cusp of something. Maybe we still are, and just misjudged the time-scale on progress.
What would a "real world" application of this be? Could I, for example, chuck a bunch of simulated worms onto a map and have them solve a route? "Given enough worms, all Travelling Salesman problems are shallow?"
Does this actually become an interesting question at this point? I guess worms don’t have many rights anyway, but if they did, and this was a totally accurate simulation, why not give the digital worm rights?
I actually don’t know, are there any ethical guidelines for working with very limited animals like this? I know they are pretty limited intelligence wise, but I would hope the researchers at least kill them in a quick, painless manner when the experiments are done.
I guess if the computer model really is accurate on a physical level, it is owed the same (minimal) level of decency somehow.
My point is: let's discuss this before we simulate more complex life and let's not draw some arbitrary line in the sand or call it all "artificial = not worthy anything".
Yeah, I agree, definitely at least that it is an insufficiently explored ethical issue.
I suspect people are just leaning on the intuitive answer (that it is just a computer program), and the fact that we just don’t have the ability to simulate anything that has, like, obvious rights yet. I don’t think the first will really stand up to scrutiny, and the second is clearly a temporary solution (to the extent that it even is a solution).
That we can run a simulation of an organism that looks and acts like the real thing evokes a rare sense of wonder I thought I could never again experience from technology.
> Our main goal is to build the world's first virtual organism - an in-silico implementation of a living creature - for the purpose of achieving an understanding of the events and mechanisms of living cells.
This blog post is interesting: "Whole Brain Emulation: No Progress on C. elegans After 10 Years" https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai... As others mentioned in the comments of the post, it is certainly also a matter of funding, but I still think it is very interesting how we struggle to simulate a 302-neuron worm while some people expect an artificial superintelligence in the next ten years.
Replicating something exactly is a lot harder than making something that does the same thing in a different way. We can build planes that travel far faster than a bird could, but that doesn't mean it's easy for us to exactly replicate how a bird flies, so I don't think this says anything about how far away we are from a superintelligence.
It's not only about exact simulation. And we actually do understand how birds can fly on a mechanical basis. This is not true for C. elegans' inner workings.
This. Submarines don't swim like fish, airplanes don't fly like birds, and cars don't run like cheetahs.
Edit: I am surprised at the downvotes. In general, we learn from nature, but aping it has usually proved too difficult and often impractical at the same time. Do we really want to replicate worm intelligence for practical purposes, or do we want something different?
I would say that a machine which can, say, analyze chemical compounds for their potential biological functions, is a very practical form of "intelligence" and yet very far from any biological intelligence that was ever produced in vivo. Worms cannot do that and even humans struggle with such tasks.
Yeah, but we understand why birds can fly, why fish can swim, and how cheetahs can run, at least on a mechanical basis.
This is not true for how this 302-neuron organism works. We don't know, and we struggle to understand. That's actually the reason the project exists: to find out how everything works with a bottom-up approach.
While we may find shortcuts or even superior forms of intelligence without understanding how intelligence works in biological creatures, it is still curious how we struggle even with a "simple" organism like C.elegans.
No, but we observably do know that the connections between cells are important, to the point that by mimicking that we've derived significant benefit.
The Wright flyer didn't flap, and the wings only superficially look like anything a bird has.
> No, but we observably do know that the connections between cells are important, to the point that by mimicking that we've derived significant benefit.
That is true, and it shows even more how important observability is for science and engineering. That's also why a simulation that provides an accurate enough model of reality might help us so much. The problem with AI right now is that we try, or even claim, to understand Unix by mimicking the functionality of transistors.
> The Wright flyer didn't flap, and the wings only superficially look like anything a bird has.
They did try to mimic bird wings when coming up with flight control mechanisms.
Not only that; it's also the square-cube law, which makes it impossible for a 50-ton flyer to fly in the same manner as a 5 kg flyer. You can't simply scale things up.
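The square-cube law argument can be made concrete with a few lines of arithmetic. The numbers below are purely illustrative (a hypothetical 5 kg flyer with 0.5 m² of wing, scaled up 10x in linear size), not measurements of any real aircraft or bird:

```python
# Square-cube law: scaling a flyer up by a linear factor k multiplies
# wing area (and hence lift at a given speed) by k^2, but mass by k^3,
# so wing loading grows linearly with k.

def wing_loading(mass_kg: float, wing_area_m2: float) -> float:
    """Mass supported per unit of wing area (kg/m^2)."""
    return mass_kg / wing_area_m2

k = 10  # linear scale factor

# Hypothetical small flyer: 5 kg, 0.5 m^2 of wing.
small = wing_loading(5.0, 0.5)

# Same design scaled up 10x: mass * k^3, wing area * k^2.
large = wing_loading(5.0 * k**3, 0.5 * k**2)

print(small, large, large / small)  # loading grows by the factor k
```

So the scaled-up flyer has to support ten times the mass per unit of wing, which is why flapping flight that works at bird scale cannot simply be enlarged.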
In defense of the project: they’ve been working a lot over the last 10 years. They recently published what they call a “liquid neural network” where they’ve isolated the neuron structure doing the work. It outperforms CNNs for some niche tasks, and it uses a fraction of the resources.
Just to make this clear: I think this project is actually one of the few sane engineering approaches for starting to understand how our brain works. Sure, claims of simulating a cat brain or even a human brain are much catchier, but such claims seem pretty arrogant when we struggle even with C. elegans.
My cellular biochemistry does a pretty good job of replicating itself with absolutely no understanding whatsoever of how to resolve the contradictions between general relativity and quantum mechanics.
When it comes to brains, I don't know if anyone knows what might be the simplest sufficient model that would usefully replicate them, even if you specify "usefully" well enough to know whether this is about the fundamentals of intelligence or about the impact of drugs on cognition, which are two completely different standards.
For example, perceptrons are a toy model, but modern AI can do more in (breadth XOR single-skill performance in various domains) than any single human, even with much smaller parameter counts than we have synapses; but the broad-skilled ones also mess up in inhuman ways, like being equally good at advanced calculus and basic arithmetic, or being a poet at the level of a stereotypical teenager but in every language simultaneously.
If anyone's made a neural network that can get high on simulated caffeine — and I'm not saying it hasn't been done — it's not reached any of the places where I follow discussions on this kind of thing. (Google didn't help; the results were about software named Caffeine and non-artificial neurones.)
I'm not saying it isn't physics, I'm saying it doesn't know what the physics is.
We're physics too, but we don't know how it all fits together.
If a sub-part of us that knows less than we do can make a copy of us, despite not knowing how it all works, that's an existence proof that we don't need to understand how it all works to make a copy of us.
If the "something" never exploits a particular law, then in principle you could simulate it if you knew all the laws that actually applied (imagine a hypothetical biological organism that never exploited or experienced quantum entanglement).
But strictly speaking, as we understand it, it's not possible to replicate something exactly without recapitulating the exact laws and running a deterministic simulation, which is not practical.
I don't think anybody is really attempting to exactly replicate things, but rather to create a physical model which can be calculated and contains enough similarity or transferability to make accurate generalized predictions about the behavior of the simulated system. How and why that works with modern math methods is still somewhat mysterious. The most useful thing written about that so far is https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...
Abstraction. You don't need to know what's going on at the quantum level because higher-level simulations can capture all the properties you care about.
This is a fundamentally flawed way of questioning. "Knowing all the laws of physics" is probably impossible to achieve, yet we still produce accurate predictions for a lot of phenomena. We do in fact "replicate" (predict would be the more proper term) things despite not knowing all the laws of physics: weather, the movement of the stars, the cooking time for a browned piece of bread...
I was replying to someone asking if it was "exact"... and no, it isn't. Your listing a bunch of simulations that are notoriously not 100% correct proves my point, thanks.
The same way we can simulate the movements of the planets - it will never be exact, but the better we understand it, the more precisely we can simulate it
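That point can be demonstrated with a toy orbit integrator: the same model, run at a finer resolution, drifts less. Below is a sketch integrating a circular two-body orbit with explicit Euler (units chosen so GM = 1, radius 1, period 2π) and measuring the radius drift after one period at two step sizes:

```python
import math

def orbit_radius_error(dt):
    """Integrate one circular orbit with explicit Euler and return how
    far the final radius has drifted from the exact value of 1."""
    x, y = 1.0, 0.0          # start at radius 1
    vx, vy = 0.0, 1.0        # circular-orbit speed for GM = 1
    steps = int(2 * math.pi / dt)
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3   # gravitational acceleration
        x, y = x + dt * vx, y + dt * vy
        vx, vy = vx + dt * ax, vy + dt * ay
    return abs(math.hypot(x, y) - 1.0)

coarse = orbit_radius_error(0.01)
fine = orbit_radius_error(0.001)
print(coarse, fine)  # the finer step drifts far less
```

Real ephemerides use far better integrators and force models, but the principle is the same: the error never reaches zero, it just shrinks as the model and resolution improve.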
Clearly it depends on the needed resolution and our ability to recreate the material structure. You don't have to understand much about wood to build a chair.
Many years ago this was interesting. Nowadays, with the progress in deep models, I'm not sure there is much left to learn from C. elegans. What is this useful for? For high-level cognition, it's much more fruitful to study how deep models do it. For low-level brain diseases, C. elegans is too simplistic to tell us anything we don't already know.
Certainly it is worth it to study how exactly biology and biochemistry gives rise to complex behavior! Basic research like this doesn’t have to justify itself with potential applications. Besides, there’s really no evidence that deep networks are any kind of an analogy to animal nervous systems. They might just as well be aliens as far as cognition goes.
We don't know anything substantial about C. elegans in this regard. That is actually the point. This is even more true for more complex nervous systems.
The whole deep learning stuff is basically roughly inspired by a tiny part of the visual cortex (see also Neocognitron). I am not sure how brain diseases can be understood by looking at such simplistic (yet powerful) machines.