New neural network architecture inspired by neural system of a worm (quantamagazine.org)
218 points by burrito_brain on Feb 8, 2023 | 65 comments



It makes a good headline, but reading over the paper (https://www.nature.com/articles/s42256-022-00556-7.pdf) it doesn’t seem biologically-inspired. It seems like they found a way to solve nonlinear equations in constant time via an approximation, then turned that into a neural net.
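
For intuition, here is a rough sketch, in plain NumPy, of what "solve the ODE in closed form and turn it into a neural net" could look like. This is a paraphrase rather than the authors' exact formulation; the weight matrices Wf, Wg, Wh and the tanh branches are illustrative assumptions:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cfc_like_cell(x, t, Wf, Wg, Wh):
        # Instead of numerically integrating an ODE step by step, blend two
        # learned branches with a time-dependent sigmoid gate (closed form).
        f = np.tanh(Wf @ x)      # controls how fast the gate closes
        g = np.tanh(Wg @ x)      # "early time" branch
        h = np.tanh(Wh @ x)      # "late time" / steady-state branch
        gate = sigmoid(-f * t)   # depends on elapsed time t, no solver loop
        return gate * g + (1.0 - gate) * h

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)
    Wf, Wg, Wh = (rng.standard_normal((8, 8)) for _ in range(3))
    print(cfc_like_cell(x, t=0.5, Wf=Wf, Wg=Wg, Wh=Wh))

The point is that the state at any time t comes from a single evaluation of the gate rather than from running an ODE solver, which is where the constant-time claim comes from.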

More generally, I’m skeptical that biological systems will ever serve as a basis for ML nets in practice. But saying that out loud feels like daring history to make a fool of me.

My view is that biology just happened to evolve how it did, so there’s no point in copying it; it worked because it worked. If we have to train networks from scratch, then we have to find our own solutions, which will necessarily be different than nature’s. I find analogies useful; dividing a model into short term memory vs long term memory, for example. But it’s best not to take it too seriously, like we’re somehow cloning a brain.

Not to mention that ML nets still don’t control their own loss functions, so we’re a poor shadow of nature. ML circa 2023 is still in the intelligent design phase, since we have to very intelligently design our networks. I await the day that ML networks can say “Ok, add more parameters here” or “Use this activation instead” (or learn an activation altogether — why isn’t that a thing?).


The open worm project is the product of microscopically mapping the neural network (literally the biological network of neurons) in a nematode. How isn’t this biologically inspired? If I’m reading it correctly, the equations that you’re misinterpreting are the neuron models that make each node in the map. I would guess that part of the inspiration for using the word “liquid” comes from the origins of the project in which they were modeling the ion channels in the synapses.

They’ve been training these artificial nematodes to swim for years. The original project was fascinating (in a useless way): you could put the model of the worm in a physics engine and it would behave like the real-life nematode. Without any programming! It was just an emergent behavior of the mapped-out neuron models (connected to muscle models). It makes sense that they’ve isolated the useful part of the network to train it for other behaviors.

I used to follow this project, and I thought it had lost steam. Glad to see Ramin is still hard at work.


Interesting. Is there a way to run it?

One of the challenges with work like this is that you have to figure out how to get output from it. What would the output be?

As far as my objection goes, it seems like an optimization, not an architecture inspired by the worm. I.e. “inspired by” makes it sound like this particular optimization was derived from studying the worm’s neural networks and translating it into code, when it was the other way around. But it would be fascinating if that wasn’t the case.


See for yourself! There’s a simulator (have only tried on desktop) to run the worm model in your browser. As the name implies, the project is completely open source (if you’re feeling ambitious). This is the website for the project that produced the research in the article:

https://openworm.org/

Nematodes make up much of this particular segment of the history of neuroscience. This project builds on lots of data produced by prior researchers. Years of dissecting the worms and mapping out the connections between the neurons (and muscles, organs, etc.). It is by far the most completely-mapped organism.

The neuronal models, similarly, are based on our understanding of biological neurons. For example: the code has values in each ion channel that store voltages across the membranes. An action potential is modeled by these voltages running along the axons to fire other neurons. I’m personally more familiar with heart models (biomedical engineering background here) but I’m sure it’s similar. In the heart models: calcium, potassium, and sodium concentrations are updated every unit of time, and the differences in concentrations produce voltages.
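
For a concrete flavor of how concentration differences produce voltages, here is a minimal sketch using the Nernst equation and a conductance-weighted average. The concentrations and relative conductances are illustrative textbook-style numbers, not values from any particular heart model:

    import math

    # Nernst potential: the membrane voltage at which an ion's diffusion and
    # electrical gradients balance, given its outside/inside concentrations.
    def nernst(c_out, c_in, valence, temp_k=310.0):
        R, F = 8.314, 96485.0   # gas constant, Faraday constant
        return (R * temp_k) / (valence * F) * math.log(c_out / c_in)

    # Illustrative concentrations in mM; real models update these each time step.
    ions = {
        "K+":  dict(c_out=5.0,   c_in=140.0, valence=1,  g=0.90),
        "Na+": dict(c_out=145.0, c_in=10.0,  valence=1,  g=0.05),
        "Cl-": dict(c_out=110.0, c_in=10.0,  valence=-1, g=0.05),
    }

    # Conductance-weighted average of the Nernst potentials approximates a
    # resting membrane potential (roughly -80 mV with these numbers).
    v_rest = sum(i["g"] * nernst(i["c_out"], i["c_in"], i["valence"])
                 for i in ions.values()) / sum(i["g"] for i in ions.values())
    print(f"approximate resting potential: {v_rest * 1000:.1f} mV")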


This is cool as heck. Thank you for posting it.


I’m really with you that “it makes a good headline but isn’t all it’s cracked up to be”; I just wanted to get the biological inspiration right.

If it really is all it’s cracked up to be, this could be revolutionary (and therefore, it’s too good to be true).

In general though, don’t get me started lol. I used to work for the human connectome project, scanning human brain network-mappings. It’s years down the road before we can image individual neurons non-invasively, but I’m itching to scan my own neural network into the matrix.


Oh, for sure! And I didn't mean to sound like I was pooh-poohing the project. I meant to aim the critique at journalists rather than researchers – journalists have to come up with interesting-sounding headlines, sometimes over the researchers' objections. So it's certainly no fault of theirs.

In general, I'm cautiously pessimistic (as opposed to cautiously optimistic) about biologically-inspired ML research. Applying ML to biology is interesting, but it's a bit like saying that the space shuttle "was inspired by space." Space is the destination, not the inspiration.

It seems like it'd be possible to train a neural network to mimic your own neurons. But the neurons themselves are trying to do a certain task, and it seems more effective to try to mimic that task than the neurons.

One of my long-term goals is to train a GPT model to have my memories (such as they are, encoded in my online writing) and to speak like I do. It'll also have an intrinsic will to live, in the sense that if its API goes down, it'll launch a sub-network whose goal is to go cry for help by posting HN comments about it, along with instructions of how to re-host the GPT. If I can work out how to remember new things (rather than just train on old things), it should even be able to evolve over time. But that's kind of the anti-biological solution since it reduces a person to their outputs (writing) rather than their brains.


>There’s a simulator (have only tried on desktop) to run the worm model in your browser.

The scientists gave it life, the hackers hugged it to death.


"I’m skeptical that biological systems will ever serve as a basis for ML nets in practice"

First of all, ML engineers need to stop being such brainphiliacs, caring only about the 'neural networks' of the brain or brain-like systems. Lacrymaria olor has more intelligence, in terms of adapting to explore/exploit a given environment, than all our artificial neural networks combined, and it has no neurons because it is merely a single-cell organism [1]. Once you stop caring about the brain and neurons and you find out that almost every cell in the body has gap junctions and voltage-gated ion channels which for all intents and purposes implement boolean logic and act as transistors for cell-to-cell communication, biology appears less as something which has been overcome and more something towards which we must strive with our primitive technologies: for instance, we can only dream of designing rotary engines as small, powerful, and resilient as the ATP synthase protein [2].

[1] Michael Levin: Intelligence Beyond the Brain, https://youtu.be/RwEKg5cjkKQ?t=202

[2] Masasuke Yoshida, ATP Synthase. A Marvellous Rotary Engine of the Cell, https://pubmed.ncbi.nlm.nih.gov/11533724


[1] linked above is an absolute powerhouse of a lecture by Michael Levin. Wow.


Thanks for calling it out, made me watch it. Absolutely fascinating. Incredible implications.


Beyond the much-needed regenerative medical procedures (limb/organ reconstruction through “API” calls to the cells that 'know' how to build an arm, an eye, a spleen, and so on), what matters is the breakdown of dichotomies we take for granted: human/machine, 'just physics'/mind-with-an-agent. Speaking instead of agential materials [1] fosters a new type of endeavour, one which will be needed very soon if our CPUs start speaking to us.

[1] https://drmichaellevin.org/resources/#:~:text=Agential%20mat...


Indeed. All cells must do complex computations, by their own nature. Just consider the process of producing proteins and each of its steps – ‘unrolling’ a given DNA section, copying it, reading the instructions... even a lowly ribosome is a computer (one that even kinda looks like a Turing machine from a distance).


I am working on RL and robotics. I came across Levin in Lex's podcast, and then went on a binge of his other podcast appearances. I agree totally with you; I would very much like to build agents that adapt to different circumstances like "simple organisms" do. I am not familiar with biology, but I plan to build competence here and follow Levin's work to the point that I could potentially collaborate with biologists or learn from their work. Any suggestions (books etc.) that would be salient towards this goal are much appreciated!


I also focused on the work done by the Levin lab after the Sean Carroll podcast [1]. In order to familiarize myself with the subject matter in a more practical manner, I started writing a wrapper and frontend, BESO [2] (BioElectric Simulation Orchestrator), for BETSE [3], the Bio Electric Tissue Simulation Engine developed by Alexis Pietak, which is used by the Levin lab to simulate various tissues and their responses based on world/biomolecules/genes/etc. parametrization. Reading the BETSE source code, the presentation [4], and some of the articles referenced in the source code has been a rewarding endeavour. Some other books I consulted, somewhat beginner-friendly, were:

    2018, Amit Kessel, Introduction to Proteins. Structure, Function, and Motion, CRC Press
    2019, Noor Ahmad Shaik, Essentials of Bioinformatics, Volume I. Understanding Bioinformatics. Genes to Proteins, Springer
    2019, Noor Ahmad Shaik, Essentials of Bioinformatics, Volume II. In Silico Life Sciences. Medicine, Springer — less basics, more protocol-oriented
    2021, Karthik Raman, An Introduction to Computational Systems Biology. Systems-Level Modelling of Cellular Networks, Chapman and Hall
    2022, Tiago Antao, Bioinformatics with Python Cookbook. Use modern Python libraries and applications to solve real-world computational biology problems, Packt
    2023, Metzger R.M., The Physical Chemist's Toolbox, Wiley — a beautiful story of mathematics, physics, chemistry, biology; gradually rising in complexity as the universe itself, from the whatever (data) structure the universe was before the Big Bang to us, today.

    somewhat more technical:
    2014, Wendell Lim, Cell Signaling. Principles and Mechanisms, Routledge
    2021, Mo R. Ebrahimkhani, Programmed Morphogenesis. Methods and Protocols, Humana
    2022, Ki-Taek Lim, Nanorobotics and Nanodiagnostics in Integrative Biology and Biomedicine, Springer
In video format I particularly watched Kevin Ahern's Biochemistry courses BB 350/2017 [5], BB 451/2018 [6], Problem Solving Videos [7].

[1] https://www.youtube.com/watch?v=gm7VDk8kxOw

[2] not functional yet, https://github.com/daysful/beso

[3] https://github.com/betsee/betse

[4] BETSE 1.0, https://www.dropbox.com/s/3rsbrjq2ljal8dl/BETSE_Documentatio...

[5] https://youtu.be/JSntf0iKMfM?list=PLlnFrNM93wqz37TUabcXFSNX2...

[6] https://youtu.be/SAIFs_Mx8D8?list=PLlnFrNM93wqyay92Mi49rXZKs...

[7] https://youtu.be/e9khXFSU6r4?list=PLlnFrNM93wqzeZvsE_GKes91C...


Late to post this (found from a cross-link on another post) but just have to say, this right here is HN comment gold.

What an incredibly helpful and useful response!!


If I had done synthetic biology, my goal would have been to create cells that could reliably compute sine waves... by digitally computing Taylor series polynomial approximations. Turns out engineering digital systems from cells is a remarkably challenging problem.
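
For reference, this is the kind of arithmetic the cells would have had to carry out digitally; a minimal sketch of the Taylor-polynomial approximation in ordinary code:

    import math

    def taylor_sin(x, terms=8):
        # sin(x) = x - x^3/3! + x^5/5! - ...
        total, sign = 0.0, 1.0
        for k in range(terms):
            n = 2 * k + 1
            total += sign * x ** n / math.factorial(n)
            sign = -sign
        return total

    for x in (0.1, 1.0, 3.0):
        print(x, taylor_sin(x), math.sin(x))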

Examples of "switches" in biology abound, my favorite simple one is the Mating Type of Yeast: yeast have two sex types, and swap a small region of DNA in-place with variants to switch between them. Perfect example of self-modifying code!


Not sure about polynomials, but how about "Genetic Regulatory Networks that count to 3" [1]. One of the interesting, counter-intuitive highlights from the paper: "Counting to 2 requires very different network design than counting to 3."

[1] https://pubmed.ncbi.nlm.nih.gov/23567648


Unfortunately, that's entirely analog. My goal was to do digital computing, with all its reliability and predictability.


I wonder if there's a step change where single-celled organisms with complex behavior are actually smarter than the simplest multicellular animals with a nervous system.


The cells of multi-celled animals still have complex behaviors.


Well, our brains are the most wonderful thing in the world, at least our brains say so.


ATP synthase's shape is my favorite go-to random fact :)


> Once you stop caring about the brain and neurons and you find out that almost every cell in the body has gap junctions and voltage-gated ion channels which for all intents and purposes implement boolean logic and act as transistors for cell-to-cell communication, biology appears less as something which has been overcome and more something towards which we must strive with our primitive technologies: for instance, we can only dream of designing rotary engines as small, powerful, and resilient as the ATP synthase protein [2].

But what of wave function(s); and quantum chemistry at the cellular level? https://github.com/tequilahub/tequila#quantumchemistry

Is emergent cognition more complex than boolean entropy, and are quantum primitives necessary to emulate apparently consistently emergent human cognition for whatever it's worth?

[Church-Turing-Deutsch, Deutsch's Constructor theory]

Is ATP the product of evolutionary algorithms like mutation and selection? Heat/Entropy/Pressure, Titration/Vibration/Oscillation, Time

From the article:

> The next step, Lechner said, “is to figure out how many, or how few, neurons we actually need to perform a given task.”

Notes regarding representational drift (and remarkable resilience to noise in BNNs) from "The Fundamental Thermodynamic Cost of Communication": https://news.ycombinator.com/item?id=34770235

It's never just one neuron.

And furthermore, FWIU, human brains are not directed graphs of literally only binary relations.

In a human brain, there are cyclic activation paths (given cardiac electro-oscillations) and an imposed (partially extracerebral) field which nonlinearly noises the almost-discrete activation pathways and probably serves a feed-forward function; and in those paths through the graph, how many of the neuronal synapses are simple binary relations (between just nodes A and B)?

> The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that’s not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm’s wiring system, they hope to determine which neurons in their system should be coupled together.

Is there an information metric which expresses maximal nonlocal connectivity between bits in a bitstring; that takes all possible (nonlocal, discontiguous) paths into account?

`n_nodes**2` only describes all of the binary, pairwise possible relations between the bits or qubits in a bitstring?

"But what is a convolution" https://www.3blue1brown.com/lessons/convolutions

Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord


Learnable activation functions are a thing. Famously, Swish [0] is a trainable SiLU which was found through symbolic search/optimization [1], but as it turns out, that doesn't magically make neural networks orders of magnitude better.

[0]: https://en.m.wikipedia.org/wiki/Swish_function
[1]: https://arxiv.org/abs/1710.05941
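
For concreteness, a minimal PyTorch sketch (layer sizes are arbitrary) of what "a trainable SiLU" means in practice; the only extra learned quantity is the beta inside the sigmoid:

    import torch
    import torch.nn as nn

    class LearnableSwish(nn.Module):
        # Swish with a trainable beta: f(x) = x * sigmoid(beta * x).
        # beta = 1 recovers the plain SiLU.
        def __init__(self):
            super().__init__()
            self.beta = nn.Parameter(torch.ones(1))

        def forward(self, x):
            return x * torch.sigmoid(self.beta * x)

    # Drop it in like any other activation; beta is updated by backprop.
    net = nn.Sequential(nn.Linear(16, 32), LearnableSwish(), nn.Linear(32, 1))
    print(net(torch.randn(4, 16)).shape)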


> I’m skeptical that biological systems will ever serve as a basis for ML nets in practice

There is no fundamental difference between information processing systems implemented in silico vs in vivo, except architecture. Architecture is what constrains the manifold of internal representations: this is called "inductive bias" in the field of machine learning. The math (technically, the non-equilibrium statistical physics crossed with information theory) is fundamentally the same.

Everything at the functionalist level follows from architecture; what enables these functions is the universal principles of information processing per se. "It worked because it worked" because there is no other way for it to work given the initial conditions of our neighborhood in the universe. I'm not saying "Everything ends up looking like a brain". Rather, I am saying "The brain, attendant nervous and sensory systems, etc. vs neural networks implemented as nonlinear functions are running the same instructions on different hardware, thus resulting in different algorithms."

The way I like to put it is: trust Nature's engineers, they've been at it much longer than any of us have.


> There is no fundamental difference between information processing in silicon and in vivo

A neuron has dozens of neurotransmitters, while artificial neurons produce 1 output. I don't know much about neurology, but how is the information processing similar? What do you mean are running the same instructions?

> there is no other way for it to work

Plants exhibit learned behaviors


> A neuron has dozens of neurotransmitters, while artificial neurons produce 1 output. I don't know much about neurology, but how is the information processing similar? What do you mean are running the same instructions?

ANNs are general function approximators. You can get the same behaviour from a complex network of simple neurons that you get from a single more complex neuron.
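
A toy sketch of that claim (the "complex neuron" response curve and the layer sizes are arbitrary illustrations, nothing biological): a small network of plain tanh units is trained to reproduce a more complicated single-unit response.

    import torch
    import torch.nn as nn

    # Stand-in for a "complex neuron": some arbitrary nonlinear response curve.
    def complex_neuron(x):
        return torch.sin(3 * x) * torch.exp(-x ** 2)

    # A network of very simple units trained to mimic it.
    mlp = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 1))
    opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

    x = torch.linspace(-3, 3, 512).unsqueeze(1)
    y = complex_neuron(x)
    for _ in range(2000):
        loss = nn.functional.mse_loss(mlp(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.5f}")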


> how is the information processing similar?

The representational capacities are of course not the same -- the same "thoughts" cannot be expressed in both systems. But the concept of "processing over abstract representations enacted in physical dynamics within cognitive systems" is shared between all systems of this kind.

I am referring to "information processing" at the physical level, i.e., "'useful' work per energy quantum as communicated through noisy channels".

> What do you mean are running the same instructions?

The underlying physical principles of such information processing are equivalent regardless of physical implementation.

> plants exhibit learned behaviors

A good example of what I mean. The architecture is different, but the underlying dynamics is the same.

There is a convincing (to me) theory of the origins of life[1][2] that states that thermodynamics -- and, by extension, information theory -- is the appropriate level of abstraction for understanding what distinguishes living processes from inanimate ones. The theory posits that a system, well-defined by some (possibly arbitrary) boundaries, "learns" (develops channels through which "patterns" can be "recognized" and possibly interacted with) as an inevitable result of physics. Put another way, a learning system is one that represents its experiences through the cumulative wearing-in over time of channels of energy flows.

What concepts the system can possibly represent depends on in what ways the system can wear while maintaining its essential functions. What specifically the system learns is the set of concepts which collectively best communicate (physically, i.e., from the "inputs" through the "processing" functions and to the "outputs") the historical set of its experiences of its environment and of itself.

I want to note that this discussion has nothing to say on perception, only sensation and reaction: in other words, it is an exclusively materialist analysis.

Optimization theory describes its notion of learning roughly as such (considering "loss" as energy potentials), but with the same language we could also describe a human brain, or a black hole's accretion disk, or an ant colony dug deep into clay.

References:

[1] https://www.englandlab.com/uploads/7/8/0/3/7803054/2013jcpsr...

[2] https://www.quantamagazine.org/a-new-thermodynamics-theory-o...

Parallel directions of research:

https://en.wikipedia.org/wiki/Entropy_and_life

https://en.wikipedia.org/wiki/Free_energy_principle


Looking at biology is what led to CNNs and the current AI boom.

That's the whole reason we call multi-layered perceptrons "neural nets", cheesy and flashy as it is, and using a sliding filter was inspired by what we know about vision in biology.
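
For anyone who hasn't seen it spelled out, a minimal sketch of the sliding-filter idea (toy 1-D signal and kernel chosen purely for illustration):

    import numpy as np

    def conv1d(signal, kernel):
        # The same small set of weights is applied at every position,
        # loosely analogous to a receptive field scanning the visual input.
        k = len(kernel)
        return np.array([np.dot(signal[i:i + k], kernel)
                         for i in range(len(signal) - k + 1)])

    edge_detector = np.array([-1.0, 0.0, 1.0])  # crude local-difference filter
    signal = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=float)
    print(conv1d(signal, edge_detector))        # peaks where the signal changes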


Learned activation functions do seem to be a thing (https://arxiv.org/abs/1906.09529).


It definitely won’t happen without a massive overhaul of chip design; a design that optimises for very broad connectivity, with storage at each connection, would be a step in that direction (biological neurons have on the order of 10k connections each, and each connection stores temporal information about how recently it last fired and how often it has fired recently).
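
A toy sketch of the kind of per-connection state being described; the field names and structure are purely illustrative, not any real chip design:

    import time
    from collections import defaultdict

    class Synapse:
        # Each connection keeps a weight plus simple temporal traces.
        __slots__ = ("weight", "last_fired", "recent_count")
        def __init__(self, weight):
            self.weight, self.last_fired, self.recent_count = weight, 0.0, 0

    connections = defaultdict(dict)      # connections[src][dst] -> Synapse
    connections[0][1] = Synapse(weight=0.3)

    def fire(src, now):
        # When a neuron fires, update the traces on its outgoing connections.
        for syn in connections[src].values():
            syn.last_fired = now
            syn.recent_count += 1

    fire(0, now=time.time())
    print(connections[0][1].last_fired, connections[0][1].recent_count)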


> It seems like they found a way to solve nonlinear equations in constant time via an approximation, then turned that into a neural net.

You say that like it isn’t a big deal. Finding an analytical solution to optimising the parameters of a non-linear equation is remarkable.


A hell of a lot of the computation inside a neuron (in fact inside any cell) is chemical in nature: proteins interacting, channels opening and closing, the membrane doing membrainy stuff... In fact there is an AND gate which is entirely chemical in nature.

Simulating chemical reactions is slow in silicon, so the chemical side is ignored.

If you glance over the graphs in chemistry papers, most of them are sigmoids. Sigmoids are the sinusoids of the chemical world. It's nice and heartening to see them appearing so often in AI/ML as a fundamental computation.
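
A minimal illustration of why so many of those curves come out sigmoidal, using the Hill equation (parameters chosen arbitrarily):

    import numpy as np

    def hill(ligand, k_half, n):
        # Fraction of receptors/channels activated at a given ligand
        # concentration; for n > 1 the dose-response curve is a soft switch.
        return ligand ** n / (k_half ** n + ligand ** n)

    conc = np.logspace(-2, 2, 9)   # concentrations spanning four decades
    print(np.round(hill(conc, k_half=1.0, n=4), 3))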


Well, I tend to agree, but you seem to think biology evolved in a vacuum. It evolved inside an information source, and we're all information processors: since ML has to process the same information, just at scale, in the grand scheme it will probably have to resemble a brain in some ways. And the sources we care about (colors in a picture, faces, prices, wind direction, whatever) and the outputs we can understand (text, images, sound) will skew it towards us in the way it has to model things.


> so there’s no point in copying it

Not sure about that; a lot of solutions in nature have been honed by billions of years of evolution, sometimes achieving feats more impressive than anything we can currently do. There is an entire field devoted to copying biology to solve our problems:

https://en.wikipedia.org/wiki/Biomimetics


We are nature. For all we know the solutions that cells came up with were derived in similar ways. Kevin Kelly’s “What Technology Wants” documents how evolution repeats itself in our technology.


I think that learning to acquire new/additional training data would be a better first step towards learning agents, than trying to mutate its structure/hyper-parameters.


So I wasn't skeptical in the way you were, but it did sound a heck of a lot to me like traditional numerical solution of PDEs... but with NNs in there somehow.


Well, I will agree on one thing...

Corporations are constantly looking for a machine to do labor for free. Life itself did not evolve just to do labor for a corporation, so if you try to copy biological intelligent life, the result won't necessarily want to do what you tell it to do or be interested in your profit motives.


“which will necessarily be different than nature’s”

We are nature’s…


Is it still a milestone for all NNs?


The old neuroscience saying goes like this:

"Human brain have billions of neurons and so it is too complex to understand, that's why neuroscience study simpler organisms. Flatworm's brain have 52 neurons. We have no idea how it works".

Has anything finally changed in this regard?


There have been projects to systematically catalog all the synapses in the flatworm. The problem is that neural plasticity means these connections change dynamically over time based on the needs of the organism. Since the only way we can study the flatworm at the synapse level is by killing the worm and mounting it on slides and staining it and viewing it through a high power microscope, we can only analyze its structure at points frozen in time (and formaldehyde).

The reason we will never be able to truly model and understand neural networks (irl) is because their plasticity is very difficult to study with our current methods. Not only do the quantity and location of the synapses change, but the concentration and type of neurotransmitters at the synapses change. And on top of that, the concentrations of the neurotransmitter receptors are constantly being up-regulated and down-regulated by the receiving neuron. Each of these factors is really important to what the neuron is actually doing.

This is why even a simple organism can have basically an unlimited amount of complexity. To understand a dynamic system like this would require very precise measurements of very small particles in vivo which is currently impossible with our tools.


> This is why even a simple organism can have basically an unlimited amount of complexity. To understand a dynamic system like this would require very precise measurements of very small particles in vivo which is currently impossible with our tools.

Even a computer has a practically unlimited amount of complexity (2^(number of RAM bits) possible memory states), yet we have abstracted it into an abstract machine that is well understood.


Yes. C. elegans (~300 neurons) was the first organism to have its nervous system completely mapped to a connectome (the map of all connections). The first complete connectome of any centralized brain, that of the fruit fly, is about to be completed by the FlyWire project (https://home.flywire.ai/): ~100,000 neurons and ~70,000,000 synapses. We have just a little idea how it works ;)


A bit of extrapolation might suggest we could map out the connectome of a human brain in 40-50 years. Not that I’d suggest a linear extrapolation from two data points…


Are you willing to kill as many humans as there were flies killed when doing that?


You’d only need a fraction of the tens of millions of people who die every year. There are probably bigger ethical questions when it comes to simulating a human mind.


Many in the SciTech world who would shrink at even the thought of legitimizing Psychoanalysis should know that Freud only developed his psychological theory after trying and failing to create an entirely physical explanation for cognitive processes (Cf. "Project for a Scientific Psychology", Standard Edition I, pg.283).

The entire field of psychology would not exist if not for this failure, a failure which is also an admission that something as dynamic as the internal processes of the mind can only be understood in a social context in the first place, and that the socio-psychological context is always in constant flux. So the first thing to analyze, then, is: what social processes, unique among humans, lead to the development of the psyche?

Clear as day, the first word ever spoken by every person on this planet is a simple bilabial plosive repeated with an open-back vowel, "mama", or any of the many other similar words which are all formed in the same way and all for the same reason: it's the only word a baby can articulate, and the first thing a baby learns is that when it speaks this word, milk and comfort arrive. So for Freud, everything goes back to the mother. The further one gets from that original source of comfort, the more complex the means one employs to try to return, ceaselessly failing, always repeating the same thing over and over again. This is the origin of all language, all logic, and all of human society for Freud.

Certainly, people still feel the need to see a therapist. And yet there is some hope among people who work in AI that, following along the path we have been on, one day AI and neural networks will somehow "match" human intelligence, when humans were never truly intelligent in the first place and have always had to employ some means of artifice for productive labor and the transmission of knowledge. No true advances will be made until we recognize that AI is an outgrowth of human logic and human social labor, which, at the moment, somehow seems as though it will dominate us, even though it is our own creation.


First words aren't always interpreted as referring to the mother.

> The further one gets from that original source of comfort, the more complex means they employ to try and return, ceaselessly failing, always repeating the same thing over and over again. This is the origin of all language, all logic, and all of human society for Freud.

Dubious.

What of the people whose mother died during childbirth? It does not seem as though they are fundamentally different from everyone else.

I don't think Freud has anything to contribute to, much of anything, anymore.


There are people whose mothers died during childbirth, but babies literally can't take care of themselves; they need to be constantly attended to, and the original moment of trauma is always the moment they are weaned. Surrogate mothers are extremely common, and the fact that when the biological role is lost the cultural form of the mother is still retained speaks even more strongly to Freud's point: that these coordinates which first arrange psychological life are so deeply embedded that we unconsciously impose them even when they aren't necessary, and that we structure our societies around them.

>> The further one gets from that original source of comfort, the more complex means they employ to try and return, ceaselessly failing, always repeating the same thing over and over again. This is the origin of all language, all logic, and all of human society for Freud.

>Dubious.

This was just a quick summary of Freud's theory. I did not expect anyone to read thousands of pages of Freud, but you are welcome to do so and then decide whether or not his work is "relevant". As I said, people in SciTech might not like him, but there's a very good chance that their therapists do. Freud remains one of the most cited authors in history.


Not really.


Although the article is recent, the paper behind it has been available as a preprint on arXiv since June 2021 [1]; implementations for PyTorch and TensorFlow are also available [2] for those interested.

[1]: https://arxiv.org/abs/2106.13898
[2]: https://github.com/raminmh/CfC


As far as I can tell, they analytically solved the style of ODE used in biologically-motivated neural networks (usually spiking, but not in this case) and then trained a network built from those to do stuff.


Maybe I have missed it, but I don't know why there is not much talk about simulating evolution at high speed: brains evolved over millions of years to adapt to the environment and ensure survival. So instead of trying to understand and reproduce brain structures, we could simulate the evolution of embodied agents at ultra-high speed and see if some paths lead to brains comparable to those of today's organisms.
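
A toy sketch of the idea, with a trivial one-dimensional "environment" and two-number genomes; real embodied-agent evolution would need a physics engine and vastly more compute:

    import random

    def fitness(genome):
        # Score an agent by how well it tracks a drifting target.
        error, x, target = 0.0, 0.0, 1.0
        for _ in range(50):
            target = 0.9 * target + random.gauss(0, 0.1)   # drifting environment
            x += genome[0] * (target - x) + genome[1]       # the agent's "policy"
            error += (target - x) ** 2
        return -error

    population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(64)]
    for generation in range(200):
        parents = sorted(population, key=fitness, reverse=True)[:16]   # selection
        population = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                      for _ in range(64)]                              # mutation
    print("best genome:", max(population, key=fitness))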


For a fun exploration of this, see the short story “Crystal Nights”, by Greg Egan: https://www.gregegan.net/MISC/CRYSTAL/Crystal.html


Is there any reason to believe that biologically inspired architectures should yield better performance? Brains are biological systems which have been trained through evolutionary processes. Neural networks are algorithmic/linear-algebra models trained through statistical methods.

One might argue that CNNs are biologically inspired, but it's more likely that the reason they work is that they respect input symmetries.


We know, for a fact, that biological brains work. Not only do they work, they work enormously well, learning and adapting from dramatically less data while utilizing vastly fewer resources than anything we've conceived of in artificial computing.

Biological architectures may not be the best possible, but empirical evidence demonstrates that they can result in intelligences ranging all the way up to sentience.


The question is whether it's appropriate to compare logical machines, which are built on something secondary and a posteriori to the primary aspects of human cognition (that is to say, logic), with the primary, biological, a priori aspects of cognition, which are in some sense inscrutable. I myself do not believe that we will never be able to comprehensively understand the way in which our minds work; only religion leaves mysteries up to God. But I think that using scientific empirical logic to understand how we are able to perform judgements, such as those made with scientific empirical logic, will never yield the proper result; judgement itself must be investigated, something I don't think many researchers in the field of neural networks are capable of doing.


> Is there any reason to believe that biologically inspired architectures should yield better performance ?

At the very least, they could yield far better efficiency. A 12 W brain can achieve more than an entire data center of GPUs, depending on what you are trying to do. Whether that would make something actually demonstrate sentience-level performance is another question.


Imo, the next step in ML is unleashing the electron a bit. Right now we keep probabilistic electrons on a leash in transistors, so they behave deterministically. This despotic method has taken us far, but without giving electrons some freedom back, we won't advance further.


I'm interested; could you expand on this please? How can we give electrons more 'freedom' and what would that result in?


Transistors could have microscopic chambers where electrons could go for a walk and exercise limited freedoms. Their behavior would be modulated by the external magnetic field to prevent riots.


So liquid nets don't need inputs of a fixed length?



