The Power of Small Brain Networks (nautil.us)
63 points by dnetesn 17 days ago | 11 comments



One thing to mention: as far as I know, biological neurons are much more complex than artificial ones. They spike and carry far more internal state (I think).
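
To make "more internal state" concrete, here's a toy leaky integrate-and-fire neuron in Python (my own sketch with made-up parameters, not anyone's production SNN). The membrane potential v persists between time steps, which a plain ReLU unit has no analogue of:

    def lif_step(v, i_in, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        # Leak toward rest and integrate the input current (one Euler step).
        v = v + dt / tau * (v_rest - v + i_in)
        spiked = v >= v_thresh        # fire when the threshold is crossed
        if spiked:
            v = v_reset               # reset after the spike
        return v, spiked

    # A constant input current makes the neuron spike periodically.
    v, n_spikes = 0.0, 0
    for t in range(100):
        v, s = lif_step(v, i_in=1.5)
        n_spikes += s
    print(n_spikes, "spikes in 100 steps")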

I wonder if we might soon see more neuromorphic AI hardware (at competitive scale) as a way to escape the extreme energy requirements, by (hopefully) packing more capability into fewer artificial neurons and/or implementing each neuron more efficiently.

Every time I see another headline about plans for giant AI datacenters, I expect the next one to be about memristors or some new paradigm getting a huge investment to move out of the lab and into a competitive chip. I think every dollar that goes into a giant AI datacenter should be matched by a dollar spent on making new hardware paradigms for AI competitive.


> biological neurons are much more complex than artificial ones. They spike and have a lot more internal states (I think)

This is correct. Notably, the famous Minsky XOR result holds only because of an oversimplification in the perceptron model. By adding a notion of location and using it to modulate learning, Hopfield networks learn XOR just fine.
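
For reference, the classic demonstration that the XOR limit is specific to the single-layer model: a single hidden layer with hand-picked weights computes XOR exactly. (A numpy sketch of the textbook construction, not the location-modulated Hopfield scheme above.)

    import numpy as np

    step = lambda z: (z > 0).astype(int)

    def xor_net(x):
        # Hidden layer: the first unit computes OR, the second computes AND.
        h = step(x @ np.array([[1, 1], [1, 1]]) - np.array([0.5, 1.5]))
        # Output: OR and not AND, i.e. XOR.
        return step(h @ np.array([1, -1]) - 0.5)

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, xor_net(np.array(x)))    # prints 0, 1, 1, 0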


I also think neuromorphic computing is the future for energy-efficient AI. But sadly, development is getting sidelined by other popular AI research areas. It doesn't help that ANNs are much easier to train than spiking neural networks (SNNs). :/


I think neuromorphic might even be a distraction right now.

The hyperparameters for an SNN that performs a specific task are extremely elusive. If you don't even know what kind of neuron model or fan-out ratio might work, how the hell can you start burning these as constants into some hardware contraption?


Biological neurons are indeed extremely complex. Even simulating one at the molecular/sub-cellular level is a daunting task and the sole subject of major research projects.

[1] https://www.quantamagazine.org/how-computationally-complex-i...

[2] https://www.humanbrainproject.eu/en/science-development/focu...


I doubt it... The last, oh, ten years have left a graveyard of dead neuromorphic AI startups. Brain neurons are what they are because they operate under a particular set of capabilities and constraints, very different from what we can do with silicon.

It's also worth asking whether the datacenter is really all /that/ inefficient: the human time and effort required to produce (say) one text-to-image completion is enormous compared to a GPU's. The brain uses less power per unit time, but it seems to need a hell of a lot more time.
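
A crude back-of-envelope (every number here is a rough assumption: ~20 W for the brain is the commonly cited figure, the rest are guesses):

    brain_watts = 20        # commonly cited figure for the human brain
    human_seconds = 3600    # say an hour for a person to paint one image
    gpu_watts = 400         # one datacenter GPU under load, roughly
    gpu_seconds = 10        # a few seconds for a diffusion model to sample

    print("human:", brain_watts * human_seconds / 1e3, "kJ")  # 72.0 kJ
    print("gpu:  ", gpu_watts * gpu_seconds / 1e3, "kJ")      # 4.0 kJ

On those (very debatable) numbers, the GPU wins on energy per image, not just on time.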


I'm not sure that the analogy stretches so far.

What even is an artificial neuron in an Artificial Neural Network executed on "normal" (non-neuromorphic) hardware? It is a set of weights and an activation function.

And you evaluate all the neurons in a layer at once by multiplying their weight matrix by the vector of incoming activations, then applying the activation function to get the outgoing activations.

Viewing this from a hardware perspective, there are no individual neurons, just matrix multiplications followed by activation functions.
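
In code, a whole layer is just that (a minimal numpy sketch; the sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 784))   # weights: 128 "neurons", 784 inputs each
    b = rng.standard_normal(128)          # one bias per neuron
    x = rng.standard_normal(784)          # incoming activations

    relu = lambda z: np.maximum(z, 0.0)
    y = relu(W @ x + b)                   # all 128 neurons in one matrix multiply

No individual neuron exists anywhere in that computation; each one is just a row of W.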

I'm going out of my area of expertise here (I just started studying bioinformatics), but biological neurons can't simply hold an activation, because they communicate by depolarising their membrane. Being cells, they have to spike by their very nature.

This depolarisation costs a lot of energy, so they are incentivised to do more with fewer activations.

Computer hardware doesn't have a membrane and can simply hold activations: it doesn't need spiking, and those activations cost very little on their own.

So I'm not sure what we stand to gain from more complicated artificial neurons.

On the other hand, artificial neural networks do need a lot of memory bandwidth to load all those weights. So an approach that better integrates storage and execution might help, whether that's memristor tech or something else.


Cerebras uses SRAM integrated into a giant chip, I think. Inference is extremely fast -- they claim 70x faster than GPU clouds, over 2000 tokens per second of output from a 70B model. But it still uses a ton of energy as far as I know, and the chips are, I assume, expensive to produce.
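
A rough back-of-envelope on why on-chip SRAM matters at that speed (my own estimate, assuming 16-bit weights, batch size 1, and every weight read once per token):

    params = 70e9          # 70B-parameter model
    bytes_per_param = 2    # fp16/bf16 weights (assumption)
    tokens_per_sec = 2000  # the throughput figure quoted above

    weight_traffic = params * bytes_per_param * tokens_per_sec
    print(weight_traffic / 1e12, "TB/s")   # 280.0 TB/s

That's well beyond what HBM feeds a single GPU, which is roughly why keeping the weights in on-chip SRAM pays off.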

Memristors might work to get the next 10x or 100x in efficiency beyond where Cerebras is.

As for more complex neurons, I was thinking that if each unit were a similar order of magnitude in size but could somehow do more work, then that could be more efficient.


How do they track the brain activity of a fruit fly? That seems incredible to me.


>The scientists monitored the brains of fruit flies as they walked on tiny rotating foam balls in the dark, and recorded the activity of a network of cells responsible for keeping track of head direction.

Meanwhile, I chimp out when I have to plug 4 wires into a relay module.


This power is quite obvious when you consider that a lot of the answers you get from AI these days are from Reddit.



