God I wish people would stop using the term "artificial" when they mean "metaphorical". An artificial organ is artificial if you can put it in the place where the original thing used to be, and it works. That's to say, it's equivalent in function.
These 'synapses' aren't actually synapses and they don't emulate even a fraction of the processes you find in their biological namesakes.
> they don't emulate even a fraction of the processes you find in their biological namesakes
That is correct. But they are not supposed to: natural nervous systems are just an inspiration. That perspective is only valid for _nervous-system emulations_ - which is just a subset of what we do.
> using the term
'Artificial', in use, stands for "non-natural, non-spontaneous"; it means "done with an aim" (cf. Sanskrit arth, artha, arthin, arya; Greek aretē, àristos). Those items are. They are called, metaphorically - but evidently so - 'synapses', which means "joinings, interfastenings" (see in engineering 'haptic', "touching"), because they join the elements of a "neural network", where 'neuron' remains "a string" - a "connection significant of a relation".
So: at some point someone thought: "What if we obtain something through joining similar elements in a network... Yes, similarly to the other one". It fits, because the model is this model, a model of the natural thing. You have natural neural networks, their model ("just a network"), and such a model (of the natural thing) is used as a model (for designing further things).
I think we agree that it's made by humans. The discussion is about what is being made. Just because a tennis ball is round, we don't call it an artificial sun.
Not necessarily, though primarily. The core point is in "crafting with an aim".
It is interesting that the first recognized use of the term is for "artificial day", meaning "dawn to dusk", which is the period in the "natural day" (here the term already starts to suggest an opposition - "artificial vs natural") in which you can "purposefully craft" (the light allows it).
It's more like calling an articulated robot with a manipulator a 'robotic arm', despite being infinitely simpler than a human arm and with much more limited functionality.
I agree that the speed comparison seems like a very misguided metric though.
Well, it's a robot in the shape of an arm. A robotic dog doesn't have to look or behave much like a dog to earn its name.
You wouldn't call them artificial dogs or artificial arms (unless they were being used as a prosthetic, I guess, but then it's just a robotic prosthetic arm).
I had the same initial reaction, but after digging in a little I don’t think “artificial” is the worst description in the world here. I do think the innovation here is much more like an artificial neuron than an artificial synapse though. Fair warning, I’m no neuroscientist and I basically don’t know what neural networks are, but here’s what I’ve gathered.
neuron1 -> neuron2
Neuron1 receives an input signal across a synapse, “processes” that signal, and then either does or does not pass along an output signal to neuron2. I’m sure this is an incredibly deep field of research with a lot of nuance, but I think it remains a reasonable approximation to say that neurons “fire” or not in a binary manner. A lot of the magic takes place within the neuron itself, where unimaginably complex biochemistry dictates how likely a neuron is to fire in response to an input signal. As far as I understand it, this is analogous to the application of a weight to an input in a neural network.
A decent example along these lines is how opiates influence breathing. Neurons exist at a resting negative electrical state, which can be shifted to a sufficiently positive state in response to an input that the signal propagates down the neuron, resulting in the passing on of that signal to the next neuron. Opiates drive that resting negative electrical state to be even more negative, and so in response to a normal "we're running low on oxygen here!" input, a neuron will fail to become sufficiently positive to pass that signal along the chain. In NN parlance, its weight has been changed.
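A back-of-the-napkin sketch of that picture (Python; every number here is invented for illustration) - the "opiate" effect is just the resting term being pushed more negative:

```python
# Toy threshold neuron: it "fires" if the weighted drive plus a resting
# offset crosses zero. All constants are made up for illustration.

def fires(inputs, weights, resting_potential):
    """Return True if the toy neuron fires for this input."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return drive + resting_potential > 0

# Normal state: a strong "low oxygen" signal triggers the neuron.
normal = fires([1.0], [0.8], resting_potential=-0.5)   # 0.8 - 0.5 > 0

# "Opiate" state: the resting potential is more negative, so the very
# same input no longer clears the bar.
opiate = fires([1.0], [0.8], resting_potential=-1.0)   # 0.8 - 1.0 < 0
```

In this framing the drug changes the bias term rather than a connection weight, which is roughly the "its weight has been changed" point above.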
This piece describes a memristor that replicates this weighting of inputs to produce outputs, using a material that stores the weights and can be adjusted electrically rather than through biochemistry. There was actually a paper [0] (released just two days ago!) that uses memristors to meaningfully create an artificial neuron with biochemical synapses. Of course, there's a lot of extra machinery involved to actually be biologically useful, but nonetheless this tech can be used as a very simplified drop-in. And as you say, it's like step one in a 10-billion-step process, but I don't think it's totally dishonest to call it an "artificial" neuron, or at least a component of one.
Of course, bragging about how fast and small it is compared to a neuron and synapse is a bit like an elementary school teacher setting up a cool grow-lamp garden to teach kids about sunlight and photosynthesis and then bragging about how they produced an ultra-miniature sun that's so efficient it runs off an outlet :)
For one, some neurons, when activated, don't just send a signal to specific other neurons, but instead release a chemical in an area, that affects the activation chances of other nearby neurons. I believe there are also other modes of activation, and other consequences of neuron activation, that make the brain far more complex. It should be remembered that the brain can also activate other glands in the body, which in turn change how the brain works - e.g. when releasing adrenaline, testosterone, oxytocin etc.
For another, as far as we know right now, each neuron itself is deciding whether to fire or not based on much more sophisticated logic than "sum(input*weight) > threshold". In fact, it seems that quite a bit of computation happens in individual neurons, not only at the network level. At the very least, the neuron activation function is not fixed the way it is in an ANN after training; it changes constantly for various reasons.
Oh, the number of ways this model doesn't match reality couldn't even be counted. I suppose my standard for achieving an "artificial something" in biology is whether it accurately reflects reality well enough to learn from, and I only meant to imply that this might.
I will say that my mental model does hinge on the idea that the action of a single neuron at a single point in time in a single context can actually be equated to "sum(input*weight) > threshold". Doing the actual computation to figure out a principled measure of weight (and input, context, and maybe even time for that matter) is way outside our ability, but it seems like something that could be approximated in a simple experimental model!
"if accurately reflects reality well enough to learn from" - but are we learning real stuff from that, or only implications from the fake/artificial thing? I mean for sure we can see that as a brainstorming, exciting by itself, but does that get us closer to understanding the real thing, or it's at the maximum a Plato's cave exercise in rationality?
> don't emulate even a fraction of the processes you find in their biological namesakes.
It isn't expected that e.g. an artificial heart would emulate all processes of the heart. It just needs to cover the function of the heart to some degree, sometimes (usually?) they don't even have a beat.
> made by people, often as a copy of something natural
There is absolutely no requirement for the "copy" to be functional. See: artificial flowers. Things like "artificial hearts" are the exception and not the norm.
I was thinking about this nit for a while. Artificial flowers do fulfill one purpose of flowers, i.e. decoration. And the parent is right that most other "artificial" things serve to fill in for some original purpose of the natural thing. But the real point is they don't have to fill all the purposes, or even most of them. An artificial eye could be made of glass or it could be a camera a thousand miles away... in the first case, it's just there in someone's head for show, fulfilling the aesthetic purpose of an eye. In the second case it's there for the purpose of seeing something and fills no aesthetic need at all. So an artificial neuron could fill any one of many functions... it could even just be a big plastic model of a neuron in a doctor's office.
Artificial neural networks have been called that since the 1940s. So that's just what they're called. I assure you that people in the field know the difference very well.
Yes, "synthetic analogy" or something similar would be more appropriate. I guess artificial synapse always means an approximation (e.g. if its similar it's only in some partial way: the device uses ionic transport etc). I think artificial synapse has become the nomenclature in the academic literature and they are just going along with that. The article makes a bit of a mess of it though.
> they don't emulate even a fraction of the processes you find in their biological namesakes
As long as they provide the same resulting behavior, it doesn't matter whether the processes are the same. If the processes were the same, it'd be an electrochemical process and would be 10K times slower.
In the spirit of the thing, you have to admit that "artificial intelligence" is already a widely accepted description of a variety of algorithms which will never literally replace human brains in human skulls... so by extension, any hardware built to run those algorithms could be described as an artificial brain full of artificial neurons, if only to differentiate it from an actual brain running artificial intelligence.
It doesn't have to be, but both are, at some level of abstraction, switches that integrate multiple weighted inputs and set an output level that's used as input for other switches.
If this is what we expect, the analogy holds and the implementation details don't matter.
But there is a fundamental difference that breaks the analogy: the "function" that the neuron uses to set its output based on its input is not a function at all, it is a stateful mechanism.
That is, for the same input at different times, a real neuron will have different outputs, and its own outputs may change its state. In contrast, the memristor, once programmed, always applies the same function to its inputs. Even if it can be reprogrammed with a different weighting, it can't do so based on its own output - at least not with any current neural network architecture.
I never said the "artificial neuron" would be stateless. Natural ones have states and, if we model artificial ones as stateless it's just because they are easier to model that way. In that sense, an artificial neural network is an exercise of how little "neuron-ness" one needs to make a useful neural network.
I read other research recently showing that real synapses store additional dimensions of data beyond what was previously thought. Whereas before they were thought to be a simple binary fire/no-fire, they also transfer data through the width and height of the firing. Because of this, neural network designs may be missing a key feature.
That has never been thought by anyone but computer scientists who never looked at a biology textbook.
To begin approximating what a lone spherical synapse would actually do, you'd need to solve 2^n coupled second-order differential equations, where n is the number of ions involved.
That is before you throw in things like neuro transmitters and the physical volume of a cell. Simulating a single neuron accurately is beyond any super computer today. The question is how inaccurately can we simulate one and still get meaningful answers.
We are way too stupid to solve this riddle, but I'm rather optimistic we could build something that solves it, or at least build something that can build something that solves it.
I'm looking forward to all the "easy" things it will figure out, sticking us in a loop of "why didn't anyone think of that?" Something like the nth-generation ML offspring solving the building of viable neurons at scale by breeding some single-cell organism.
We didn't solve flight by building a bird. We solved it by building a plane. The problems we care about might not be solved by neurons at all. But right now using ANNs as a model of the brain is like saying that a bunch of kites have the same behavior as a flock of birds.
Seems to me, a bunch of asynchronously moving kites would be a more efficient approach to modeling a flock of birds than, say, iterating all those birds' positions frame by frame in a synchronous loop.
Flocks are best modeled as aggregates of extremely simple agents who want to avoid collisions, avoid complete separation and move with the center of mass of the flock.
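Those three rules are essentially Reynolds-style boids. A minimal 2-D sketch (Python; every coefficient and the neighbour radius are arbitrary choices of mine, and I've folded "avoid separation" and "move with the centre" into one cohesion term):

```python
# One synchronous update of a toy flock: cohesion pulls each agent toward
# the centre of mass, separation pushes apart agents that get too close.

def flock_step(positions, velocities, dt=0.1):
    n = len(positions)
    cx = sum(p[0] for p in positions) / n   # flock centre of mass
    cy = sum(p[1] for p in positions) / n
    new_vel = []
    for i, ((x, y), (vx, vy)) in enumerate(zip(positions, velocities)):
        vx += 0.05 * (cx - x)               # cohesion: drift with the centre
        vy += 0.05 * (cy - y)
        for j, (ox, oy) in enumerate(positions):
            if i != j and (x - ox) ** 2 + (y - oy) ** 2 < 1.0:
                vx += 0.1 * (x - ox)        # separation: avoid collisions
                vy += 0.1 * (y - oy)
        new_vel.append((vx, vy))
    new_pos = [(x + vx * dt, y + vy * dt)
               for (x, y), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

start = [(0.0, 0.0), (10.0, 0.0), (5.0, 5.0)]
moved, _ = flock_step(start, [(0.0, 0.0)] * 3)
```

Note that this is exactly the kind of frame-by-frame synchronous loop the next comment complains about.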
You're willfully missing the point. I'm not talking about the behavior of a kite (the kite could be designed to act autonomously and do whatever). I'm saying asynchronous analog modeling is by definition going to be both more efficient and capture a smoother gradient of concurrent states than a giant for/next loop (essentially what all neural networks do today) where each agent is checked and modified sequentially.
It's not exactly a "new" finding that neurons communicate signals to each other by means other than purely electrical signalling. The existence of well over 100 different neurotransmitters has been known for some time, and these are used to create unique signal cascades within the receiving neuron depending on the exact concentration sent. There is nothing equivalent to this in artificial neural networks.

Artificial neural networks are not necessarily using binary neurons. Each activation unit may receive and pass on a continuous signal, but this tends to be a single floating-point number, usually normalized to be between -1 and 1 or between 0 and 1. Biological neurons are sending at minimum hundreds of these continuous-valued signals to each other. Likely, the closest way to build an equivalent artificial neural network would be to have each neuron make hundreds of connections to every other neuron it is connected to, rather than just one. But even that isn't necessarily equivalent, as what actually happens inside the cells in response to these signal cascades isn't all that well understood, and it involves quite a bit more than just determining what sort of signal to pass on to the next synapse.
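To make the "hundreds of signals per connection" idea concrete, here's a sketch (Python; the shapes, names, and clamping are my invention, not a claim about real signal cascades): each edge carries a whole vector of concentrations instead of one scalar, and the receiving unit collapses it with per-transmitter weights:

```python
# An edge carries a vector of "neurotransmitter concentrations"; the
# receiving unit mixes them into one activation. All numbers invented.
import random

N_TRANSMITTERS = 100   # order of magnitude from the comment above

def receive(concentrations, per_transmitter_weights):
    total = sum(c * w for c, w in zip(concentrations,
                                      per_transmitter_weights))
    return max(0.0, min(1.0, total))   # clamp into the usual [0, 1] range

signal = [random.random() for _ in range(N_TRANSMITTERS)]
weights = [random.uniform(-0.02, 0.02) for _ in range(N_TRANSMITTERS)]
activation = receive(signal, weights)
```

Even this stand-in reduces the cascade to a linear mix plus a clamp, which is exactly the simplification the comment is warning against.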
These are... not the synapses you're looking for. These are resistors. It doesn't seem they can be used to replace weights in hypothetical deep-learning circuits. They certainly can't replace real synapses.
> The new programmable resistors are similar to memristors, or memory resistors. Both kinds of devices are essentially electric switches that can remember which state they were toggled to after their power is turned off. As such, they resemble synapses, whose electrical conductivity strengthens or weakens depending on how much electrical charge has passed through them in the past.
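A toy software model of what the quoted paragraph describes (Python; constants arbitrary): the "memory" is just conductance that drifts with past charge flow and persists between uses:

```python
# Toy memristive "synapse": conductance is ordinary instance state that
# strengthens or weakens with the charge passed through it.

class ToyMemristor:
    def __init__(self, conductance=0.5, rate=0.01):
        self.g = conductance          # persists: this is the "memory"
        self.rate = rate

    def apply(self, voltage):
        current = self.g * voltage    # Ohm's law with the stored state
        # past charge flow nudges the conductance, clamped to [0, 1]
        self.g = min(1.0, max(0.0, self.g + self.rate * voltage))
        return current

m = ToyMemristor()
for _ in range(10):
    m.apply(1.0)       # repeated positive pulses strengthen the device
# m.g is now above its initial 0.5: the device "remembers" the history
```

That history-dependence is the sense in which these devices "resemble synapses" - and, per the surrounding thread, about the only sense.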
Yeah, headlines rule all - kind of annoying. I don't see any better comparisons even in the article text, though maybe I didn't look closely enough. I should probably stop being surprised by this stuff someday.
Whenever someone comes up with a new resistor type, they seem to ignore all the research and development done with NAND memory and regular CPU architectures. As with the memristor, all of the R&D behind NAND flash and 5 nm technology outweighs all these new types of gates.
What are the main differences between programmable resistors and varistors or photoresistors? I guess presence of volatile/non-volatile memory, smaller scale size and maybe lower voltage operation (e.g. compared to varistors)?
But is that the point of artificial "neurons", to mimic the functioning of actual neurons? I'm certain we don't have a complete model of how the natural neurons function.
There's nothing biomimetic about this, any more than a regular ANN. It is a way to encode a "neuron" from the ANN in hardware, instead of running it in software.