The cortex is a neural network of neural networks (medium.com/the-spike)
297 points by curtis on March 24, 2019 | 127 comments



This is a good article for deepening the complexity model of the brain, but the mere physical structure and behavior of subcomponents of neurons are only a tiny piece of the puzzle. To understand the workings of the brain you will also need to add in the complex interactions of hormones and other chemical agents in the brain and throughout the nervous system, and the feedback loops they establish with the other systems of the body and, through them, with the external world.

Neural networks of neural networks doesn’t begin to describe it. We haven’t even scratched the tip of the tip of the iceberg in understanding this stuff.


I appreciate what you are saying, but it may be that it's premature to try to list things necessary to understand something when it's not understood by anyone.

In fact, I think you're probably dead wrong. Certainly hormone interactions and feedback loops in the body have some consequence on an organism, but these things take time. Let's say that some feedback loop is extremely fast, say 10 seconds. It seems reasonable to assume that you can't have a complex feedback loop with the body much faster than that, just because it takes time for chemicals to physically move. Cognition is much, much faster than that. There are many situations where you can learn, and then apply that learning, in less than a second. Now if we were instead talking about what it takes to build a complex organism that can satisfy its own needs, then certainly these complex hormonal feedback loops are essential. You can't have an organism survive when it doesn't look for food when its body is out of fuel, or keeps eating poisonous berries because its body can't send the reasonable feedback to its brain, and you can forget about dropping everything and competing for mates at the correct time without some kind of behavior-modifying hormonal signals!

I would say that your assertion is not significantly different from saying "you can't understand the workings of the brain without considering the complex interactions of the brain with the lungs, since without oxygen the brain can't work". So while I agree that the brain exists as part of the body, I don't see why you would automatically assume that the hormonal feedback loops with the body are at all necessary for cognition rather than a way to tell the brain when it's necessary to find food, when it's adaptive to conserve energy and be lethargic, or what have you.

My point is that we may or may not find that there is some critical ingredient of cognition hiding in this place or that, and you can't very well tell us where that will be if nobody knows if it's there.


Well neurotransmitters are a kind of hormone and they certainly play a direct role in the computational processes of neurons.


Neurotransmitters are not hormones. Some neurotransmitters are also hormones.


Yes, you're right. That was phrased a bit haphazardly.

My point was that we can't just determine the typical time constant for a hormone's effect and run home with it since there are chemicals which have significantly lower time constants and some of them act as endocrine hormones as well.

Furthermore, an immediate feedback loop isn't the only possible way the body could play a role in cognition, though I suppose the OP is aware of this and was talking about moment-by-moment interaction with the body on purpose.


Well sure. My point was that there are countless additional factors to consider, beyond just physical structure. Those feedback loops may be slow, but there's plenty of evidence that they influence aspects of cognition quite strongly. On the flip side, a lot of current research about brain function is built on fMRI scans, which monitor blood flow changes to exact locations in the brain, but necessarily at a long temporal distance from the actual “decisions” being made.

So yes I think we agree that it’s all more complicated than anyone can even comprehend just yet.


> My point is that we may or may not find that there is some critical ingredient of cognition hiding in this place or that, and you can't very well tell us where that will be if nobody knows if it's there.

It's like the parable of the drunken man looking for his lost keys under the street-lamp. He only looks under the lamp for his keys because that's the only place the light is.

Similarly, in neuroscience, we only have a few tools to look at the brain. By some miracle, the brain happens to be electrically active to a degree that we can shove wires into it and pick up signals. So, dutifully, grad students across the world have been shoving needles into brains and trying to pick up signals and then tell advisers what that means.

Another miracle is that blood carries iron whose magnetic properties change with oxygenation, and the brain has a fair bit of blood in it. So, after a lot of trial and error, we've been able to build very complicated fMRI machines that can give grad students an idea of the usage of blood in the brain. This is then correlated with other measurements to find out what is happening in the brain.

Recently, we've been able to combine two different little miracles together. One is opto-genetics, a little serendipity we pulled out of the chromatic hot-pools in Yellowstone. Essentially, when you shine light on a little bacteria, its cell membrane opens up and you can squeak certain ions through the hole. The other little miracle is CRISPR-CAS9. Here, you can again use some tricks bacteria have made up to more easily copy-paste in the genetic code. Using these tricks and a lot of help from post-docs, yet more grad students have been able to report results in lab meetings to their advisers.

More techniques exist, of course, but my point is that the vast majority of neuroscience is still in the dark, away from the drunken man searching for his keys in the light.

Look at astrocytes. These guys make up ~50% of the brain. Very recent research says that they play something of a role in the synapse, moving NMDA receptors around the cleft, recycling, phospho-tagging receptors, etc. It's super recent work. But also super important. Before, we thought that only the neurons were involved in the synapse, but now we have good evidence that other, action-potential-lacking, cells are also deeply involved.

That vein of research is also really hard to do and perform experiments in. As such, most grad students are wise not to do their work in that kinda lab. It's hard and has a high potential of failure, and therefore no degree.

So, wait and see. Our understanding of the brain is still very much in its infancy. We've not really even got a good accounting of all the cell types and connections in the brain yet.


I enjoy that we are seemingly aware of our lack of understanding of the workings of the brain but are happy and eager to proclaim the advent of intelligent machines who will replace humans in all aspects imaginable.

What we are really building are pretty good pattern recognition machines, often with surprising capabilities. But we overreach when we equate that with intelligence.

At least the buzzwords, media hype and industry demands make it useful to discuss the ethics before we are actually able to produce something truly artificially intelligent.

Until then, we continue to add DL algorithms to our Madame Tussauds-esque collection of artificial intelligence.


I feel like there’s a lot of people out there under the impression that we’re on the brink of human-like AI (in the self-driving car sphere, for example).

Surely there’s going to be some sort of crash when they collectively realise that’s not the case.


None of the ML people I’ve worked with (mostly at FB and Google, ymmv) have seriously believed we are anywhere close to human-like AI. The shortcomings of current ML systems are well understood — given metric tons of data, they are really great nonlinear regressors and data interpolators. They are absolutely terrible at generalizing, with no obvious solutions to this on the horizon.


I don't think ML people belong to the group that believes we are close to human-like AI. In fact, I think they are some of the few who are in touch with how far away it still is. As it should be.

Even a few years ago, I worked as a student at a Big4 firm, and these firms went all out on "technology consulting", telling their clients about AI this, AI that, and how it's gonna be robots taking it all over. Of course those were advertisements, ruses even, for getting audit & tax customers via the "compliance" angle (e.g. cloud compliance laws).

But anyway, there is a true craze about AI and autonomous driving which wholly ignores all the major roadblocks that we are facing now. It's buzzwords and marketing all over. It's the Gluten-intolerance of technology.

According to general public media we are 2-5 years away from level 5 autonomy. Ask people at Waymo about the issues they face...


I'm not sure that's a problem. Humans are terrific generalizers, and it would probably be a good thing for human civilization if human intelligence remained an essential part of the control loop. We can just continue developing ever-more-useful intelligence amplifiers and employing them as extensions of human cognition, instead of replacing ourselves by building minds like our own.


That one "singularity" guy is at Google, isn't he? Elon Musk claims to be worried about malicious super-intelligent AI. It seems like at least some people in the software and software-adjacent industries are talking like AGI is right around the corner.


Hmm.

Domain-specific AI is often already superhuman in performance, once trained. What we lack seems to be: (1) generalisation, as a trained AI is often useless in separate tasks; (2) efficient training, as AI take far more data to reach the same performance as a human (I have seen suggestions this is related to (1)); and (3) any idea at all what counts as self-awareness/conscious/qualia/etc., which may not be important from a performance point of view, but very much influences how people regard the AI and their future potential.


How can an AI system be "superhuman in performance" when it can't generalise outside its training set and requires vast amounts of data to have good predictive accuracy on that single training set?

What is really the case is that some AI systems (in particular, deep neural net classifiers) have superhuman accuracy and that again only in classification tasks.

Edit: I'm not being contrary for the sake of it. I think there is a very useful insight to draw from the success of deep neural network classifiers: that it is possible to perform classification without any kind of understanding of the objects, or their classes, at all. In the past, AI researchers operated under the assumption that reasoning and inference would be required to perform this task, but we know now that such abilities are not necessary if the task is only to classify objects. It's also clear that in some cases dumb classification can replace reasoning, as long as a reasoning problem can be reduced to dumb classification.

However- humans clearly have the ability to reason and draw inferences (regardless of how often we do that successfully). There must be some explanation for this. Why do we have an intelligence that goes beyond simple object recognition? What is the point of having a broad intelligence? It must mean that there is more to intelligence than tasks that can be reduced to classification. So reducing "performance" to "accuracy" (of classification) risks fudging an important difference between the "superhuman" abilities of machines and human abilities.

In the end, the question is what are humans good at and why can't machines do the same things, even though they can beat us roundly in other tasks? What is the difference between tasks that are easy for machines and tasks that are easy for humans?


I think we broadly agree. What you’re raising in your argument seems to be examples of what I consider to be current weaknesses.

However, I think you’re making a distinction without a difference going between classification accuracy and performance. That said, I am running on 1.5 hours less sleep than I need today — the only other important metric I can think of right now is the kind of skill which humans book-learn and which get implemented on a computer as an explicit and deliberate algorithm instead of being learned, and I simultaneously don’t count those as AI and think machines have beaten us for decades in that domain.

I do have one other question though: do you regard Alpha(Go|Zero|Star) as nothing more than classification?


Well, perhaps I'm making too fine a distinction, it wouldn't be the first time.

I believe in AlphaZero etc. the deep neural net component was used to identify moves with a high probability of leading to a winning board position. That's a good example of a problem we used to think would require inference or reasoning but that can, after all, be solved by classification.

Although that's not to say that the same problems can't be solved by inference, plus powerful computation. That remains to be seen.

Edit: Maybe I shouldn't have called it "dumb" classification; that sounds dismissive. There's no doubt that neural net classifiers are impressive in what they do. I mean it to say that they have no understanding of their domain, or ability to reason etc.


I think it is likely that there is more to animal/human intelligence than pattern matching.

I don't think we can say for sure that there is more, though. If and when we finally understand how it all works, I wouldn't be completely surprised to find that it is just many layers of pattern matching and associated feedback loops.

We just don't know enough yet.


s/classification/decision making/

AI systems are better than humans at making decisions. AI systems can scale to millions of decisions per second and beyond. AI systems don't need to spend 3/4 of their time sleeping or relaxing. AI systems don't need 20 years of training to become experts in their fields; they can be replicated almost instantly.

While the scope of decisions the AIs can do is somewhat limited at the moment, the trend is clear: in the not so distant future AIs will make all economically relevant decisions.


> in the not so distant future AIs will make all economically relevant decisions

I don't doubt that, just doubt that they will make unanimously good decisions, as expected of a superhuman system. AI systems are yet to be "intelligent" which makes half of their name a PR gag.

Pattern recognition is simply not equivalent to intelligence. It is certainly a pillar of intelligence, however.

> AI systems are better than humans at making decisions

Are they? Dermatology, transportation, face recognition, NLP... in all these fields we are yet to reach human-like performance outside of ultra-specific tasks. There is a difference between recognizing a human face every time, even if obscured by sunglasses or half hidden behind a pint of beer, and recognizing 100,000 human faces per second with an error rate of 8% because of shadows.

Decent performance can be reached when you train AI to do one hyper-specific thing under just the right circumstances with mountains of data which need to be prepared just right. Otherwise you end up with bias and other issues. I wonder how AlphaGo would have performed if it was required to suddenly switch to Mahjong or cooking but I suppose that wasn't its purpose.

Please also mail me a link for systems that prepare training data to automatically and reliably prevent bias and other major issues that lead to malperforming systems.

Demonstrations optimized for PR, AI companies which employ humans as AI-pretenders, and catastrophically failing systems are the reality of AI today. I don't doubt that we will improve but superhuman AI requires more than mere linear improvement from where we are right now.


Intelligent machines are not predicated on understanding the brain, necessarily.


Your statement presupposes that there is an abstract notion of 'intelligence' that can manifest itself in qualitatively different types of systems. That might be true. But, so far, the only kind of 'intelligence' that seems recognizable to us as such is the product of squishy bunches of biological neurons... maybe squishy bunches of biological neurons are essential to 'intelligence.'


It is unlikely that our future scientific endeavours respect "What we know so far".


This. We didn't reach the moon by studying birds.


But we did reach it by understanding physics...


As a gross approximation, the hormones and other chemicals might act like hyperparameters in the neural networks. Increase learning rates, adjust regularisation penalties, few-shot learning vs generalisation etc.
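To make that analogy concrete, here is a toy sketch (my own illustration, not something from the article or the parent comment): a scalar "neuromodulator" signal rescaling the learning rate of an otherwise ordinary gradient step. All names and values are made up.

    import numpy as np

    def sgd_step(w, grad, base_lr, neuromodulator):
        # Hypothetical: a global chemical signal in [0, 2] rescales the
        # effective learning rate, the way a hyperparameter schedule would.
        effective_lr = base_lr * neuromodulator
        return w - effective_lr * grad

    w = np.zeros(10)
    grad = np.random.randn(10)
    w = sgd_step(w, grad, base_lr=0.01, neuromodulator=1.5)  # "aroused" state: learn faster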


The gene regulatory network (GRN) within cells is similar to a recurrent neural network (RNN). An RNN is at least a useful partial model of a GRN.

If you think of gene expression within neurons as a small RNN, it's neural networks all the way down. The RNN within a neuron reacts to hormones and chemical signals, altering the functioning of individual neurons. There is also a feedback loop in the other direction.


Could you please elaborate how GRNs are analogous to RNNs? I fail to see the link.


The regulation of the expression of a group of genes can be expressed using the recurrent neural network formalism. Genes regulate the expression of each other with positive and negative feedback loops that are often additive or non-linear, similar to a higher-order RNN.

RNN nodes represent genes, and the connections between them are the regulatory feedback loops in gene expression.
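A minimal sketch of that correspondence (my own toy example, assuming a discrete-time, weighted-sum-plus-nonlinearity model of expression levels; the numbers are arbitrary):

    import numpy as np

    # x[i] is the expression level of gene i; W[i, j] is the regulatory effect
    # of gene j on gene i (positive = activation, negative = repression),
    # playing exactly the role of an RNN weight matrix.
    W = np.array([[ 0.0, -1.2,  0.5],
                  [ 0.8,  0.0, -0.3],
                  [-0.5,  0.9,  0.0]])
    b = np.array([0.1, 0.0, -0.2])   # basal expression / external signals (e.g. hormones)

    def step(x):
        # One recurrent update: weighted feedback plus a saturating nonlinearity
        return 1.0 / (1.0 + np.exp(-(W @ x + b)))

    x = np.array([0.5, 0.5, 0.5])
    for _ in range(50):              # iterate the feedback loops toward a (possible) steady state
        x = step(x)
    print(x)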


I would argue that we have definitely scratched the tip, though there’s a lot more iceberg to go.

To anyone interested in this, I would highly recommend picking up a copy of “Principles of Neural Design”. It’s a look at how and why brains operate the way they do, from thermodynamic and information theory axioms up. A lot of the biology went a good bit over my head, but the authors do a great job of developing a set of hardware-agnostic principles that all brains follow.


This saddens me, as I wonder how much time it will take. As my name states, I'd like to establish a connection to the human brain and digitize it, though the mystery of consciousness is something I cannot grasp right now.


I can't imagine we'll get anywhere close to creating high-fidelity backups of human memories or consciousness within our lifetimes, let alone any sort of realtime brain-computer interface like what you see in the Matrix.

The problem is that the biology of the brain is incredibly complicated. Dynamic instability of microtubules, contribution of extracellular factors, neuron biochemistry, etc etc... Sure, we've extracted signal to reconstruct primitive visual features from the basal ganglia of mammals, but that's a long way off from what we'd need for humans. (I can't easily cite atm, but I'll be happy to come back and put in references if desired.)

The other big issue is that it's quite invasive to get data out of this system. I imagine we'll be making progress on human cloning and artificial organs long before we crack this nut simply because of how disruptive and insufficient current techniques are.

All that said, I'm pretty sure we're all going to die without any archival backup of our brain-encoded memories. Progress will be made, but not in time for us.


Or to say it differently: the tip of an iceberg is also an iceberg.


So, a neural network of a neural network is just a deeper neural network. The big question in dendritic processing is whether it is used (conflicting information about that, e.g. Jia & Konnerth's work), whether it represents anything, and how it is learned. Plasticity is all over the place in neurons and takes place also at the dendritic level with cooperation & competition between synapses, temporal dynamics and neuromodulation. The credit assignment problem is hard to solve at the circuit/population level, but dendrites offer an intriguing alternative, as it is possible for them to bidirectionally communicate with the spike initiation site.


Indeed, the Network-in-Network architecture [1] was a compelling idea to get complex activations, until it was realised that it's just a standard neural network which is not fully connected. Since neural networks are universal approximators, it's a bit silly to talk about something else being more powerful, it's all about the prior, bias, and training, which are all subject to the No Free Lunch theorem.

[1] https://arxiv.org/abs/1312.4400


You can also compose functions together but there’s a reason that programmers don’t generally jam everything into a single function or think about programs that way.


Right, which is one of the main critiques against deep learning, there is no separation of concerns or encapsulation, just a single function matching input to output. But at the end of most days, performance is what matters. Similarly, the brain hardly has a "clean" structure, it's seemingly spaghetti code even though there is some structure to it.


The model of the artificial neuron is only inspired by the biological one; it is not meant as an approximation of it. Instead the goal is to obtain good results in actual applications.


Yes, and the sooner we dispense with this absurd notion that we have any evidence we are closely modeling the human brain, the better.


Sure, but it's helpful to compare our models of artificial intelligence with biological intelligence to see if there's anything to be learned.

We learned how to make airplane wings from the shape of a bird's wing. Of course we should not model our artificial wings so closely as to make a plane with wings that flap. But there was still plenty of stuff to learn by asking the question "why does a bird fly and my contraption doesn't?"


As an explicit example, winglets on airplanes were conceptualized from watching the way bird wings flap, observing the curl on the outer edge of the wing, discovering that it controls vortex formation, and then applying the same concepts to fixed wings.

That kind of thing happens all the time in aerodynamics, fluid dynamics, mechanics, etc — precisely because evolution is a pretty good optimization function, and so “natural” solutions can often be very close to optimal, but using hard-to-discover quirks of physics.


One of my favored arguments for maintaining as much biodiversity as we possibly can.

Each species' death is millions of years of labwork trashed


Labwork with goals as loosely defined as life's ("reproduce", "accelerate the rise of entropy") is costly. It's a robust, deep objective long-term, but extremely inarticulate, with poor ROI short-term.

A species of spider nailing down how to live on a particular type of rock on a particular island, in a very particular environment over millions of years, is simply not articulate enough "lab work".

Which is of course not an argument for killing off species. But it's an argument against approaching that moral question from such a utilitarian perspective. You might easily end up with results you don't like, once you do the cost/benefit analysis in a less hand-wavy manner.


I think the fact that it IS so inarticulate, but that the work has already been put in, is one of the best reasons to protect biodiversity


The "sunk cost" fallacy :-)


yep life was wasteful, time to wipe it out and start over amirite?


Alas there’s a lot of people out there claiming otherwise, gaining publicity and raising money off of this quite common misunderstanding.


Please dispel away.


Off topic, but it is really uncomfortable to read anything on Medium because 25% of my screen is covered by a header containing 'Sign In' and 'Get Started' and by a footer with a 'Get Updates' button.


In addition to the solutions in the sibling comments, let me offer my favorite one: a simple bookmarklet that zaps all sticky elements on the page. Works wonders in Medium and a lot of other sites.

    javascript:(function()%7B(function%20()%20%7Bvar%20i%2C%20elements%20%3D%20document.querySelectorAll('body%20*')%3Bfor%20(i%20%3D%200%3B%20i%20%3C%20elements.length%3B%20i%2B%2B)%20%7Bif%20(getComputedStyle(elements%5Bi%5D).position%20%3D%3D%3D%20'fixed')%20%7Belements%5Bi%5D.parentNode.removeChild(elements%5Bi%5D)%3B%7D%7D%7D)()%7D)()
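For readability, here is the same bookmarklet URL-decoded and reformatted (it removes every element whose computed position is 'fixed', which is what Medium's bars use):

    // Decoded version of the bookmarklet above
    (function () {
        var i, elements = document.querySelectorAll('body *');
        for (i = 0; i < elements.length; i++) {
            // Remove anything rendered with position: fixed (sticky headers, footers, banners)
            if (getComputedStyle(elements[i]).position === 'fixed') {
                elements[i].parentNode.removeChild(elements[i]);
            }
        }
    })();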


Thank you, this works like a charm! I tried it on tapas.io where it got rid of both the annoyingly thick header as well as the sidebar.


Fortunately there is an extension for Chrome solving that problem.

https://chrome.google.com/webstore/detail/make-medium-readab...


The 'element zapper' (or 'picker' if you want it made permanent) in uBlock Origin also works great for these situations.


I usually just drop into the browser dev tools and delete those elements. But I also try to avoid medium, blocking its domains by default and only unblocking if I really, really want to read something (which is becoming increasingly uncommon, since I don't like how medium basically say "we need to share your data with third parties"). I blocked medium in my hosts file for a while, but that turned out to be a little bit too inconvenient. I usually hope that the HN comments can give me enough information so I can stay away from medium...


The moment they started abusing the "unsaved data" notification in the browser I blocked their entire domain in uMatrix.


Check outline out. https://outline.com/czFN3X I believe it was made by someone here on HN, or at least I found it through it.


I only recently realised that tapping the “stacked horizontal lines” icon to the left of the url in iPhone safari enables a reading view which removes this stuff.


"... Our analogies often look to artificial neural networks: for neural networks compute, and they are made of up neuron-like things; and so, therefore, should brains compute. But if we think the brain is a computer, ..." OK, enough of this. Neurons are not computers. There is nothing what can be compared to actual neurons. "Artificial" neurons are just reduced models of the real ones, so that only the "compute" parts are used to calculate input vectors. It's only a fraction of what the real neurons actually do.

While I appreciate the article trying to actually understand what's really going on in neural networks, let's not make unnecessary dumbed-down assumptions. At least the subtitle of the article is actually correct. The main title is sensationalist "...17 billion computers!!".


It's funny, I thought the opposite. I was happy to read an explanation by an eminent and respected systems neuroscientist on hierarchy of computation, rather than the musing of an undergrad computer scientist on their first encounter with nature neuroscience.


An ANN is just calculus with matrices; the mathematics alone does not lend itself to any "neuronal" description, which is ad hoc and imposed from the outside. You can draw many computations in "neuronal" form, e.g., a logistic regression. It's really just a way of diagramming math.
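As a concrete instance of that point (a standard identity, not anything from the article or the parent comment): logistic regression,

    y_hat = sigma(w1*x1 + ... + wn*xn + b),   where sigma(z) = 1 / (1 + exp(-z)),

is exactly the diagram of a single "neuron" with a sigmoid activation. The picture adds nothing that the formula doesn't already say.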

"Computer" is an observer-relative term. There is no physical property of a system which makes it a computer. A "digital computer" is just a tool made of silicon which we use to aid computation (a goal we have). There are many tools (from an abacus to a waterfall) that we can use to aid in computation.

"Computation" isn't anything other than a goal we have. To interpret the brain as engaged in it carries no information and says nothing explanatory. The sense in which a brain is a computer is the same sense in which everything is: a physical system whose state evolution can be used to aid in computation (but isnt: no one uses brains to compute).


Let's not go too far. When we talk about the brain being a computer, we mean it in the sense in which some μC/μP and related paraphernalia form the computer inside your phone, or any other electronic device - that is, a component which takes all or most of the inputs, drives the device, and which, if you rip it out, leaves you with a useless and dead machine.

A rock and a waterfall perform computations in a theoretical sense if you look at them right, but there's also a different sense in which a bunch of op-amps in a circuit are a computer, a bunch of hydraulic logic gates are a computer, and a Raspberry Pi is a computer, but a rock isn't a computer. In that sense, human and animal brains are computers too.

(I'm before my morning coffee so I apologize for not being able to name "that sense" properly.)


I'd be very interested in knowing what this "sense" is -- I don't think there is one.

I chose waterfall by symmetry with a hydraulic computer. A waterfall is just what we call a hydraulic computer when we're not using it for computation.

I still only see an observer-relative distinction (ie., in how it is used). One can "juggle rocks" to compute.

You've said something about "inputs" and "useless when removed". Both of these are again observer-relative. Useless gives it away.

An animal isn't "useless" when its brain is removed, because it isn't our use of it which makes it alive.


I don’t think it’s helpful to eliminate teleological accounts categorically. This writing is clearly for public consumption and benefits from simplying descriptions of what’s happening, so that the layman reader might have the impression of having understood something.

Computational neuroscience could be written off by your statement that the brain is not a computer, but perhaps soften your stance and accept that it allows for applying tools from computer science to ask questions, just as physicists do.


Well my stance is hard for the sake of ruling out computer science as an explanatory framework for neuroscience.

Computational metaphors aren't explanatory; they're illusory sorts of explanations (like narrative) which "satisfy" without providing a causal model (i.e., a scientific explanation).

I'm not convinced they have been helpful, and mostly end up giving deeply mistaken impressions about the nature of digital computers -- rather than helpful impressions about the nature of brains.


Computational metaphors about the brain usually acknowledge explicitly the nonlinearity of information transformation, and use the word computation under the assumption the transformation is doing something useful. This hardly seems controversial so I find the statements you’re making a bit allergic.


I only agree that it's good to have articles using layman's terms. But a problem arises when false/simplified claims are made from which the uneducated reader will make false or exaggerated assumptions! For example, an article says "Quantum Physics is so weird that it's almost magical!" The uneducated reader can only make the assumption "Quantum Physics is magical, because it's weird!" due to lack of background knowledge and understanding of terms. Same goes for any other popular science article/journal.

The key is to explain using layman's terms without making false claims while trying to simplify the subject.


I agree, but it is rare to find a layman's explanation that is faithful in a way all the experts can agree on.


Your argument is solely based on the rather odd semantics you ascribe to the term "computer". In common contemporary parlance, hardly anyone would call an abacus or a waterfall a 'computer' just because you can use such things to compute. Conversely, I think you would not cease to call a desktop PC a computer if it is solely used for playing games and never used for 'making computations'.

When people call a thing a 'computer' they mean that it can run (potentially complex) algorithms to produce outputs based on inputs, in many cases it would also be assumed that it has memory, and it would not have much in the way of direct physical interactions with the world (although it might be part of systems that have such interactions). Based on these notions, it seems very valid to interpret brains as some kind of computer, made out of cells instead of metal and silicon, and formed by evolutionary pressure instead of human engineering.


Yes, and I'm calling your view here incoherent or, at least, mistaken.

It isn't valid to interpret brains this way. An "algorithm" isn't something a physical system can "run". These are observer-relative properties a system instantiates because of our use of it.

A brain is a physical system and our model of it should be a causal model in terms of its intrinsic properties that do not "disappear" when not in use.

A digital computer is an oscillating electrical field across a piece of silicon whose oscillations "correspond" to steps in an "algorithm", but only to an observer who imposes those correspondences. The program "1 + 1" on a digital computer is some state transitions of the CPU's electrical state -- it is an algorithm only when we impose a meaning on those oscillations that they do not themselves have.

There is no physical distinction between the "waterfall computer" and the "digital computer" that bears on them being computers. They are both computers in exactly the same way. We haven't gone to the length of a water-to-LCD display, but there's no reason we couldn't.

Water falling over a series of moveable rocks can (easily) be arranged to display the result of "1 + 1" on an LCD (eg., bucket/pixel, with water level = intensity, pipes to handle "graphics").


It is your view that is incoherent. A brain and a PC are physical systems that work based on causal physical mechanisms. Both can run algorithms, e.g., my brain and a computer can both instantiate the same algorithm, albeit in very different ways. Your argument actually seems to be based on some notion of function, or extrinsic utility, or teleology of some sort, i.e., computers are artifacts created for a specific use, while animals and their brains just exist in themselves, or something to that effect?


Well, when I tried to say the same thing here on HN a while ago I was heavily downvoted. So you agree with Searle then? (I almost do, but I leave open the possibility that he is wrong.)


I don't disagree, but you've just described the Motte and this time omitted the Bailey. I'd like the Bailey folks to at least take pause.


All models are wrong, some are useful.


Not all models are wrong, since some may be modelling exactly the aspects of importance and not modelling exactly those which are incidental and of no consequence.


The point is that no model of a complex real world phenomenon is ever perfect, so, you shouldn't try to create a perfect model. But not being perfect doesn't mean that a model can't be useful; among all the possible "wrong" models, you need to identify the one(s) which is useful, ie, which models well the aspects of importance to you.


I realize all that. I'm questioning the wisdom of the standard manoeuvre of preemptively calling all models imperfect or wrong, though.


Not everybody realises all that, so I suppose the OP cited that quote to reinforce the point that ANN are a very useful, but very imperfect model of the brain. I cite it often enough, but usually it's in person, where I can see the reaction of the person I'm talking with, and explain if necessary.


Then they are still wrong. It's just that they are wrong in the part that you don't care about.


Is it wrong, or is it simply an instance of the fact that a model is not the thing it models?

The use of the word wrong sounds wrong to me here. The wrongness should be about whether the model accurately, or just approximately, or not at all, shares with the modelled object that part of the behaviour we are interested in.


All models are wrong. The map is not the territory. We are arguing semantics now. The point you're trying to argue is that a model is "right" in the sense that it models whatever we wanted it to model. We argue that a model is "right" if it captures the "true" nature, which it will never be able to do (because of its virtue of being a model) and hence all models are wrong.

Semantics. Unless you disagree of course.


Well, a neural network is a neural network of neural networks.


I think the point of the article is that brain neurons are not equivalent to the ML representation of neurons, so that count of 17 billion in the cortex would actually require many more "ML neurons" to be simulated.

It also explains how we can keep adding complexity even if no new neurons are being created, since the branches themselves act like extra neurons.


Another point in the article is how the dendrites provide local storage and 2-layer NN processing for the neuron. Is this even considered in ML representations?


Wouldn't that just mean the brain's more like a small-world network, with small highly connected 'blobs' tied together in a sparser, larger network?


That is one theory. Multiple "agents" is something that has been proposed. I don't have the background to read beyond the abstract of https://eml.berkeley.edu/~webfac/malmendier/e218_sp06/Carril...

> We model the brain as a multi-agent organization. Based on recent neuroscience evidence, we assume that different systems of the brain have different time-horizons and different access to information. Introducing asymmetric information as a restriction on optimal choices generates endogenous constraints in decision-making.

There's also Society of the Mind ( https://en.wikipedia.org/wiki/Society_of_Mind ) by Marvin Minsky (which is very readable)

> A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.


The brain is a small world network[0], but the article's thesis is that a neuron (a single vertex in that small world) can actually compute quite a bit on its own; it's not an integrate-and-fire unit or a ReLU. I guess you could expand things out and treat each synapse, rather than each neuron, as a unit, in which case there's an extra level in your "world". This is a little different from how people normally think of "connectivity" because the way information is routed within a cell is much more flexible than between them. Still, it's not totally bonkers.

[0] Just so we’re on the same page, when people talk about the brain as a small world network, the general idea is that a neuron in (say) auditory cortex mostly communicates with other auditory neurons, locally and in higher/lower auditory areas. It doesn’t talk to neurons representing (say) touch in the small of your back. Thus, there are far fewer than 86B! connections in the brain, but there are still an awful lot!
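If a concrete toy helps, the standard way to build such a graph is the Watts–Strogatz construction (this is just the textbook model via networkx, not a brain simulation; the parameters here are arbitrary):

    import networkx as nx

    # 1000 nodes, each wired to its 10 nearest ring neighbours, with 5% of edges
    # rewired to random targets: mostly-local cliques plus a few long-range shortcuts.
    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=0)

    print(nx.average_clustering(G))            # stays high, like a regular lattice
    print(nx.average_shortest_path_length(G))  # drops sharply thanks to the shortcuts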


Another thing that is not well known about the brain (among non-specialists) is that there are roughly one order of magnitude more glial cells than neurons in the brain, which, while non-spiking, definitely also respond to synaptic activity and could be involved in computation.


This cannot be overstated. Glial cells do seem to communicate with traditional neurons


I take issue with the "dendrites know more than neurons" bit. The fact that they respond to almost all inputs suggests they are performing a different function than a somatic spike. My preferred explanation for that is that any type of input can be predictive of a somatic spike and that has to be transduced somewhere.

Specific patterns of concurrent input on a dendrite drive sub-threshold depolarization which is theorized to be key for sequence prediction.


Is this the first time the concept of neural networks of neural networks has been proposed? I think it's close to an idea I'd been knocking around in my head but never studied NNs deeply enough to encounter.

I wouldn’t be shocked if consciousness were composed of hundreds or even thousands of NNs. Or even a tree thousands of levels deep.


Marvin Minsky's Society of Mind was published in 1986.

A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

https://en.wikipedia.org/wiki/Society_of_Mind


As far as I know, nobody takes the "Society of Mind" theory seriously anymore and none of it lives in present day AI work. There was never a clear algorithm that could be constructed from the chapters on K-lines.


We have a fair idea how many layers are involved in some activities, such as eye-hand response times, because we can divide the observed delay by the speed of a single layer. It's not a lot of layers. Reference needed.


The 100-step rule is a connectionist theory constraint from below in cognitive science and neuroscience that states that no primary brain operation (e.g., face recognition) can take more than 100 neuron firing “steps.” This imposes a temporal restriction on primary brain processes of 500ms. (Feldman & Ballard, 1982)
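The arithmetic behind that bound, spelled out (the per-step time is an assumption of roughly a few milliseconds per neural firing, the order of magnitude the connectionist argument relies on, not a quoted figure):

    ~500 ms per primary operation / ~5 ms per serial firing step ≈ 100 sequential steps

which is why connectionist models lean on massive parallelism rather than long serial chains.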


That’s neat. But neurons also trigger multiple neurons. So a single neuron is potentially part of numerous hierarchies.


No, it's not. The idea of "neural networks of neural networks" exists as long as neurology is a thing. It's the underlying structure of any network that it can be subdivided in sub-networks. The article tells nothing new, but is making sensationalist assumptions instead.


> Or even a tree thousands of levels deep.

That can't be true otherwise it would take you a minute to form a conscious thought. Biological neurons are slow. If anything, artificial neural nets are 'deeper', going up to 1000 layers.


We need instruction pipelining...



There's network in networks: https://arxiv.org/abs/1312.4400 And network ensembles in general: https://en.wikipedia.org/wiki/Ensemble_learning


Essentially saying the brain is a system of systems


A neural network of neural networks is... a bigger neural network. Having two or three layers of nonlinearities per "neuron" doesn't do anything qualitatively different.

There are probably lots of huge differences between NNs and brains but this article is really making the case that the brain can be modeled as a big NN, just with a few thousand times more activations than neural cells.


The brain can indeed be modelled by NNs; however, the neurons in the brain are more complex and require more detailed neural models like the Hodgkin–Huxley model, or can be approximated by more simplified models like integrate-and-fire models. ( ref: https://www.humanbrainproject.eu/en/ )


What's the evidence that "brain" (whatever that means) can be modelled by NNs? What features of the brain can NNs model?


What I don't understand is the part about the supralinear/sublinear particularity of the dendrite. First, the article explains that: " If enough inputs are activated in the same small bit of dendrite then the sum of those simultaneous inputs will be bigger than the sum of each input acting alone (...) A bit of dendrite is “supralinear”: within a dendrite, 2+2=6." Further in the article, I find this explanation: "Because dendrites are naturally not linear: in their normal state they actually sum up inputs to total less than the individual values. They are sub-linear. For them 2+2 = 3.5". What makes the difference between a bit of dendrite spitting out a sublinear vs. supralinear "result"? I feel that the difference lies in the 'if enough inputs are activated' vs. 'in their normal state'. If that's the case, what's the "normal state"? Could anybody help me understand this part?
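One way to make the two regimes concrete, as I read the article (a toy numerical sketch with made-up numbers, not the article's actual model): scattered or weak input is summed through a saturating function, which is sublinear; enough coincident input on one branch triggers a local dendritic spike, which pushes the total above the linear sum.

    import numpy as np

    def dendrite_sum(inputs, spike_threshold=3.5, spike_boost=2.0):
        # Baseline: saturating (sublinear) summation, so 2 + 2 lands below 2 * (2 alone)
        total = 4.0 * np.tanh(np.sum(inputs) / 4.0)
        # If enough coincident input hits this branch, a local dendritic spike fires
        # and the response jumps above the linear sum (supralinear regime)
        if np.sum(inputs) >= spike_threshold:
            total += spike_boost
        return total

    print(dendrite_sum([2.0]))                           # one input alone: ~1.85
    print(dendrite_sum([2.0, 2.0], spike_threshold=99))  # no spike: ~3.0  (sublinear, "2+2=3.5")
    print(dendrite_sum([2.0, 2.0]))                      # spike fires: ~5.1 (supralinear, "2+2=6")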


This article probably serves as an amuse bouche in the fluid world of mapping or replicating functions of wetware to algorithms and vice-versa; the top highlight: 17 billion neurons, almost sounds like one of those sampled, haunting soliloquies in prog or psy-trance tracks, which are usually restricted to snippets from sci-fi movies or taxonomy of the universe e.g. there are billions and billions of stars..

This blog post from the Stanford Institute for Human-Centered AI, dealing with a similar subject matter, is wide-ranging, incisive and replete with sources.

https://hai.stanford.edu/news/intertwined-quest-understandin...


So, a deep neural network?


Not sure if I understand, but it seems the dendrites are "grouped" in a way, and their influence on the output is a function of the group?

Is this functionally equivalent to having a two layer mini-network (that represents one brain neuron), with one neuron on top, and "child" neurons on bottom that mimic the grouping behavior? If this is true, then I would suspect our networks are already doing something like this automatically.


Yes, the linked papers deal with this 2-layer abstraction of a single neuron. In reality, however, neuron-to-neuron connections are different from dendrite-soma coupling, and those levels (dendrite and soma) differ in their ability to integrate synaptic inputs and undergo plasticity, so they're not really equivalent. This is still an active area of research with a lot of unknowns.
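In standard ML terms, the 2-layer abstraction looks roughly like this (a minimal sketch of my reading of that abstraction, not the linked papers' actual model): each dendritic branch pools its own synapses through a nonlinearity, and the soma then pools the branches.

    import numpy as np

    def two_layer_neuron(x, W_branches, w_soma, b_branches, b_soma):
        # Layer 1: each row of W_branches is one dendritic branch, seeing its own
        # subset of synaptic inputs and applying a local nonlinearity.
        branch_out = np.tanh(W_branches @ x + b_branches)
        # Layer 2: the soma pools the branch outputs and decides how strongly to "fire".
        return 1.0 / (1.0 + np.exp(-(w_soma @ branch_out + b_soma)))

    x = np.random.randn(100)                    # 100 synaptic inputs
    W_branches = 0.1 * np.random.randn(8, 100)  # 8 dendritic branches
    out = two_layer_neuron(x, W_branches, np.random.randn(8), np.zeros(8), 0.0)

So one biological neuron already corresponds to a small hidden layer in an ANN, which is the sense in which networks may already be "doing something like this" implicitly.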


The takeaway seems to be more about degrees of complexity than any particular structural component taking precedence.


So are we going to be able to use any of this for a new deeper deep learning framework?


I would keep my fingers crossed.


Not really


If you handwave enough at this, it looks like capsule networks.


At which point should we start to call it neural internet?


This seems to make intuitive sense. If we ever create a true AI it will probably be on the order of billions of neural networks connected together.


We're already attempting stuff at that scale. GPT-2 has 1.5 billion parameters.

https://openai.com/blog/better-language-models/


Metabotropic channels/synapses? ... sadly the most frequent ones are missing there.


Metadata is not just data


I've never understood the almost religious devotion many hackers have to the idea that the brain is a computer. The brain, or more practically the brain, body, and a pencil and paper, can slowly simulate a Turing machine without great difficulty. But a Turing machine can simulate a DFA too and that doesn't make it one.

This should not be construed as denigrating the wonderful achievements of AI researchers. Just because what they do is inspired by the brain rather than isomorphic to the brain doesn't mean it isn't great work.


In accordance with the Church-Turing thesis, the Turing machine stands to be capable of doing anything that should be called computation. It follows that if the brain is capable of simulating a Turing machine (this is called a universal Turing machine, by the way), then it too can do any computation. So then the class of things that both can do are the same, and so it is reasonable to call them the same thing, in some sense.


This only shows computers are a subset of what brains can do, not that Turing machines can do whatever brains can do.


If we conjecture that any physical process can be simulated by a computation then it follows that a Turing machine can simulate it.

While we don’t have any proof of this conjecture (as far as I know) neither have we discovered any exceptions.

This also doesn’t rule out the possibility of non-physical or non-mechanical elements in the brain (dualism/vitalism) but frankly I don’t even entertain that notion.


You’re just begging the question: if you assume your conclusion, any claim holds.

Which is exactly my point — everyone is completely okay with those assumptions, without justifying that. I find it suspect.

How about showing physical processes are necessarily Turing computable, that is, justifying your underlying assumptions, before the straw man implication that I’m talking about dualism?

The mathematical equivalent of your argument is that because all finite-length approximations of a number are rational, the number itself must be rational — but this is untrue, in the general case. And in fact, for almost no numbers does a finite set of those rational approximations yield a general rule to predict the full structure of the number.

It’s therefore unclear that our limited scientific models being computable mean the underlying object they’re approximating is computable. But if we don’t know reality is computable, then we don’t know it can be simulated on a Turing machine.

Just assuming an answer doesn’t help us resolve the claim.


Amazing how many people miss this. It’s like confusing the implication with the equivalence.


What I was trying to communicate was that the Turing machine is believed to represent the limits of what is physically possible. So then you have

Turing machine >= brain (since a brain is physical), and brain >= Turing machine (by the simulation argument)

The conclusion is that the brain and the Turing machine can do the same things (brain = Turing machine).


> What I was trying to communicate was that the Turing machine believed to represent the limits of what is physically possible

Another religious tenet with no observable basis.

Where did so many hackers get this misconception that computable and physically possible are proven to be the same? Many claim the Church-Turing thesis shows this. Have they never read it carefully?



