Neural Annealing: Toward a Neural Theory of Everything (opentheory.net)
86 points by dangirsh on Nov 29, 2019 | 48 comments



> But we’ve also mostly been doing it wrong, trying to explain the brain using methods that couldn’t possibly generate insight about the things we care about.

Wow, it must suck to work in neuroscience and find out that your entire field is pointless because this guy says so.


This comment is inflammatory; how is it at the top? Statements that we're doing it all wrong are either revolutionary or misinformed, and I don't see that it's obvious either way.


I thought it was a reasonable response to an inflammatory position. If you think you've discovered a new approach that works better than the current ones, you can announce it without dismissing an entire discipline as hopelessly ineffective.


I read the article (once) and didn’t encounter this passage nor much criticism in general.

Most of the article outlines the annealing idea and its related extensions in a positive way.

Inflammatory ‘comment’, sure, but hardly a main theme for the article.


It will not be obvious for some while whether the author is revolutionary or misinformed, but, historically, revolutions have usually depended on painstaking groundwork of the sort the author disparages in this statement. The discovery of gravity built on painstaking and thorough astrometry, and the discovery of evolution built on painstaking and thorough taxonomy. In neither case could it justifiably be said that these precursors "couldn’t possibly generate insight about the things we care about."


I do not like the lack of actual support in the data for this idea.

Not in developmental science, not in neuroscience. Not in how we understand cellular mechanics to work. Neurons and glial cells are born, grow, die and chemically change all the time.

Psychedelic therapy is in its infancy. Go and run trials before building sand castles of theory.

Most importantly, what is the testable assumption - that adding energy would change beliefs? But of what form and sort? That neurons organize into attractors? Then what do those look like, and what is the process? Without that it is woo.
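
For concreteness, the kind of thing "neurons organize into attractors" evokes is a toy Hopfield-style network, where states roll downhill in an energy landscape toward stored patterns. A minimal sketch (the patterns and sizes below are arbitrary, purely illustrative) - the open question is what, if anything, this corresponds to in real tissue:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two arbitrary +/-1 patterns stored with the Hebbian rule.
    patterns = np.array([
        [ 1, -1,  1, -1,  1, -1,  1, -1],
        [ 1,  1,  1,  1, -1, -1, -1, -1],
    ])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    def energy(s):
        # Hopfield energy: attractors are local minima of this function.
        return -0.5 * s @ W @ s

    def settle(s, steps=100):
        s = s.copy()
        for _ in range(steps):
            i = rng.integers(n)                    # pick a random unit
            s[i] = 1 if W[i] @ s >= 0 else -1      # flip it downhill in energy
        return s

    noisy = patterns[0].copy()
    noisy[:2] *= -1                                # corrupt the stored pattern
    print(energy(noisy), energy(settle(noisy)))    # energy drops as the state settles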


Hi AstralStorm,

You may want to check out some of the background research to get a sense of what this piece is trying to do. I can recommend:

- Atasoy's YouTube talk on CSHW (as linked and transcribed here: https://qualiacomputing.com/2017/06/18/connectome-specific-h... )

- Carhart-Harris and Friston's REBUS model: http://pharmrev.aspetjournals.org/content/71/3/316/tab-artic...

- More recently, Friston's Waves of Prediction: https://journals.plos.org/plosbiology/article?id=10.1371/jou...

Likewise, a great paper for understanding why a story focused on e.g. glial cells, cell types, and such is unlikely to produce substantial results about e.g. psychedelics: https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...

Beyond that, I would suggest the most helpful criticism is specific. If you don't agree with something, good to point out exactly what you disagree with. :)


A definition of how the aggregate would look, and an explanation of why lesions have predictable results in every brain, since a pure annealing-based process would produce amorphous results. Why are brains similar in structure?

This model just does not deal well with lesions. After all, a lesion would be an even stronger change than normal processes or psychedelics, causing the annealing to revert locally while it is healing. We do not observe this "untraining" behavior caused by healed neural lesions, as per experiments on quadriplegic rats that get spinal therapy. They need next to no rehabilitation to run again.

Likewise, the existence of the phantom pain phenomenon is counterevidence - why can't it simply be untrained?

Yes, I literally poked a hole through this idea. If it actually ever produces working results, then it should be able to fix phantom pain.

Likewise, a single seizure should cause permanent changes in personality or skill as aftereffects, which is not observed; repeated ones do, but still not exactly permanent effects. How does this model explain the difference? The applied "energy" is the same. Yet epileptics rarely show similar effects, even if they also suffer grand mal seizures. Why the difference?

Why do mood disorders show up most often in adolescence and not later? How does this model explain age-related dementia? Why do only certain anesthetics cause mental problems (or sometimes cure them, e.g. ketamine)? Etc.

The model perhaps works partially in a normally functioning brain, but does not help us understand at all how memory is formed or why it is located mostly in the hippocampus. Why does the cerebellum handle balance and movement, and why is there separate cortical handling of movement? "An attractor forms" begs the question of "why" or "why in this form", which this theory cannot answer. (Especially since these structures are mostly already formed in utero, but still need additional training.)

Whereas developmental neuroscience based on a proper hormonal basis can. (Up to a point; it's quite new.)

The whole thing is a logical (as in high-level) explanation with no biological basis. It lacks the complete causal chain for how psychedelics operate and why each of them produces different results. Most importantly, an underlying chemistry-based model would produce identical results, as the neurons themselves change permanently in response. (See: neuronal memory allocation mechanisms.) We just do not understand all the neurochemistry related to specific chemicals. Likewise how psychedelics and induced seizures are related - they are, but not entirely, and any useful model would explain the difference rather than cover it up.

Neuroplasticity is being studied; a simplistic model of "annealing" is not helping our understanding, whereas results relating to, say, the biology of a cell given specific inputs are. (E.g. LTP and its relation to CREB and thus other neurochemistry; synaptogenesis and axonal chemotaxis; and more, all of which get activated during normal and abnormal operation.)


Right. There is no testable assumption. This article is philosophy, which was the precursor to science. They are using an entirely outdated paradigm.



Philosophy is an 'entirely outdated paradigm'..? This is news.


This is really interesting but I would almost use the word "pseudoscience".

I mean a lot of it seems to be on the right track to me in a broad way, but it's problematic because it is a mashup of real scientific ideas, while the process seems to be more like a philosophical essay. You can't get scientific or engineering progress from philosophy.

There are good reasons that most of academia moved on from philosophy.


> There are good reasons that most of academia moved on from philosophy.

There are philosophy departments in almost every reputable University I know...

Everything that is non-STEM is underfunded; it doesn't matter if it is philosophy, anthropology, sociology, history, etc. This has nothing to do with academia itself, but with the managerial society we live in, where MBA-types decide on the value of everything with simplistic, dumb and short-sighted economic metrics.

Science absolutely needs philosophy. Science is philosophy. There is not a lot of encouragement to do real science/philosophy these days, because this requires deep thinking and following all sorts of unknown paths. Everything must be justified in terms of what gadgets can be built with the discoveries.

We are going through a profoundly anti-intellectual stage in western culture.


To see the difference between science and philosophy, compare this article to a reputable scientific paper.

You will see that science includes a testable hypothesis, experiments and/or analysis, and conclusions derived from the data.

This essay contains none of that.


Philosophy encompasses science. Testable hypotheses are a philosophical idea, justified on philosophical grounds. You can't justify the very idea of "testable hypotheses and experiments" by using testable hypotheses and experiments. There is a branch of philosophy devoted to this type of question, called "philosophy of science".

One very famous philosopher of science, Karl Popper, gave us the idea that for something to be considered a scientific theory, it must be possible to devise an experiment that could, in principle, falsify the theory. Still, the issue is not settled and there are alternative positions. These are deep questions.

One interesting thing to retain here is that science provides empirical knowledge, but there are other forms of knowledge. You alluded to "testable hypothesis", and so you in fact deployed non-empirical knowledge.

The scientific method gave birth to a branch of philosophy, first known as "natural philosophy", and modernly known as "science".


I do not think there is any value in disparaging philosophy, but in responding to such attitudes in the current context, it would be more useful to discuss what the branch known as philosophy of mind is achieving now, rather than the significance of philosophy in the foundation of science.


If philosophy is useless, then the good news is that we have no particular reason to believe that testable hypotheses, experiments, and data-derived conclusions are particularly useful either.


The reasons remain the same, regardless of how you label them.


>Everything that is non-STEM is underfunded, it doesn't matter if it is philosophy, anthropology, sociology, history, etc.

...

>Science is philosophy.

If you are going to justify philosophy by subsuming science, then you can not consistently claim that it is underfunded. By any reasonable standard, pure scientific research is respectably, if not ideally, funded.

What's missing here, of course, is any consideration for those branches of philosophy that are not science. This strategy of showing the importance of philosophy by invoking the success of science is short-sighted, and does a disservice to fields such as ethics.


You can't get scientific progress from philosophy? Wait until you try to justify your belief in knowledge without philosophy. The fact is, philosophy underpins much of science. You can't just 'move on' from that when it literally forms the foundations. That being said, you might be right about the article itself. Just as you can have bad science, you can have bad philosophy too.


I like articles like these where you can't tell whether it's about Neuroscience, Artificial Intelligence, Computer Science, or Psychology till you're a significant chunk in.


Paradigm-shifting ideas. Thanks for posting. Sounds like this could be A New Kind of Science.


For those who don't know, A New Kind of Science is the book in which Stephen Wolfram of Mathematica fame wrote something like 1,000 pages in layman's terms to convince you of his pet theory that the universe is a cellular automaton.

I didn't hate the book, but it was pretty weird and not super scientific.
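
For reference, the kind of object the book revolves around - an elementary cellular automaton such as Rule 30 - fits in a few lines of Python; the grid width and step count below are arbitrary, just enough to show the pattern growing from a single live cell:

    def rule30(cells):
        # Each new cell depends only on its left, centre, and right neighbours.
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    row = [0] * 31
    row[15] = 1                                   # single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else " " for c in row))
        row = rule30(row)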


I regret being so cynical about this post and the book I mentioned.

It's a really nice book. Lots of beautiful illustrations and interesting ideas with code snippets for generating those illustrations. I found it fascinating as a kid. The message wasn't too important for me, and I didn't really have the background to draw my own conclusions from it anyway.

Someone on IRC, who was a big fan, introduced it to me.

I actually started programming with this very book, copying the examples from it into Python and then Scheme (introduced to me by that same guy) and Common Lisp. I respect Wolfram's dedication. It's the type of dedication that brings us things like Mathematica and TempleOS.

The guy who introduced me to this book and then Scheme had contacts at Wolfram Research and later sorted me out with a free copy of Mathematica. I could never have afforded that with my pocket money, and I wasn't at university yet. He was a big underdog in the community and had his own problems too. What a great guy.

He's still going strong today: http://xahlee.org/

There's something to be taken even from a book like this. It would be good to revisit and re-evaluate the book as someone who now has a semblance of an education in the area. I wish I hadn't sold it.


I share your views. TempleOS and Mathematica are monuments to what the human creative spirit can achieve.

Let's not be too quick to pass judgement on their authors while having done 1/100000 of what they did.


> being young is like microdosing on LSD all the time

If we invent significant life extension, microdosing may be a required part of the protocol just to keep the brain plastic enough to function. It may already be beneficial as a routine geriatric prescription. "Go play with grandma, she's on her weekly trip."


Another major area of life. Nice to read.


To hold a concept of a certain complexity within the brain, the brain itself must have greater complexity than the concept.

Thus humans can never understand the brain, because the brain has complexity equal to itself, and for the brain to understand the brain it would have to have greater complexity than itself, which is impossible.

Like many other things in life, we can only hope to understand a simplified and symbolic representation of the brain. The problem is... it's quite possible that human intelligence itself has no higher-level abstraction that we are capable of holding in our heads. The processes that create consciousness, stripped of all the fluff and irrelevant details, may be of sufficient complexity that our brains will not be able to understand what's going on.

In short I fear that the ultimate goal of neurology may be a fruitless endeavor.


Why do people still flock to this argument? It is so easy to debunk: we already have and are building machines (say a complete vehicle, a Tesla Model S) that are too complex for any individual human to understand all the processes of. There might be engineers who know exactly how the steering works, how the CPU is built, how the chemistry of the tire works, but there is no one who can realistically hold (and operate on) all the knowledge that goes into making such an object. And yet, behold, we are building many of them and they all mostly work; they are functional, stable, etc. This is all the type of understanding we need for the brain as well. It already works, so why would anyone think that it is impossible?


Sigh. My statement of impossibility was about the brain understanding the brain as a whole. This is the only part that is IMPOSSIBLE.

Your statement about modularizing understanding into pieces to deal with complexity is isomorphic to my statement that we can only hope to understand A representation of the brain as a simplification.

The engineers who understand steering understand the rest of the car as a symbolic representation. The lead designer of the car understands most detailed components of the car as symbolic representations.

In short, each human as an individual CAN ONLY understand the CAR/BRAIN as a symbolic simplification. This is the BEST possible outcome. What you are SAYING is the EXACT same thing I am SAYING.

The one difference is... I am taking it ONE STEP further with a speculation on the nature of intelligence.

There exist concepts in this world that are fundamental and cannot be modularized. It is a very reasonable speculation that the understanding of consciousness itself cannot be further modularized or subdivided. I am saying that in order to understand consciousness, it may very well be that we have to understand it as a complex whole, and this complexity may be too great for us to hold in our heads.

This IS a realistic possibility. We are already seeing the limits of this with the black-box nature of the neural nets we are generating. Either way, this was a speculation. You (and many others) are completely misunderstanding it and taking it the wrong way.


If you are simply saying that it is "possible" then I agree with you completely. However, from everything I have learned, including about neural nets (there are actually quite a lot of insights we can gain from how they work; the myth that they are a black box is just that - a myth; there are techniques to understand them and visualizations that explain a lot of what they do), your thesis is unlikely, and certainly we shouldn't give up at this point at the very least. I simply haven't seen anything that would indicate that it would be impossible to understand consciousness in the same practical manner as the way we understand, for example, cars now (i.e., at least enough to build one, at which point it can take over). In fact we might not even need to fully understand it (or understand it on the deep fundamental concept level you have mentioned) to be able to reproduce and improve it.

> There exists concepts in this world that are fundamental and cannot be modularized.

Out of curiosity, what are some of these concepts that you have stumbled upon that you would put in this category (preferably at least somewhat related to building AI or new technology in general)? I am sure there are some but it would be interesting to see specific examples.


>Out of curiosity, what are some of these concepts that you have stumbled upon that you would put in this category (preferably at least somewhat related to building AI or new technology in general)? I am sure there are some but it would be interesting to see specific examples.

Visual recognition. Voice recognition. Language recognition. Self-driving. Music composition. We have failed in all these areas to use modularization to simplify these problems into something we can model as a theoretical whole. We turn to neural nets not to "understand" the problem but to build systems that can solve it while bypassing understanding.

>I simply haven't seen anything that would indicate that it would be impossible to understand consciousness in the same practical manner as the way we understand for example cars now

If we had that level of understanding, then we should be able to modularize the problem into smaller units of complexity so that a team of people can build the solution component by component. So far, for the examples I listed above, we don't have that level of understanding. We could employ the same techniques used for language recognition to understand the human brain... by building a neural net... but that kind of defeats the point, right? The neural net may simulate consciousness, but it doesn't exactly let you (or a team of people) understand it...

Also for your comment on the black box nature of neural nets....

I think we can both agree that the understanding or "insights" we gain are at best very high level. The logic of "visual recognition" is not understood by humans. We can only reproduce such logic as a black box: a neural net.

These "visualizations" or high level descriptions of machine learning algorithms aren't a form of true understanding... It's just a summary of a class of problems. For example voice recognition and visual recognition are two very different problems but they are both "visualized" and modeled in the exact same way. You can probably also model consciousness as a problem that consists of a multi-dimensional set of points where you need to find the best fit curve. This does not mean you understand it.

You can also probably peer into these artificial neural nets and see what components are doing what similar to how we can peer into a human brain and see which parts of a brain light up when you give it stimuli. Again, it doesn't mean you have true understanding.

This is not to discount the entire field of neurology as a whole. Clearly there are bits and pieces we can understand. I am saying the ultimate goal of understanding consciousness may be fruitless. It's a very meta concept, and possibly as a result it's also very hard for you, as a consciousness yourself, to write down the exact definition of consciousness or pinpoint exactly what it is.


So for example, what kind of understanding do you think is lacking in the visual recognition example? To me it seems that this field is quite well understood.

The net looks at the data and searches for patterns. On the first layers, simple patterns like contours (which are simple mathematical functions); on higher levels, different kinds of dots or smaller objects (an object meaning a spatial configuration of contours and whitespace from the previous layer); on higher levels still, relationships of those objects to each other, etc. Pooling and convolution handle recognizing these patterns in different parts of the image. Final layers do statistical weighting of which patterns were found at which strengths, and generate the most likely result (in this case, for a classification task).
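
As a rough sketch of that structure in PyTorch (the layer sizes are arbitrary; this is only meant to make the layer-by-layer story concrete, not to be a serious model):

    import torch
    import torch.nn as nn

    # Early layers detect local patterns (edges, contours); deeper layers combine
    # them into larger configurations; the final linear layer weights the detected
    # features into class scores.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),    # low-level patterns
        nn.ReLU(),
        nn.MaxPool2d(2),                              # tolerate small shifts
        nn.Conv2d(8, 16, kernel_size=3, padding=1),   # configurations of patterns
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),                    # weight features into 10 class scores
    )

    x = torch.randn(1, 1, 28, 28)                     # one fake 28x28 grayscale image
    print(model(x).shape)                             # torch.Size([1, 10])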

I'm not saying it's magic, I'm not saying it is AGI, I'm not even saying it's far from pure statistics, but what part of this process is "hard to understand what the net is doing"? It is all pretty clear at this point. There are online courses that describe every step of this...

> These "visualizations" or high level descriptions of machine learning algorithms aren't a form of true understanding... It's just a summary of a class of problems

Seems like an arbitrary distinction that you have come up with. I argue that if we can make those nets, if we can control them, if we can predict them - then yes it is true understanding.

Now, yes, the universe is indeed a fractal and you can understand any small particle in an incalculable number of ways and claim that your understanding is more true, more profound, etc., but what is the point? I think a great cause is to create AGI, and I still haven't seen any arguments hinting at its impossibility.

> The logic of "visual recognition" is not understood by humans. We can only reproduce such logic as a blackbox. A neural net.

I really don't understand what you are saying here, unfortunately; it sounds like a philosophical black hole of overintellectualizing a simple phenomenon. I would say that we understand enough to have practical ability, which is all that matters.

> It's a very meta concept and possibly as a result it's also very hard for you as a consciousness yourself to write the exact definition of consciousness or pinpoint exactly what it is.

Well sure, I agree with you there. It is already the case for many phenomena that we can nonetheless control. What is the exact definition of a car? Of a video game? Of a tree? Of a career? Of a family? I don't think humans need exact definitions of things; they can function relatively optimally without them. Not all knowledge is in logical definitions.


Here's a simple way to put it.

If you understood visual recognition, then you could code up the algorithm by hand, given enough time.

If the problem is too complex for you to understand, then you can subdivide it into smaller pieces and give it to a team of people to code.

You can't currently do either of these things. You require an algorithm to code it up for you. Therefore you don't really understand it.

>The net looks at the data and searches for patterns. On first layers simple patterns like contours (which is a simple mathematical function), on higher levels different kinds of dots or smaller objects (objects meaning a spatial configuration of contours and whitespace from the previous layer), on higher levels relationships of those objects to each other, etc. etc. Pooling and convolution handle recognizing these patterns in different parts of the image. Final layers do statistical weighting of which patterns were found in which strengths, and generate the most likely result (in this case, a classification task).

I kind of got into this already. Peering into the neural network doesn't mean you understand it any more than peering into the human brain to see what parts light up given certain stimuli means you understand the brain. If you could understand it, then you could code it up in the same number of lines of code. You obviously can't, and don't understand the algorithm well enough to do this. However, you're satisfied by this level of understanding, which is fine, but the ultimate goal of neurology is to surpass this level of understanding, and THAT is what I am addressing.

>Well sure, I agree with you there. It is already the case for many phenomena that we can nonetheless control. What is the exact definition of a car? Of a video game? Of a tree? Of a career? Of a family? I don't think humans need exact definitions of things, they can just as well function relatively optimally without them.

Clearly there is a difference between our lack of an exact definition of a car and our lack of an exact definition of consciousness. Let's not get into the semantics of the difference here; suffice to say that we're both human beings and we understand there is a huge gap between how well we understand what consciousness is and how well we understand what a car is.

>Not all knowledge is in logical definitions.

It is. Lack of a logical definition is an indication of lack of knowledge. Your brain clearly has an exact definition of a career, given that for any input you can instantly recognize whether something is a "career" or not. The reason you can't write the exact definition down in words is that you lack understanding of the structure of the definition held in your brain. However, given any input your brain will produce an output. Given all inputs you would be able to write down a full mapping between inputs and outputs... and that is a formal definition of "career"... you just lack the capability/knowledge to simplify the definition into a single logical statement and understand what exactly it is... but an exact formal definition exists in your brain.

>I really don't understand what you are saying here unfortunately, sounds like a philosophical black hole of overintellectualizing a simple phenomenon. I would say that we understand enough to have practical ability, which is all that matters.

What's wrong with overintellectualization? Are you implying that you want less brain power applied to this problem? That we need to be stupider to understand something? If you say that practical ability is enough, then this argument is over, because we are talking about different things. I'm not talking about practical ability; I'm talking about understanding of consciousness: the ability to define it and hold the concept in your head, or in several heads. That's what I'm talking about, and that is the ultimate philosophical goal of neurology. A brute-force simulation via a neural net, which is the current state of things, is not a form of understanding. Once you understand what something is, you should be able to code it up by hand.

Sure, we MAY (keyword) only be interested in the practical aspects of something like voice recognition. But the question of consciousness is clearly not something we're just interested in emulating. We're interested in understanding it.

If we develop 3D printing and scanning to the point where we can reprint the entire atomic structure of things we see in reality, this does not mean we understand everything at the molecular level. Practical application != actual understanding. Taking this printer and printing out a human brain doesn't help us understand it at a deeper level, just like a photocopier copying a calculus textbook doesn't help us understand calculus.

Additionally, your "practical" machines will always have limitations if there isn't full understanding of the phenomenon that operates underneath. Self-driving cars that deliberately crash into an unrecognized object are an example of the unpredictability that comes with lack of understanding.


> >Not all knowledge is in logical definitions.

> It is.

> What's wrong with over intellectualization?

Ah, well there you go, now I see where you come from. (Not just from those lines, but from the whole post, which the lines illustrate.)

I don't agree that we lack the understanding you write of, but I can see how you would think that if you thought that logic is the pinnacle of intellect.

But logic is just one part of understanding; there are other ways knowledge can be collected, transmitted and applied. If you aim to only use logic, you end up overintellectualizing everything, and in that case, yes, it is very hard to say that you truly understand anything.

> You can't currently do either of these things. You require an algorithm to code it up for you. Therefore you don't really understand it.

This is, for all practical intents and purposes, a pointless distinction. If we have created the algorithm, it means we have created the thing itself, and it means we do understand it on some level. What you are talking about is some "perfect 100%" understanding, which does not really exist; it is a trick of a mind that cannot use any faculties other than pure logic, and wants the world to fit into categories that can be perfectly described with the rigid categories of the logical apparatus. The world doesn't really fit, though, but such a mind will ignore that and will end up having to overintellectualize everything; it doesn't know where to stop.

Humans are fully capable of operating in the world without fully (100%) understanding it, because they have faculties other than pure logical understanding. It doesn't mean that they operate things they don't understand randomly, or just make choices for no reason. It means they have other, intuitive methods of operating which can produce results without a rigid logical model.


>This is, for all practical intents and purposes, a pointless distinction. If we have created the algorithm, it means we have created the thing itself and it means we do understand it on some level. What you talk about is some "perfect 100%" understanding, which does not really exist, it is a trick of a mind that cannot use any other faculties than pure logic, and wants the world to fit into those categories that can be perfectly described with rigid categories of the logical apparatus. The world doesn't really fit though, but such a mind will ignore that and will end up having to overintellectualize everything, it doesn't know where to stop.

>Humans are fully capable of operating in the world without fully (100%) understanding it, because they have other faculties than pure logical understanding. It doesn't mean that they operate things that they don't understand randomly, or just make choices for no reason. It means they have other intuitive methods of operating which can produce results without having a rigid logical model.

Stop using the term overintellectualization. It's a meaningless word that just means "too much brain power for a given problem."

We don't even have to get philosophical about this and talk about the nature of "understanding". I am not, and you are misinterpreting what I am saying. The layman's definition of 100% understanding is good enough, because clearly all researchers in the field agree that we don't understand consciousness.

Let's simplify things then so we don't argue over semantics and the nature of understanding.

We can both agree that humans, or a team of humans, can "understand" what an operating system is, and in almost all fields of science we as humans generally use that same level of "understanding" as a metric. We also understand the operating system well enough that a team of humans can build one by hand.

That is the metric we should use for "understanding" consciousness, because that is the metric used in ALL other hard scientific fields. No need to talk about what "100% understanding" is. It means that once you understand something, you can build it or model it by hand in a computer program. Unfortunately, the trends in machine learning show that we may never hit that bar, so you are arguing for lowering it. You are saying that if we can employ other mechanisms to build structures that are too complicated to comprehend, that is enough to say we "understand" those structures.

So basically your bar for understanding is lower than, and inconsistent with, what scientists and computer scientists all over the world would use as the bar for whether or not they "understood" something.

So put it this way: I'm talking about "understanding" on the level that most people, most of science, and most researchers talk about it. No "overintellectualization" BS here. You, unknowingly, are the one making the leap here and moving the bar of understanding to a different, more abstract place.

Let's look at the implications of what you're talking about. You say that using a program to train a neural net to simulate consciousness is enough to "understand" consciousness.

Then would you say that if I can take a biological organism and reconstruct a duplicate that is molecularly and genetically identical to the original, I have complete understanding of all biological organisms?

Again, your logic makes no sense here. We CAN do the above - it's called cloning - and although we can clone things, we don't completely understand the mappings between genes and the macro features of the creature the genes describe. Complete understanding of genetics would mean the ability to insert a 100% custom gene into a cell and have a standard computer program simulate the resulting creature.

Let's bring it back full circle to what I was originally talking about. Most things in science and engineering are too complicated to understand as a whole, so we use symbolic representation to simplify the system for understanding. The OS programmer who writes the windowing system thinks of the scheduler as an abstract representation, and the OS programmer who writes the scheduler does the same for the windowing system.

Currently we cannot do the above for neural nets. This is inconsistent with most of the systems humans are interested in building. We do not have the ability to modularize a neural net, hence why we rely on machine learning algorithms. I am addressing this phenomenon, describing its limitations, and applying it to the nature of consciousness: as far as we know, like the visual recognition algorithm, both systems reside in a neural net and thus probably suffer from the same problems and limitations.

So that is all I am saying. We will likely not be able to modularize the problem of consciousness to the point where we can understand consciousness the way we understand other things in other scientific fields. This is fundamentally inconsistent with the levels of understanding that are achievable in other fields of science. What you're talking about is another topic altogether, which is moving the bar of understanding to a lower level so that we can redefine "understanding." Ignore the bar. Who cares. Focus on the essence of what I am saying: how our ability to understand consciousness is different from our ability to understand an operating system and almost everything else in science.

>But logic is just one part of understanding, there are other ways the knowledge can be collected, transmitted and applied. If you aim to only use logic, you end up with overintellectualizing everything, and in that case yes, it is very hard to say that you truly understand anything.

This is off topic. Logic only has direct application to formal systems, and it's really too "philosophical" to get into right now. Suffice to say that your argument is basically this: I am wrong because my arguments are too logical.


> "understanding" [...] is the metric used in ALL other hard scientific fields [...] Meaning that once you understand it, you can build it or model it by hand in a computer program.

> If you can understand it then you can code it up in the same amount of lines of code.

You have made up the "by hand"/"same number of lines of code" requirement. The real world does not have it. If a scientist builds a neural network that generates (through machine learning) an algorithm that works, we all say that the scientist has written the algorithm; no one cares if they did it by hand or through a clever meta-algorithm.

If the "overintellectualizing" term is not common enough, let's replace it by "overthinking", it's very close.

> Most things in science and engineering are too complicated to understand as a whole. So we use symbolic representation to simplify the system for understanding. [...] Currently we cannot do the above for neural nets.

I disagree. I see that we do exactly that with neural nets. The distinctions you are trying to come up with sound semantic and arbitrary.


>I disagree. I see that we do exactly that with neural nets. The distinctions you are trying to come up with sound semantic and arbitrary.

You disagree, and therefore you are wrong - similar to how, if I say the sky is blue and you disagree, you are wrong. The distinction sounds semantic and arbitrary, but it is not. Think harder; the failure here is not a semantic difference but a failure on your part to process the right abstraction. Your disagreement is irrelevant in the face of reality.

Take the neural net or several neural nets. Decompose those neural nets into modules. Recompose those modules into new neural nets. Can you do this? No. Why?

Because you can't really modularize neural nets. What these analysis techniques are doing is showing you that there is sort of a module-like thing here, but as with brain surgery, it doesn't mean you can rip it out and reuse it somewhere else. That's a true lack of understanding of what's going on.

We both agree that the human brain is a black box. We also agree that we know about the existence and location of modules in the human brain; things like "emotion" and "locomotion" are known modules. Say we meet a paraplegic person whose locomotion part of the brain is damaged. Can't we fix him with a transplant? A recently deceased patient who died of unrelated causes could have his "locomotion" module cut out and transplanted into the brain of the person who needs it.

We can't, because we don't actually have access to the modules - just a blurry picture that some sort of module is there. Same with artificial neural nets. The day you can graft a module from one neural net and compose it with another is the day you have fulfilled the definition of what a "module" is. You haven't, and therefore you are utterly wrong, and therefore you lack knowledge about what is going on inside a neural net. This is definitive logic.

>You have made up the "by hand" requirement. The real world does not have it. If a scientist builds a neural network that generates (through machine learning) an algorithm that works, we all say that the scientist has written the algorithm, no one cares if they have done it by hand or through application of a clever meta algorithm.

Yes, I have made it up to illustrate that there is more than a semantic difference. Look, I'm not making up requirements here and there just to screw with you. I'm making them up so you can see there is actually a huge difference between training a neural network and doing metaprogramming.

A compiler is a metaprogrammer: you give it a high-level language and it programs the CPU in a lower-level language. There is a fundamental difference between what's going on there and what's going on when you train a neural net; there is a functor, so to speak, from the difference in the process of creation to the difference in the level of understanding. We have less control over the creation of the weights in a neural net than we do over the assembly code a compiler generates, just like we have less understanding of the overall neural net than we do of the compiled program.

There is a clear gap here. I'm not literally setting a requirement; my intention is to illustrate a gap in understanding, and the gap seems to be permanent and a bulwark against our overall goal of understanding consciousness - not from your "requirement" perspective, but from the perspective of scholars in the field.

Literally, a compiler "translates" code and a neural network is "trained." The words "translate" and "train" have more than a semantic difference between them.
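
To make the contrast concrete, a toy sketch (the function and data are made up purely for illustration): in the first case a human writes the mapping down; in the second, the same mapping is recovered by an optimization loop and ends up encoded in weights nobody wrote by hand.

    import numpy as np

    # "Translated": a human writes the mapping down explicitly.
    def double_plus_one(x):
        return 2 * x + 1

    # "Trained": the same mapping is recovered by an optimization loop;
    # nobody writes the rule, it ends up encoded in the fitted weights.
    rng = np.random.default_rng(0)
    xs = rng.uniform(-1, 1, 200)
    ys = 2 * xs + 1                        # the hidden rule the optimizer must find

    w, b = 0.0, 0.0
    for _ in range(2000):
        err = (w * xs + b) - ys
        w -= 0.1 * np.mean(err * xs)       # gradient step on w
        b -= 0.1 * np.mean(err)            # gradient step on b

    print(double_plus_one(3.0), w * 3.0 + b)   # both are ~7, reached very differently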

>If the "overintellectualizing" term is not common enough, let's replace it by "overthinking", it's very close.

There is literally only a semantic difference here. You eat your own words. From my perspective, overthinking is not what's going on here; it's "underthinking"... what is the adjective for a person who "underthinks"?


> You disagree, and therefore you are wrong.

Hehe, and then you wonder why the argument is not going anywhere. If you were talking about hard science, proven by experiments or strong mathematical models, then yes, opinion would be irrelevant. But you are not talking about that at all; you are talking about your own subjective thoughts about the subject. And I just don't think they describe the current or future state of AI properly.


If you read my post... I'm not talking about my subjective thoughts at all. I'm comparing ML to the broader spectrum of engineering and science.

I'm illustrating a dichotomy between "training" and "programming", while you are trying to set some kind of bar for "design." Also, please don't use "Hehe" here; it's against the rules.


typo "design" -> "understanding"


This sounds eerily similar to some of the early proofs of God. I think there may be a flaw in this line of thinking.

|Thus humans can never understand the brain because the brain has equal complexity to itself and for the brain to understand the brain then it must have greater complexity than itself which is impossible.

We don't need to know the entire state of the brain at any given moment to understand the mechanics of the brain, just as we don't need to know the entire state of every molecule on planet Earth in order to understand its physics.


The problem with your line of reasoning is that you are thinking of technological progress only in terms of individual achievement.

This is what technological iteration, through generations of human beings, gives us all. Humans achieve super-human levels of technological achievement because we iterate over what others have left for us.

Knowledge is like a staircase, made from the hard work of many great human beings, and even for hard problems, with enough iterations and consecutive progress (without social deterioration), we can achieve anything.

Somebody will fill in that "last" step in the staircase that will make a paradigm shift for all of us.


Yes. You might find interesting some highly rated ideas related to this article, such as the Bayesian brain. In this view, the brain is a complex system that learns to adapt to and predict its environment, similar to other complex systems, even companies.
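
As a toy illustration of the Bayesian-update intuition (the prior and likelihood numbers here are made up): beliefs are probabilities that get revised as evidence arrives.

    # Bayes' rule: posterior is proportional to likelihood times prior.
    prior = {"light_on": 0.2, "light_off": 0.8}         # belief before evidence
    likelihood = {"light_on": 0.9, "light_off": 0.1}    # P(bright input | state)

    unnormalized = {h: likelihood[h] * prior[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

    print(posterior)    # light_on ~0.69, light_off ~0.31: belief revised by evidence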


[flagged]


Your comments in this thread have broken the guidelines with swipes like "Of course you completely missed the point", "Leave it to the internet to read a comment and take it the wrong way", "Sigh", "Sheesh", and so on. That's just the sort of thing users here are asked to avoid. Would you please review https://news.ycombinator.com/newsguidelines.html and edit all that out of your comments here from now on?

Also, please don't use allcaps for emphasis. That's in the guidelines too.


All right, fine. I will.

I think the flag link has been disabled for my account. If you re-enable it that would be helpful in dealing with this problem.

People are rude to me all the time on HN. And the rules of HN expect me to be civil in the face of a culture that is uncivil. If you would like me to help you promote the guidelines it would be helpful if you allow me to flag others as well instead of responding in kind.


Flagging has not been disabled for your account.


Of course, we could use abstraction and analysis to tackle this...



