
To hold a concept of a certain complexity within the brain, the brain itself must have greater complexity than the concept.

Thus humans can never understand the brain, because the brain has equal complexity to itself, and for the brain to understand the brain it must have greater complexity than itself, which is impossible.

Like many other things in life, we can only hope to understand a simplified and symbolic representation of the brain. The problem is... it's quite possible that human intelligence itself has no higher-level abstraction that we are capable of holding in our heads. The processes that create consciousness, stripped of all the fluff and irrelevant details, may be of sufficient complexity that our brains will not be able to understand what's going on.

In short I fear that the ultimate goal of neurology may be a fruitless endeavor.




Why do people still flock to this argument? It is so easy to debunk: we already have and are building machines (say, a complete vehicle, a Tesla Model S) that are too complex for any individual human to understand all the processes of. There might be engineers who know exactly how the steering works, how the CPU is built, how the chemistry of the tires works, but there is no one who can realistically hold (and operate on) all the knowledge that goes into making such an object. And yet, behold, we are building many of them and they all mostly work; they are functional, stable, etc. That is all the type of understanding we need for a brain as well. It already works, so why would anyone think that it is impossible?


Sigh. My statement of impossibility was about the brain understanding the brain as a whole. This is the only part that is IMPOSSIBLE.

Your statement of modularizing portions of understanding into pieces to deal with complexity is isomorphic to my statement that we can only hope to understand A representation of the brain as a simplification.

The engineers who understand steering understand the rest of the car as a symbolic representation. The lead designer of the car understands most detailed components of the car as symbolic representations.

In short, each human as an individual CAN ONLY understand the CAR/BRAIN as a symbolic simplification. This is the BEST possible outcome. What you are SAYING is the EXACT same thing I am SAYING.

The one difference is... I am taking it ONE STEP further with a speculation on the nature of intelligence.

There exist concepts in this world that are fundamental and cannot be modularized. It is a very reasonable speculation that the understanding of consciousness itself cannot be further modularized or subdivided. I am saying that in order to understand consciousness it may very well be that we have to understand consciousness as a complex whole, and this complexity may be too big for us to hold in our heads.

This IS a realistic possibility. We are already seeing the limits of this with the black box nature of the neural nets we are generating. Either way, this was a speculation. You (and many others) are completely misunderstanding it and taking it the wrong way.


If you are simply saying that it is "possible" then I agree with you completely. However, from everything I have learned, including about neural nets (there are actually quite a lot of insights we can gain from how they work; the myth that they are a black box is just that, a myth: there are techniques to understand them and visualizations that explain a lot of what they do), your thesis is unlikely; certainly we shouldn't give up at this point, at the very least. I simply haven't seen anything that would indicate that it would be impossible to understand consciousness in the same practical manner as the way we understand, for example, cars now (i.e., at least enough to build one, at which point it can take over). In fact we might not even need to fully understand it (or understand it on some deep fundamental conceptual level that you have mentioned) to be able to reproduce and improve it.

> There exist concepts in this world that are fundamental and cannot be modularized.

Out of curiosity, what are some of these concepts that you have stumbled upon that you would put in this category (preferably at least somewhat related to building AI or new technology in general)? I am sure there are some but it would be interesting to see specific examples.


>Out of curiosity, what are some of these concepts that you have stumbled upon that you would put in this category (preferably at least somewhat related to building AI or new technology in general)? I am sure there are some but it would be interesting to see specific examples.

Visual recognition. Voice recognition. Language recognition. Self-driving. Music composition. We have failed in all these areas to use modularization to simplify these problems into something we can model as a theoretical whole. We turn to neural nets not to "understand" the problem but to build systems that can solve the problem while bypassing understanding.

>I simply haven't seen anything that would indicate that it would be impossible to understand consciousness in the same practical manner as the way we understand, for example, cars now

If we have that level of understanding, then we should be able to modularize the problem into smaller units of complexity so that a team of people can build the solution component by component. So far, for the examples I listed above, we don't have that level of understanding. We could employ the same techniques used for language recognition to understand the human brain... by building a neural net... but that kind of defeats the point, right? The neural net may simulate consciousness but it doesn't exactly let you (or a team of people) understand it...

Also for your comment on the black box nature of neural nets....

I think we can both agree that the high level understanding or "insights" we gain are at best very high level. The logic of "visual recognition" is not understood by humans. We can only reproduce such logic as a blackbox. A neural net.

These "visualizations" or high level descriptions of machine learning algorithms aren't a form of true understanding... It's just a summary of a class of problems. For example voice recognition and visual recognition are two very different problems but they are both "visualized" and modeled in the exact same way. You can probably also model consciousness as a problem that consists of a multi-dimensional set of points where you need to find the best fit curve. This does not mean you understand it.

You can also probably peer into these artificial neural nets and see what components are doing what similar to how we can peer into a human brain and see which parts of a brain light up when you give it stimuli. Again, it doesn't mean you have true understanding.

This is not to discount the entire field of neurology as a whole. Clearly there are bits and pieces we can understand. I am saying the ultimate goal of understanding consciousness may be fruitless. It's a very meta concept and possibly as a result it's also very hard for you as a consciousness yourself to write the exact definition of consciousness or pinpoint exactly what it is.


So for example, what kind of understanding do you think is lacking in the visual recognition example? To me it seems that this field is quite well understood.

The net looks at the data and searches for patterns. On the first layers, simple patterns like contours (which is a simple mathematical function); on higher levels, different kinds of dots or smaller objects (objects meaning a spatial configuration of contours and whitespace from the previous layer); on higher levels still, relationships of those objects to each other, etc. Pooling and convolution handle recognizing these patterns in different parts of the image. Final layers do statistical weighting of which patterns were found at which strengths, and generate the most likely result (in this case, of a classification task).
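
As a rough sketch of that layered structure (purely illustrative; the layer sizes and class name are made up, assuming something like PyTorch):

    import torch.nn as nn

    class TinyClassifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # early layers: contours / edges
                nn.MaxPool2d(2),                             # pooling: same pattern, different positions
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # later layers: configurations of contours
                nn.MaxPool2d(2),
            )
            # Final layer: statistical weighting of which patterns were found,
            # producing scores for the classification task.
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):                # x: (batch, 3, 32, 32)
            h = self.features(x)
            return self.classifier(h.flatten(1))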

I'm not saying it's magic, I'm not saying it is AGI, I'm not even saying it's far from pure statistics, but what part of this process is "hard to understand what the net is doing"? It is all pretty clear at this point? There are online courses that describe every step of this...

> These "visualizations" or high level descriptions of machine learning algorithms aren't a form of true understanding... It's just a summary of a class of problems

Seems like an arbitrary distinction that you have come up with. I argue that if we can make those nets, if we can control them, if we can predict them - then yes it is true understanding.

Now yes, the universe is indeed a fractal and you can understand any small particle in an incalculable number of ways and claim that your understanding is more true, more profound, etc., but what is the point? I think a great cause is to create AGI, and I still haven't seen any arguments hinting at the impossibility of that.

> The logic of "visual recognition" is not understood by humans. We can only reproduce such logic as a blackbox. A neural net.

I really don't understand what you are saying here unfortunately, sounds like a philosophical black hole of overintellectualizing a simple phenomenon. I would say that we understand enough to have practical ability, which is all that matters.

> It's a very meta concept and possibly as a result it's also very hard for you as a consciousness yourself to write the exact definition of consciousness or pinpoint exactly what it is.

Well sure, I agree with you there. It is already the case for many phenomena that we can nonetheless control. What is the exact definition of a car? Of a video game? Of a tree? Of a career? Of a family? I don't think humans need exact definitions of things, they can just as well function relatively optimally without them. Not all knowledge is in logical definitions.


Here's a simple way to put it.

If you understood visual recognition, then you could code up the algorithm by hand, given enough time.

If the problem is too complex for you to understand then you can subdivide the problem into smaller pieces and give it to a team of people to code it.

You can't currently do either of these things. You require an algorithm to code it up for you. Therefore you don't really understand it.

>The net looks at the data and searches for patterns. On the first layers, simple patterns like contours (which is a simple mathematical function); on higher levels, different kinds of dots or smaller objects (objects meaning a spatial configuration of contours and whitespace from the previous layer); on higher levels still, relationships of those objects to each other, etc. Pooling and convolution handle recognizing these patterns in different parts of the image. Final layers do statistical weighting of which patterns were found at which strengths, and generate the most likely result (in this case, of a classification task).

I kind of got into this. Peering into the neural network doesn't mean you understand it any more than peering into the human brain to see what parts light up given certain stimuli. If you can understand it then you can code it up in the same number of lines of code. You obviously can't, and don't understand the algorithm well enough to do this. However, you're satisfied by this level of understanding, which is fine, but the ultimate goal of neurology is to surpass this level of understanding and THAT is what I am addressing.

>Well sure, I agree with you there. It is already the case for many phenomena that we can nonetheless control. What is the exact definition of a car? Of a video game? Of a tree? Of a career? Of a family? I don't think humans need exact definitions of things, they can just as well function relatively optimally without them.

Clearly there is a difference between our lack of an exact definition of a car and our lack of an exact definition of consciousness. Let's not get into the semantics of the difference here; suffice to say that we're both human beings and we understand that there is a huge gap between our understanding of what consciousness is and what a car is.

>Not all knowledge is in logical definitions.

It is. Lack of a logical definition is an indication of lack of knowledge. Your brain clearly has an exact definition of a career, given the fact that for any input you can instantly recognize whether something is a "career" or not. The reason you can't write the exact definition down in words is that you lack understanding of the structure of the definition that is being held in your brain. However, given all possible inputs your brain will have an output. Given all inputs you will be able to write down a full mapping between inputs and outputs... that is a formal definition of "career"... you just lack the capability/knowledge to simplify the definition into a single logical statement and understand what exactly it is... but an exact formal definition exists in your brain.
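
A toy way to picture what I mean (purely hypothetical names; nothing to do with how the brain actually stores it):

    # The "exact formal definition" your brain implements: a complete mapping
    # from every possible input to an output. It exists, even if it cannot be
    # compressed into a single logical statement.
    CAREER_MAPPING = {
        "ten years as a software engineer": True,
        "one summer mowing lawns": False,
        # ... an entry for every possible input your brain could receive
    }

    # The compressed rule we would like to write down but can't, because we
    # don't understand the structure the brain uses to hold the definition.
    def is_career(description: str) -> bool:
        raise NotImplementedError("the single logical statement eludes us")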

>I really don't understand what you are saying here unfortunately, sounds like a philosophical black hole of overintellectualizing a simple phenomenon. I would say that we understand enough to have practical ability, which is all that matters.

What's wrong with over intellectualization? Are you implying that you want less brain power applied to this problem? That we need to be stupider to understand something? If you say that practical ability is enough, then this argument is over, because we are talking about different things. I'm not talking about practical ability; I'm talking about understanding of consciousness: the ability to define it and hold the concept in your head, or several heads. That's what I'm talking about, and that is the ultimate philosophical goal of neurology. A brute force simulation of a neural net, which is the current state of things these days, is not a form of understanding. Once you understand what it is, you should be able to code it up by hand.

Sure, we MAY (keyword) only be interested in the practical aspects of something like voice recognition. But the question of consciousness is clearly not something we're just interested in emulating. We're interested in understanding it.

If we develop 3D printing and scanning to the point where we can reprint the entire atomic structure of things we see in reality this does not mean we understand everything at the molecular level. Practical application != actual understanding. Taking this printer and printing out a human brain doesn't help us understand it at a deeper level just like how a photocopier copying a calculus text book doesn't help us understand calculus.

Additionally, your "practical" machines will always have limitations if there isn't full understanding of the phenomenon that operates underneath. A self-driving car that deliberately crashes into an unrecognized object is an example of the unpredictability that comes with lack of understanding.


> >Not all knowledge is in logical definitions.

> It is.

> What's wrong with over intellectualization?

Ah, well there you go, now I see where you come from. (Not just from those lines, but from the whole post, which the lines illustrate.)

I don't agree that we lack the understanding that you write of, but I can see how you would think that if you thought that logic is the pinnacle of intellect.

But logic is just one part of understanding; there are other ways that knowledge can be collected, transmitted and applied. If you aim to only use logic, you end up overintellectualizing everything, and in that case yes, it is very hard to say that you truly understand anything.

> You can't currently do either of these things. You require an algorithm to code it up for you. Therefore you don't really understand it.

This is, for all practical intents and purposes, a pointless distinction. If we have created the algorithm, it means we have created the thing itself, and it means we do understand it on some level. What you talk about is some "perfect 100%" understanding, which does not really exist; it is a trick of a mind that cannot use any faculties other than pure logic and wants the world to fit into categories that can be perfectly described with the rigid categories of the logical apparatus. The world doesn't really fit, though, but such a mind will ignore that and will end up having to overintellectualize everything; it doesn't know where to stop.

Humans are fully capable of operating in the world without fully (100%) understanding it, because they have faculties other than pure logical understanding. It doesn't mean that they operate things that they don't understand randomly, or that they are just making choices for no reason. It means they have other, intuitive methods of operating which can produce results without a rigid logical model.


>This is, for all practical intents and purposes, a pointless distinction. If we have created the algorithm, it means we have created the thing itself, and it means we do understand it on some level. What you talk about is some "perfect 100%" understanding, which does not really exist; it is a trick of a mind that cannot use any faculties other than pure logic and wants the world to fit into categories that can be perfectly described with the rigid categories of the logical apparatus. The world doesn't really fit, though, but such a mind will ignore that and will end up having to overintellectualize everything; it doesn't know where to stop.

>Humans are fully capable of operating in the world without fully (100%) understanding it, because they have faculties other than pure logical understanding. It doesn't mean that they operate things that they don't understand randomly, or that they are just making choices for no reason. It means they have other, intuitive methods of operating which can produce results without a rigid logical model.

Stop using the term overintellectualization. It's a meaningless word that amounts to saying too much brain power is being applied to a given problem.

We don't even have to get philosophical about this and talk about the nature of "understanding". I am not, and you are misinterpreting what I am saying. The layman's definition of 100% understanding is good enough, because clearly all researchers in the field agree that we don't understand consciousness.

Let's simplify things then so we don't argue over semantics and the nature of understanding.

We can both agree that humans, or a team of humans, can "understand" what an operating system is, and in almost all fields of science we as humans generally use that same level of "understanding" as the metric. We also understand the operating system well enough that a team of humans can build one by hand.

That is the metric we are using for "understanding" consciousness because that is the metric used in ALL other hard scientific fields. No need to talk about what "100% understanding" is. Meaning that once you understand it, you can build it or model it by hand in a computer program. Unfortunately, the trends in machine learning show that we may never hit that bar, so you are arguing for lowering the bar. You are saying that if we can employ other mechanisms to build structures that are too complicated to comprehend, that is enough to say we "understand" that structure.

So basically your bar for understanding is lower than, and inconsistent with, the bar that scientists and computer scientists all over the world would use to decide whether or not they "understood" something.

So put it this way. I'm talking about "understanding" on the level that most people, most of science, and most researchers talk about it. No "over intellectualization" bs here. You unknowingly are the one who's making the leap here and moving the bar of understanding to a different more abstract place.

Let's look at the implications of what you're talking about. You say that using a program to train a neural net to simulate consciousness is enough to "understand" consciousness.

Then would you say that if I can take a biological organism and reconstruct a duplicate that is molecularly and genetically identical to the first organism, I have complete understanding of all biological organisms?

Again your logic makes no sense here. We CAN do the above. It's called cloning, and although we can clone things, we don't completely understand the mappings between genes and the macro features of the creature the genes describe. Complete understanding of genetics involves the ability to insert a 100% custom gene into a cell and have a standard computer program simulate the resulting creature.

Let's bring it back full circle to what I was originally talking about. Most things in science and engineering are too complicated to understand as a whole. So we use symbolic representation to simplify the system for understanding. The OS programmer who writes the windowing system thinks of the scheduler as an abstract representation, and the OS programmer who writes the scheduler does the same for the windowing system.

Currently we cannot do the above for neural nets. This is inconsistent with most of the systems humans are interested in building. We do not have the ability to modularize a neural net, which is why we rely on machine learning algorithms. I am addressing this phenomenon, describing its limitations, and applying it to the nature of consciousness: as with the visual recognition algorithm, both systems reside in a neural net and thus probably suffer from the same problems and limitations.

So that is all that I am saying. We will likely not be able to modularize the problem of consciousness to the point where we can understand consciousness the way we understand other things in other scientific fields. This is fundamentally inconsistent with the levels of understanding that are achievable in other fields of science. What you're talking about is another topic altogether, which is moving the bar of understanding to a lower level so that we can redefine "understanding." Ignore the bar. Who cares. Focus on the essence of what I am saying: our ability to understand consciousness is different from our ability to understand an operating system and almost everything else in science.

>But logic is just one part of understanding; there are other ways that knowledge can be collected, transmitted and applied. If you aim to only use logic, you end up overintellectualizing everything, and in that case yes, it is very hard to say that you truly understand anything.

This is off topic. Logic only has direct application to formal systems, and it's really too "philosophical" to get into right now. Suffice to say that your argument is basically this: I am wrong because my arguments are too logical.


> "understanding" [...] is the metric used in ALL other hard scientific fields [...] Meaning that once you understand it, you can build it or model it by hand in a computer program.

> If you can understand it then you can code it up in the same number of lines of code.

You have made up the "by hand"/"same number of lines of code" requirement. The real world does not have it. If a scientist builds a neural network that generates (through machine learning) an algorithm that works, we all say that the scientist has written the algorithm, no one cares if they have done it by hand or through application of a clever meta algorithm.

If the "overintellectualizing" term is not common enough, let's replace it by "overthinking", it's very close.

> Most things in science and engineering are too complicated to understand as a whole. So we use symbolic representation to simplify the system for understanding. [...] Currently we cannot do the above for neural nets.

I disagree. I see that we do exactly that with neural nets. The distinctions you are trying to come up with sound semantic and arbitrary.


>I disagree. I see that we do exactly that with neural nets. The distinctions you are trying to come up with sound semantic and arbitrary.

You disagree, and therefore you are wrong. Similar to how if I say the sky is blue and you disagree. The distinction sounds semantic and arbitrary but it is not. Think harder; the failure here is not a semantic difference but a failure in you to process the right abstraction. Your disagreement is irrelevant in the face of reality.

Take the neural net or several neural nets. Decompose those neural nets into modules. Recompose those modules into new neural nets. Can you do this? No. Why?

Because you can't really modularize neural nets. What these analysis techniques are doing is showing you that there is a sort of module-like thing there, but as with brain surgery, that doesn't mean you can rip it out and reuse it somewhere else. That's a true lack of understanding of what's going on.

We both agree that the human brain is a black box. We also agree that we know about the existence and location of modules in the human brain. Things like "emotion" and "locomotion" are known modules. Let's say we meet a paraplegic person whose locomotion part of the brain is damaged. Can't we fix him by doing a transplant? A recently deceased patient who died of unrelated causes could have his "locomotion" module cut out and transplanted into the brain of the person who needs it.

We can't, because we actually don't have access to the modules, just a blurry picture that some sort of module is there. Same with artificial neural nets. The day you can graft a module from one neural net and compose it with another is the day you have fulfilled the definition of what a "module" is. You haven't, and therefore you are utterly wrong, and therefore you lack knowledge about what is going on inside a neural net. This is definitive logic.

>You have made up the "by hand" requirement. The real world does not have it. If a scientist builds a neural network that generates (through machine learning) an algorithm that works, we all say that the scientist has written the algorithm, no one cares if they have done it by hand or through application of a clever meta algorithm.

Yes, I have made it up to illustrate that there is more than a semantic difference. Look, I'm not making up requirements here and there just to screw with you. I'm making them up so you can see there is actually a huge difference between training a neural network vs. doing metaprogramming.

A compiler is a metaprogrammer. You give it a high-level language and it programs the CPU in a lower-level language. There is a fundamental difference between what's going on here and what's going on when you train a neural net. There is a functor from the difference in the process of creation to the difference in the level of understanding. We have less control over the creation of weights in a neural net than we do over the assembly code a compiler generates, just as we have less understanding of the overall neural net than we do of the assembled program.
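
A toy illustration of the gap I mean (made-up example; not real compiler output or a real net):

    # "Translation": every output instruction follows directly from the input,
    # so the author can point at any line and say exactly why it is there.
    def compile_add(a: str, b: str) -> list:
        return [f"LOAD r0, {a}", f"LOAD r1, {b}", "ADD r0, r1"]

    # "Training": the weight is whatever number the search settles on; nobody
    # chose it, and no individual step carries a human-readable reason.
    def train(xs, ys, steps=1000, lr=0.01):
        w = 0.0
        for _ in range(steps):
            grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
            w -= lr * grad
        return w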

There is a clear gap here. I'm not literally setting a requirement. My intention is to illustrate a gap in understanding, and the gap seems to be permanent, a barrier to our overall goal of understanding consciousness, not from your "requirement" perspective, but from that of scholars in the field.

Literally, a compiler "translates" code and a neural network is "trained." The words translate and train have more than a semantic difference.

>If the "overintellectualizing" term is not common enough, let's replace it by "overthinking", it's very close.

There is literally only a semantic difference here. You eat your own words. From my perspective, overthinking is not what's going on here. It's "underthinking"... what is the adjective for a person who "underthinks"?


> You disagree, and therefore you are wrong.

Hehe, and then you wonder why the argument is not going anywhere. If you were talking about hard science, which is proven by experiments or strong mathematical models, then yes, opinion would be irrelevant. But you are not talking about that at all; you are talking about your own subjective thoughts about the subject. And I just don't think they describe the current or future state of AI properly.


If you read my post... I'm not talking about my subjective thoughts at all. I'm comparing ML to the broader spectrum of engineering and science.

I'm illustrating a dichotomy between "training" and "programming", while you are trying to set some kind of bar for "design." Also, please don't use "Hehe" here; it's against the rules.


typo "design" -> "understanding"


This sounds eerily similar to some of the early proofs of God. I think there may be a flaw in this line of thinking.

>Thus humans can never understand the brain, because the brain has equal complexity to itself, and for the brain to understand the brain it must have greater complexity than itself, which is impossible.

We don't need to know the entire state of the brain at any given moment to understand the mechanics of the brain, just as we don't need to know the entire state of every molecule on planet Earth in order to understand its physics.


The problem with your line of reasoning is that you are thinking of technological progress only in terms of individual achievement.

This is what technological iteration, through generations of human beings, gives us all. Humans achieve superhuman levels of technological achievement because we iterate on what others have left for us.

Knowledge is like a staircase, built from the hard work of many great human beings, and even for hard problems, with enough iterations and with continuous progress (without social deterioration), we can achieve anything.

Somebody will fill in that "last" step of the staircase and make a paradigm shift for all of us.


Yes. You might find interesting some related ideas, such as this article on the Bayesian Brain. In this view, the brain is a complex system that learns to adapt to and predict its environment, similar to other complex systems, even companies.


[flagged]


Your comments in this thread have broken the guidelines with swipes like "Of course you completely missed the point", "Leave it to the internet to read a comment and take it the wrong way", "Sigh", "Sheesh", and so on. That's just the sort of thing users here are asked to avoid. Would you please review https://news.ycombinator.com/newsguidelines.html and edit all that out of your comments here from now on?

Also, please don't use allcaps for emphasis. That's in the guidelines too.


All right, fine. I will.

I think the flag link has been disabled for my account. If you re-enable it, that would be helpful in dealing with this problem.

People are rude to me all the time on HN. And the rules of HN expect me to be civil in the face of a culture that is uncivil. If you would like me to help you promote the guidelines, it would be helpful if you allowed me to flag others as well, instead of responding in kind.


Flagging has not been disabled for your account.


Of course, we could use abstraction and analysis to tackle this...





