
>but would it actually be sad?

To the fullest extent that you are capable of being sad. There is no even remotely plausible alternative to this physicalist argument.

I would argue that you aren't nearly as conscious as you think you are. That's the conclusion I've come to after many hours and years on the meditation cushion cultivating awareness of my own cognition. Any thought, choice, or action that you make doesn't actually happen in your conscious brain. You just become aware of it after it's happened. That's all there is.




It's premature to take for granted that a paper brain emulator can exist in principle.

Take something else, less... emotionally laden: a black hole. Could we model a black hole on paper? If so, we ought to just be able to drop a computational particle in and see exactly what happens at the singularity. Except we can't; the whole reason we call it a singularity is that we get divide-by-reality errors if we try to compute it.

But I would argue there's a different problem. A black hole is the most computationally efficient method of calculating itself. Any method of accurately emulating the black hole on the Planck level is going to take more energy (and mass) than the actual black hole. As a result, those pieces of paper would literally collapse into an actual black hole long before they could properly emulate the black hole they were modeling.

Which isn't to say that the brain is anything like that, but the point is that it's not a foregone conclusion that we can construct a completely accurate emulator of a physical object.

Finally there's something of a category error in making the claim that pieces of paper can be sad. A paper emulation of a bar magnet would not itself be magnetic. In the world of the emulation it would produce an output that would model magnetism, yes, but that's the extent of it. Our paper brain emulator, if it were constructible, would produce output that would be a model of sadness in the emulation, but it would not actually be sad.


I think I'm almost entirely with you except for these two statements:

>It's premature to take for granted that a paper brain emulator can exist in principle.

I think it is premature to say we could simulate all of quantum mechanics with sheets of paper, so on that technicality I totally agree. However, I think it's quite unlikely that we couldn't, in principle, simulate the components required for a faithful replication of consciousness. But you're correct that it's probably not a given.

>Finally there's something of a category error in making the claim that pieces of paper can be sad. A paper emulation of a bar magnet would not itself be magnetic. In the world of the emulation it would produce an output that would model magnetism, yes, but that's the extent of it. Our paper brain emulator, if it were constructible, would produce output that would be a model of sadness in the emulation, but it would not actually be sad.

I think I disagree with almost all of this. The bar magnet is a bit of a bait and switch because the system, the input, and the output weren't well defined. With consciousness, you need to define the system, the system input, and the system output properly. If the system has persistence of thought/computation, and the inputs and outputs can be defined in a way identical to those of your own consciousness, then it would actually be sad, in every way that you are. In particular, if you replace the bits of paper with "neurons", your argument should absolutely still hold. So the logical extension of this argument is that either YOU also aren't capable of being sad, or your consciousness does not originate in your brain. Both of those, I think, are sufficiently absurd that we can reject them out of hand.


> As a result, those pieces of paper would literally collapse into an actual black hole long before they could properly emulate the black hole they were modeling.

That is wrong. Any method of emulating the black hole within a region no larger than the black hole's own Schwarzschild radius would collapse, but you could do the calculation over a larger volume and avoid that. The most massive stars are far more massive than the least massive black holes.

The smallest known black hole I could find from a quick search is XTE J1650-500, at about 3.8 solar masses with a "15 mile"[1] diameter. There are a lot of stars more massive than that.[2]
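
As a quick back-of-the-envelope check on those numbers (my own sketch, not from the linked articles), the Schwarzschild radius r_s = 2GM/c^2 for 3.8 solar masses comes out to roughly 11 km, i.e. a diameter of about 14 miles, consistent with the quoted "15 mile" figure:

    # Sanity check: Schwarzschild radius for a 3.8 solar-mass black hole
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    M = 3.8 * M_sun                # quoted mass estimate of XTE J1650-500
    r_s = 2 * G * M / c**2         # Schwarzschild radius, meters
    diameter_miles = 2 * r_s / 1609.34

    print(f"r_s ~ {r_s / 1000:.1f} km, diameter ~ {diameter_miles:.1f} miles")
    # -> r_s ~ 11.2 km, diameter ~ 14.0 miles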

[1] https://www.nasa.gov/centers/goddard/news/topstory/2008/smal...

[2] https://en.wikipedia.org/wiki/List_of_most_massive_stars


>Any method of emulating the black hole within a region no larger than the black hole's own Schwarzschild radius would collapse, but you could do the calculation over a larger volume and avoid that.

...This was my intuition as well, but I'm far from certain about it. It's not uncommon for the universe to find sneaky ways to prevent us from "cheating" so to speak. These often result from deep symmetries/conservation laws and fundamental limits on information.

I'm actually a big opponent of the simulation argument for that reason. I don't think you can accurately simulate the universe without having a universe to simulate it in. Otherwise, you could just 'turtles-all-the-way-down' the simulation. The information density would have to be unbounded, and this seems to be in disagreement with fundamental laws of the universe.

We don't have a theory of quantum gravity... it's entirely possible that simulating the black hole in a more spread-out manner would be impossible. I could envision a fundamental tradeoff where replicating the information exchanges between the microscopic constituents requires either that you satisfy the hoop conjecture, or that you keep increasing the size of your model without bound, such that for any finite size you still end up satisfying the hoop conjecture. (Particularly since the required mass/energy density goes down significantly as the radius increases.) I know that's wild speculation, but I feel like it's not quite crazy enough to rule out.


How could you possibly know that you "became aware" of anything without some information being sent back from that-thing-that-became-aware? You know it so well that you are even able to comment about it.

I think the fact that people('s brains) are so sure that they are conscious is pretty clear evidence that for some reason your brain is getting messages back from this consciousness thing. Either that, or brains are consistently built so they can't shake the idea that they have an observer (see: how many comments there are on every HN article about consciousness, including this one).


>How could you possibly know that you "became aware" of anything without some information being sent back from that-thing-that-became-aware?

I'm sorry, but I don't quite understand either your question or the following paragraph. However, I feel like it could be an interesting line of discussion if you can explain it to me better.

I'm a bit lost; it sounds like you're suggesting consciousness and your brain are two different things? Is that right?


Well, the two things are: the part where physics does stuff, and then the part where you become aware of it (whatever that even is, but you experience it nonetheless; let's call it the consciousness).

Random assumptions/observations that might be relevant:

- The universe seems to generally work under "simple" principles that can be explained with a few mathematical equations.

- The brain is made of the same stuff as everything else.

- You are only aware of a very, very small fraction of the things that actually happen in your body: you are not aware of what happens in your visual cortex, just what comes out of it. You are not usually aware of the sound waves themselves, just the conceptual "sound". I would call all of that "processed data". Sound, images, touch, all that stuff generally gets blended together into one cohesive experience.

- All of that looks like a flow chart when I think about it. Photons go into your eye, stimulating nerves, electrons flying everywhere. That cascade of nerve stimulations goes into the visual cortex, where it is processed into the stuff you actually care about, but the data is still essentially just electrical signals at that point. But then suddenly, you are aware of that processed data. It sounds like the brain took that processed visual data and sent it out through a transmitter, to be listened to by none other than my "self".

- I guess I would then relate my consciousness to a radio receiver that's waiting on the other end. And in some specific part of my brain is a transmitter.

- A radio station can't possibly know if anyone is listening to the signal they sent, unless the listener communicates back that they are listening. In the same way, it doesn't make much sense to talk about how aware you are if the "aware part" (the radio receiver) doesn't talk back.

- Random final observations, if this model were true: to maintain conservation of energy, the receiver would likely have to act like a capacitor, storing the energy that was used to send the message to it and using that energy to send a message back. Either that, or it communicates back by collapsing wavefunctions (okay, now it's getting a little out of hand). Also, using this model, maybe you could say that your sense of the progression of time is proportional to the amount of information sent, which would explain why time goes so quickly while you are asleep: your brain stops sending stuff out the antenna (also getting out of hand).

Hopefully you find this idea at least a little interesting. I feel like there's a world of possibilities here that haven't properly been considered.

I would love to hear your take on it.


Ah yes ok I get it now.

There is feedback from the attention centers. In particular, when you pay attention to things, those connections get reinforced. So the real feedback occurs on a bit of a different timescale than the rest of your cognition, though there's probably some shorter-term feedback too, if I had to guess.

In a sense, I think there's a center of attention/awareness that makes up almost all of what we usually call consciousness, largely because its contents also get fed back in as an input. That's the self-awareness component.


That doesn't solve anything, though; there is still the "you" that becomes aware.


Not really. If you really start digging, you realize that there isn't much "you" left there to be conscious. To a great extent, I think consciousness is largely an illusion. Most of what people typically think of as consciousness is really the contents of their attention. But you don't even truly control that attention. The decisions to switch from one unconscious information stream to another are themselves unconscious information streams. In essence, I'd argue that upon inspection, "attention" is the apparent last vestige where any consciousness may reside. But even deeper inspection reveals that it itself is quite an empty concept.


That's a good description of where I'm at, and I keep wondering if maybe the definition most people are using for "consciousness" just doesn't match mine. The longer I meditate, the less consciousness matters to me; it seems totally clear that I become aware of things after they've already been generated elsewhere in my brain, and that awareness is just a kind of self-reflection which maybe helps with higher-order planning or whatever. I'm still open to realizing I've missed the mark, especially because so many prominent meditators seem very focused on (or even enthralled by) the mysteries of consciousness. Until then, I'm really struggling with why people would try to elevate this subjective experience to a fundamental aspect of physics.


> I keep wondering if maybe the definition most people are using for "consciousness" just doesn't match mine

I think this is true. When we talk about consciousness, we talk about it as if consciousness has the ability to control the body and speak, and as if our thoughts are controlled by our consciousness. I think the illusionist arguments against that are very strong here. We can change how someone thinks or acts through physical means (like lead poisoning), so those things must be mostly physical.

I have come to the conclusion that consciousness is probably only an experiential thing. It might not be able to control anything, but there is something fundamental there experiencing something. We can't know that the world is real: we could be a brain in a jar, or in the matrix, or on a massive DMT trip right now. But we can know "I think, therefore I am", probably better written as "I experience, therefore I am". It seems very strange to throw out the one thing we know to be true, that we are having a conscious experience, in favour of something we don't know to be true, that the world is real and causing that experience.


> I have come to the conclusion that consciousness is probably only an experiential thing. It might not be able to control anything [..]

Doesn't it cause us to at least have these kinds of conversations?


There is always still the _subjective experience_ of paying attention to whatever I'm paying attention to, of "being the one that sees my visual field", and so on.

There must be a fundamental difference between the subjective experience I have of vision and that of a computer with a camera and processing software; I can't imagine that it has a similar experience.

How come there _is_ a subjective "me" that experiences things and can pay attention to them? Given that we are clearly bags full of extremely fancy chemical reactions.


I think there is a difference there, but it's largely because the computer with a camera is such a simple system at this point.

In contrast, you have many, many layers of very sophisticated and interconnected abstraction and reality modeling between that visual stream and other forms of processing. Typically, the higher the level of abstraction, the more "aware" of it you are as it gets filtered and dumped into your attention centers.

In short, even our most sophisticated, state-of-the-art "deep" learning algorithms are but puddles compared to the ocean of depth available in your brain... and almost none of them have any form of attentive aggregation and selection.
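
To make "attentive aggregation and selection" a bit more concrete, here's a toy sketch (purely illustrative; the stream values, salience scores, and function names are made up, not a model of any real system or of the brain): several already-processed streams compete, and a softmax over their salience decides how much of each gets passed up to the next level.

    import math

    def softmax(scores):
        # Turn raw salience scores into weights that sum to 1.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def attend(streams, salience):
        # Blend the streams, weighted by how salient each one is;
        # the most salient stream dominates what reaches the "next level".
        weights = softmax(salience)
        return sum(w * s for w, s in zip(weights, streams))

    streams  = [0.2, 0.9, 0.4]   # e.g. vision, sound, touch (made-up scalars)
    salience = [1.0, 3.0, 0.5]
    print(attend(streams, salience))  # ~0.79, mostly the second stream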


> There is always still the _subjective experience_ of paying attention to whatever I'm paying attention to, of "being the one that sees my visual field", and so on.

"Being me" is an experience or feeling. We have lots of other feelings both from within and outside our bodies. Could it be that consciousness is simply how it feels to have a focus of thoughts and attention in our brains?


Are you aware of the model of the mind system from The Mind Illuminated (a book that teaches meditation)?

First, I have to clarify concepts. There is a consciousness created by the mind. It's the place where sensory input is experienced. It's also the place where thoughts are experienced. It's basically the screen that allows the different parts of the mind system to communicate.

Then, there's the consciousness talked about in this article. It's a more primordial quality.

Now, that model of the mind system says that attention and awareness are the two modes of the mind. Consciousness as the screen of experience is created by the mind. Perception is created by the mind as well.


That’s the “Flashlight in the Dark” concept of consciousness as the focus of attention.


>To the fullest extent that you are capable of being sad. There is no even remotely plausible alternative to this physicalist argument.

Actually that's just hand waving, and taking for granted what needs to be proved.


I feel like you could be equally dismissive to the statement "There is no remotely plausible alternative to a godless universe."

And yet, I think that statement stands pretty strong on the basis of what we do know and what we can know.





