Hacker News
Information Is Physics (acm.org)
234 points by mana99 on Nov 29, 2019 | 75 comments



Landauer's principle intuitively: the laws of physics are reversible, and therefore two distinct states (1 or 0) always map to two distinct states (1 or 0, evolved). So how can you erase a bit - take a 1 or 0 and map them to a 0? The answer is that you have to send the 0 or 1 to the environment i.e. map 1 or 0 to (system,environment) = (0,1) or (0,0) , which manifests as the dissipation of heat.
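The heat dissipation described above has a concrete minimum. A back-of-the-envelope sketch in Python (only assumption: the standard value of the Boltzmann constant):

```python
import math

def landauer_energy(temperature_kelvin: float) -> float:
    """Minimum heat (joules) dissipated when erasing one bit,
    per Landauer's principle: E = k_B * T * ln(2)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs about 2.87e-21 J.
print(landauer_energy(300.0))
```

For comparison, real CMOS gates dissipate many orders of magnitude more than this per bit operation.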


So where does the information go upon the final heat death of the universe? Does it become encoded by, say, the final diameter of the post-expansion universe (i.e. by some "environmental" property at the sub-quantum field level)?


There is no environment when you are considering the entire universe. This means it evolves unitarily and no information is ever erased.


> evolves unitarily and no information is ever erased.

Nor created.


The heat death is when the universe is in its maximal information state.


The information doesn’t go anywhere. In the heat death of the universe, all particles are at absolute zero due to the heat of the universe spreading uniformly through the vacuum of space.


Fascinating! Thank you for sharing, OP! Having recently gone down a similar rabbit hole [1], having references and starting points is incredibly useful for further exploration, as this stuff is very hard to search for. There does seem to be a direct computational link between information and the universe - and the way we’re using this link now, with classical computers, is highly abstracted and inefficient (the article also compares the current energy cost of computing vs. the much, much lower theoretical limit). Quantum computers come closer to the nature of that link, but still require many abstractions.

I wonder if it will be possible to eventually compute without abstractions, directly on the substrate of the universe. Though that might be equivalent to real magic ;)

1. https://juretriglav.si/computing-with-nature/


I don't think you need to bring quantum computing into the picture. For example, "naturally" 3D printing a mango takes a few years. Just drop a seed in the right place on Earth and wait. Isn't that computing on the substrate of the universe?

Are you asking if nature can be coaxed into doing it in a few seconds or a day? Or maybe you are not bothered about time and want to drop in that seed and grow a chair? The answer is probably yes to both. It will just take some time to get there. I don't think quantum computing will be required but who knows...


> Or maybe you are not bothered about time and want to drop in that seed and grow a chair?

If you don't mind giving the tree a little guidance as it grows: https://en.wikipedia.org/wiki/Tree_shaping


Yeah, dropping a seed in and growing a chair very quickly sounds like it is in the same ballpark, or at least further along the spectrum of efficient computation. That seed (program) would be very tricky to produce, though. That's the opposite of current practice, where we write programs relatively easily but the machines that run them are incredibly complex, both to operate and to manufacture. With the seed, though, the machine would be nature itself, 1:1.

I agree that there is no need to bring quantum computing into the discussion, but it is higher on the efficient computing scale. It also requires a lot of effort to come up with programs that make sense, and they use more of the natural phenomena (superposition, entanglement, etc.) compared to classical computers. It’s something to look at for inspiration, at least?



For anyone interested in this, the Landauer limit now has experimental evidence.

Also, it is now believed that the Margolus-Levitin theorem puts a lower bound on energy even for reversible computations.


Weird. It's new to me that we didn't have experimental evidence before, but I guess it would have to be pretty subtle equipment.

I wish I knew how to catalog/index my assumption/belief set, so I could scrub it down for unverified sub-assumptions from time-to-time. I'm happy to update them, when they are proven wrong, but I don't know how to do this methodically and as a result, my brain fills up with crap. And then I can't trust my own glib reasoning, and then I have to be methodical for anything I need to think through, and that slows me down. And I hate that.


There are quite a few things in physics that didn't have direct experimental evidence, though most physicists are convinced because the theoretical results and partial evidence are persuasive enough. If this Science article is accurate, there was no direct experimental evidence that the majority of the Sun's energy comes from the fusion of protons into helium until the neutrino experiment in 2014.

https://www.sciencemag.org/news/2014/08/underground-experime...


"They did a stellar job..."

Well ... yes, yes they did.

Pretty fascinating, if light, overview of what's involved in neutrino detection. A case of understanding theoretical models (causal knowledge), materials, purity and processing, conversion processes (neutrino interaction -> light -> photodetector), statistics, and ultimately, a view of what was happening inside the solar core eight minutes ago, events which won't manifest on the solar surface for another 100,000 years.


> the theoretical results and incomplete evidence are persuasive enough

In the quoted example, the "direct evidence" from that paper was not necessary to confirm that the other calculations and models match. The achievement presented in that paper was measuring the specific neutrino flux, but there were many earlier measurements that matched the developed models and weren't less convincing.

It's always good to read more than only journalist's interpretation. Here's from the paper itself:

Even before that, "solar neutrinos from secondary processes have been observed, proving the nuclear origin of the Sun’s energy and contributing to the discovery of neutrino oscillations."

Only "those from proton–proton fusion have hitherto eluded direct detection."

https://www.nature.com/articles/nature13702

The paper itself stated that the reactions in the Sun were already proved.


Woah!


The foundations of our knowledge are very quickly based on assumptions/beliefs.

Light is not really understood. It's a wave and a photon, depending on what we need to solve. We incorrectly say light is a "wave", but "wave" is a verb!! I remember thinking this sounded ridiculous when I was in school. You probably did too! We should all go back and redo our assumptions like you say.

The other thing that disappointed me was the explanation of why magnets attract/repel. (Do you remember your feeling about it? Did your innocent child mind reject the explanation?)


Light is well understood but the explanation depends on what level of abstraction you require. There are quantum, classical and naive explanations of light and each are radically different. It's not our understanding that is flawed but rather the assumption that if there's two different explanations of the same physical phenomenon it indicates a lack of knowledge (rather than an abundance of knowledge).


That just moves the problem to "which layer of abstraction should I use?" E.g., if I'm developing a new material that interacts with light, should I consider light to have mass or not? Well, if it's a space sail, photons have mass; if it's not a space sail, light has no mass. But I'm not sure if this new material is a space sail... which abstraction should I use?


Your brain is not a computer


We have no scientific evidence of any process in any brain or mind which isn't computable in the sense of Church/Turing. On the other hand, I personally feel that I experience episodes that couldn't be computed, but I struggle to articulate the smallest parts of these. One is the small creative epiphany of writing, the structures of which can be created computationally, but the turns of sense and narrative don't seem to me to be the outcome of functional transformations.

I know that others will hold to the idea that they are and could be, but to me that renders us as zombies, and that isn't satisfactory in the sense that while a zombie that acts like a real person would be evolutionary fit, I see no reason why the zombie should believe it has free will, or could, and yet I do.


The zombie could believe it has free will the same way it holds any other belief - whether it happens to be a delusion is immaterial to whether it's able to hold the belief and there's nothing particularly unique about this particular belief. It's also far from implausible that belief in free will could have some evolutionary utility (or, more likely, be a concomitant of some other property that has evolutionary utility). None of this proves you wrong, of course, I just don't see why we should imbue our common sense intuitions in this one area with such authority when they have failed us in so many others.


Perhaps you are unaware that the term "computer" originally referred to a person who performed scientific calculations by hand (using their brain).

https://www.amazon.com/When-Computers-Human-David-Grier/dp/0...


And yet both of them start to degrade when I spill alcohol on them.


It's hard to imagine what it would even mean to get experimental evidence against the Landauer limit. You'd have to throw out basic thermodynamics. (It would be about the same as getting experimental evidence for a perpetual motion machine: possible, but basically requiring a revolution in fundamental physics.) The disputes over the Landauer limit were conceptual and definitional disputes, not disagreements over well-formed predictions.


On average, you can demonstrate the Landauer limit.

https://advances.sciencemag.org/content/2/3/e1501492

Even in those experiments, there are individual runs where energy is recovered during the erasure of the bit, although averaging over a large enough number of runs recovers the expected k_B T ln(2) value.


I think the next step for physics could be potentially in establishing the link between abstract information and what is perceived as "real matter".


This is a good first place to start:

https://en.wikipedia.org/wiki/Self-organization

Forms can be quantified by their entropy with respect to their component parts -- generally, the more ordered, the less entropy. Constructing things broadly corresponds to reducing the entropy of the set of configurations of the components.


See thermal physics, which combines statistical mechanics and thermodynamics:

https://en.wikipedia.org/wiki/Thermal_physics

For a computation-focused example, see Schneier's essay on the strength of a 256-bit symmetric cipher:

https://www.schneier.com/blog/archives/2009/09/the_doghouse_...

Edit for completeness: mass and rest energy are related by E = m*c^2.
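Schneier's thermodynamic argument against brute-forcing a 256-bit key can be reproduced in a few lines. This is a rough sketch with assumed figures (an ideal computer at ~3.2 K, and ~1.21e34 J for the Sun's annual energy output), not the essay's exact numbers:

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 3.2                      # assumed operating temperature near the CMB, K
bit_flip_cost = k_B * T * math.log(2)   # minimum energy per bit change, J

# Assumed annual energy output of the Sun, ~1.21e34 J.
sun_year_joules = 1.21e34
counter_states = sun_year_joules / bit_flip_cost  # bit flips that energy buys

# A year of the Sun's entire output only cycles a counter through ~2^188
# states -- far short of the 2^256 needed to enumerate all keys.
print(math.log2(counter_states))
```

The point of the exercise: even at the Landauer limit, merely counting through a 256-bit keyspace is thermodynamically out of reach.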


So a form of Platonic realism? The abstract stuff forms the structure of matter.


Self squared dragons and monstrous moonshine.


I find it amusing that I didn't say anything for OR against, just mentioned them and that apparently someone 'disagrees.' With what, an inaccuracy? Then post a reply. Just voting says: I don't get it.


This reminds me a bit of David Deutsch's Constructor Theory. What's the mainstream acceptance of that like at the moment?

https://en.wikipedia.org/wiki/Constructor_theory


I'm always ticked off when I hear the hype around constructor theory. While information theory has had profound impacts in physics, constructor theory has always seemed to me to accomplish nothing but add fancy words. Yet it has done very well trading on the past prestige of its proponents, and promoting itself through popular talks where no real scientists are present to question them.

For example, constructor theory "explains" why life exists. Their argument is as follows (condensing 20 pages of vague equation-free words into a few sentences): first assume that there exists a constructor for life. Such a constructor is defined to be an object that can create new life. This leads to the creation of more life. Therefore life exists.

I was so shocked by this that I reread their paper three times before concluding their argument really was that vacuous. Their physics-related stuff is similar: assume the desired conclusion, then pretend you've derived it by adding the vague term "constructor" everywhere. (To derive F = ma... assume F is a constructor for ma!) It's so content-free I literally can't find anything specific to criticize. But they keep getting press by making more and more outrageous claims.

The fact that tens of thousands of curious people around the world have been duped into thinking that it is on par with ideas that actually have content proves the absolute bankruptcy of science communication. It is also a warning to avoid blind trust in past credentials.


The first sentence of the Wikipedia article is "Constructor theory is a proposal for a new mode of explanation in fundamental physics".

Well, if it's a "new mode of explanation", then the baseline expectation is that it doesn't add anything to all existing explanations of physics. The point of it would be to somehow allow explanations of things that were unexplainable before...

I mean, if you have a new way of explaining things, you have to derive all of the things we already know are true.

Edit:

The "Outline" at the bottom of the page doesn't sound like a profound new way of understanding the universe; it makes my bs meter twitch, but I don't see anything that provides a definitive reason to consider it all null.


You’re completely correct: new modes of explanation can be extremely useful, and you can’t judge from the Wikipedia article alone because it can’t get to the meat! That’s why I had to read the papers. Turns out, the meat just isn’t there, it doesn’t exist. There aren’t any ideas beyond the vague popsci outline, which by itself does no good.


“It is also a warning to avoid blind trust in past credentials.”

Indeed. A thing is initially more interesting if the author has past contributions of worth, but historical performance should never lead to blind faith about claims. Always remember Linus Pauling’s irrational infatuation with vitamin C.


"settled into something that many people agree is a problem."

love this phrase.


Can someone ELI5 the general concept?


Information is physics: information (bits) can be viewed as energy. This view allows the use of tools and formulas from statistical physics, relating them to information theory (Shannon entropy == thermodynamic entropy). It then becomes possible to describe and quantify the information flow between systems, or the thermodynamic cost (heat generation) of computation/flipping bits. It allows units of measurement such as temperature and pressure to be applied to information.

Physics is information: energy can be viewed as information. Some researchers investigate the notion of "it from bit": that all of physical reality is the result of, and derives its essence from, yes-no bit strings/Turing programs. This view offers answers to the limits of physical inference machines such as the human brain. It leads to the field of digital physics, offering a notion of a computational universe, applying our knowledge of algorithmic information theory, such as the distribution of random strings, to physical reality.
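The "Shannon entropy == thermodynamic entropy" link above is easy to make concrete. A minimal sketch: Shannon's H in bits, which via Landauer's principle corresponds to a minimum erasure cost of H * k_B * T * ln(2) joules per symbol:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.469

def erasure_cost_joules(probs, temperature_kelvin=300.0):
    """Minimum heat to erase one symbol from this source (Landauer)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return shannon_entropy(probs) * k_B * temperature_kelvin * math.log(2)

print(erasure_cost_joules([0.5, 0.5]))  # ~2.87e-21 J per fair-coin bit
```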


Scott Aaronson has a nice blog post[1] on the same subject. Basically, information is a physical thing in the universe and is just as important as space or energy or time.

1: https://www.scottaaronson.com/blog/?p=3327


There's a lot here, but this specifically talks about AdS/CFT, which seems to be a pretty core concept: https://www.youtube.com/watch?v=klpDHn8viX8

Just linked this on a similar information theory thread [1] (glad it's so popular on HN right now!)

1: https://news.ycombinator.com/item?id=21653230


A contest near the end of this chart: https://xkcd.com/435/


where are the philosophers with applied consciousness to imagination?


Does this relate to entropy (information theory) by any chance?

Information Theory part 12: Information Entropy (Claude Shannon's formula) https://www.youtube.com/watch?v=R4OlXb9aTvQ




Feel sad. It is really one of the few areas with long-term impact. Sadly, my young fellow HKUST student cannot have a chance to explore and see it. Even a funeral home refused to bury him. RIP. Just not sure what to say.


It was only a matter of time


What else could it be? Everything is physics.


That remains up for debate. More to the point, the claim in OP is the converse of the one you're trivializing.


This is the same sort of claim as everything is water, the four elements, matter, mind, math, or according to this article, information. The difficulty as always is showing how everything is actually X. It started with Thales or Parmenides, and the allure of reducing everything to one substance remains with us to this day.


Except physics is precisely not a substance. So let me ask that again, what else could information be? Even dualistic philosophies are conceptually reducible to "physics" because physics is all there is, by definition. To paraphrase George Carlin about the Earth and plastic, even if you think "minds" or "information" are not "physical" (by which people usually mean material, which is yet again something else), then redefine physics to be "old physics"+"that new arbitrary thing you named" and you'd still have physics.

Those discussions might involve interesting physics, but it's certainly clouded in very poorly chosen and confusion-inducing terminology.


Physicists certainly don't concern themselves with the study of everything. Take sociology or art, for example. At best you can say everything is made up of physical stuff.

But even then, there are questions. Are math and logic made of physical stuff? What about causality or laws of nature? And what of universals or possible worlds or counterfactuals? If those exist, are they physical?

And then there's consciousness, and the ideas we have about the world, of which the study of physics is but one domain (containing concepts of energy, mass, fields, laws, information). Are these ideas physical?

What about the debate over the proper interpretation of quantum mechanics? Is that physical or philosophical? Is philosophy physics?

You quickly run into problems when you say that everything is domain X, because then you have to justify lumping everything into that category.

And the idea that everything is physics is open for debate, so clearly not everyone agrees that it is true by definition. Most things in philosophy don't get a free pass by virtue of definition, because people aren't going to usually agree to go along with said definition. Instead, they will want to ask what it means for everything to be physical.


That the discipline of physics doesn't study everything doesn't entail that everything isn't physics. The title of the OP doesn't refer to the discipline.

As for your other points, I'm familiar with the debates on physicalism in their various forms. I just believe most of them are elaborate semantic games with very little actual content.

That people will ask what it means for everything (or anything) to be physical is precisely my point — stating that "information is physics" is a vague and (in my view) probably empty/tautological statement if you don't explain what you mean by that more precisely.


Theories are now emerging that the Universe is running one large Bayesian learning algorithm (Bayesian inference itself is proven to be an optimal knowledge-creation method).

See e.g. Bayesian Brain and Universal Darwinism
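For anyone unfamiliar with the Bayesian inference mentioned above, the update rule itself is one line. The numbers below (a hypothetical diagnostic test with 1% prevalence, 95% sensitivity, 5% false-positive rate) are purely illustrative:

```python
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' rule: posterior = prior * likelihood / evidence."""
    return prior * likelihood / evidence

prior = 0.01                                 # P(disease)
p_pos = 0.95 * prior + 0.05 * (1 - prior)    # P(positive test), total probability
posterior = bayes_update(prior, 0.95, p_pos) # P(disease | positive test)
print(posterior)  # ~0.161: still more likely healthy than not
```

The counterintuitive result (a positive test only raises the probability to ~16%) is exactly why the rule is interesting as a model of knowledge updating.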


See also: brain as hydraulics, brain as a system of cogs, brain as a computer.[1]

[1] https://aeon.co/essays/your-brain-does-not-process-informati...


Argh, that article! Those are metaphors at some level of representation! You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum mechanical rules, so it can't really implement deterministic algorithms and error-free information storage. The three conditions stated in the article are basically the setup for reinforcement learning (which can be implemented at some level of abstraction on a computer), and the question of whether a representation is required or not is a mathematical one. For linear systems with Gaussian noise, optimal control is an answered question: yes, you optimally estimate the state, then you base your controller on the optimal estimate. For more complicated systems it is unclear if the representation is required or not, but it sure seems reasonable that some level of representation is required. In the baseball example, they are still talking about keeping a constant optical line, not about what happens to raw optic nerve inputs. It is a reduced-dimensionality representation!
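The linear-Gaussian case mentioned above (optimally estimate the state, then control from the estimate) can be sketched in one dimension. This is an illustrative toy, not a model of anything biological:

```python
def kalman_step(x_est, p_est, measurement, q, r):
    """One predict-update step of a 1-D Kalman filter.
    x_est, p_est: current state estimate and its variance.
    q, r: process and measurement noise variances."""
    # Predict: the state is assumed static here, so only variance grows.
    p_pred = p_est + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (measurement - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Noisy readings of a true value 5.0 pull the estimate toward 5,
# and the estimate's variance shrinks as evidence accumulates.
x, p = 0.0, 1.0
for z in [4.8, 5.2, 5.1, 4.9]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.5)
print(x, p)
```

A controller would then act on `x` (the representation) rather than on the raw measurements, which is the separation principle in miniature.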


> You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non deterministic quantum mechanical rules so it can't really implement deterministic algorithms and error free information storage.

Jaron Lanier argues that computers and computation are cultural. Why give them a special ontological status? Yeah, we can think of the universe as computational or informational. We can also think of it as mathematical, mental, or just whatever physics posits (fields, strings, higher dimensions, etc). Whatever the case, when someone like the article in the OP states that reality is X, an ontological claim is being made. It's metaphysics.

One could instead argue that the world just is itself, and anything we say about it is our human model or best approximation. Which would be a combination of realism (the world itself) and idealism (how we make sense of it). Then it's just a matter of not mistaking the map for the territory. Instead of saying that the brain or the universe is X, we say that X is our best current map for Y. We don't say that London is a map. That would be making a category error.


I think the article's overall stride is not so much about how these things are represented in neuronal mapping per se, and more that we shouldn't apply the idea of computer mechanics to organisms.

Less load/process/store of absolute data and more like natural processes such as the process of erosion creating rivers. The analogy of the environment as a "lock" and organisms are just the most fit "keys" to success in particular environment.

So the computer analogy is bad because organisms are more a matrix of interactions, feedbacks and responses that work well enough, but don't follow a "logical" design. This can be replicated within a computer easily, and the result is evolutionary computation & hardware, genetic algorithms and evolutionary neural networks. The problem in understanding the result of evolutionary systems is that they're blind to design and respond only to fitness, and therefore create systems so tightly coupled that it's a quest to understand how the model even works.

So the article is suggesting we shouldn't apply human design principles to evolved solutions. Perhaps we need some kind of "messy science" to make sense of it all.

Going forward, machine learning running evolutionary algorithms on neural networks should be able to produce sufficient incomprehensibility for us to be studying our own inventions for years to come.
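The genetic algorithms mentioned above are easy to demonstrate in miniature. A minimal sketch (the fitness function, population size, and mutation scheme are all arbitrary illustrative choices):

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=50, seed=0):
    """Minimal genetic algorithm over bit strings:
    truncation selection plus single-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # keep the fitter half
        children = []
        for parent in survivors:               # refill with mutated copies
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "Fitness" here is just the number of 1s; selection blindly climbs
# toward the all-ones genome without any notion of design.
best = evolve(fitness=sum)
print(best)
```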


Thank you for this. I spend a non-trivial amount of time telling people working in AI and machine learning (which I also do) that the brain isn't some parameter optimization machine and that analogies from whatever technology or math people are excited about aren't very useful. I wish some neuroscience education and articles like these were some part of the ML canon.


I couldn't disagree with you more. The article referenced by GP mistakes the form for the function. Just because the computer uses different technology than human tissue, doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies.

And even if we don't have the correct algorithms in sight today, there is every reason to believe that whatever processes are occurring in our brains and bodies, can indeed be simulated and replicated virtually.

The only way to argue against this idea is to claim that there is some special magical non-material aspect to our existence... which no article or neuroscience education has yet demonstrated.


The comment was about universal Bayesian brains and other things that are quite a stretch to say the least. Of course, since our brains are made of physical matter, they must perform computations that other physical matter can perform.

The trap is to think about the brain in terms of things we find impressive, and about things we find impressive as being like brains somehow. Therefore analogies to steam engines, computers and deep learning. And these analogies have always turned out to be silly.


> Just because the computer uses different technology than human tissue, doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies

BUT: at least I think we are far from it. Very far. In the sense that to get e.g. AGI, we don't just need more computing power for the current approaches; we need radically new ones. And I actually don't see why this would be opposed to more neuroscience education (instead of excitement for cool but still quite limited models), or why this would be pretending that there is some "special magical non-material aspect to our existence".

How much can you compress the essential structure and complexity of an intelligent brain? It is an open question, but if in the end you cannot compress it "enough", then the fact that it is also, theoretically, a mathematical object does not have much practical consequence. And on top of that: we already know how to make new ones...


Define intelligent.

Very tiny animal life shows what we would consider intelligent behavior. There is no particular reason to believe that evolution has come close to the minimum size that intelligence can be reduced to, as it is working on a large number of other dimensions at the same time, survival being the big one.


>> which no article or neuroscience education has yet demonstrated.

True, but there are some pretty interesting ideas out there. I'm going to have to start putting together a list of articles: from the proof that if we have free will, so do particles to some extent, to the notion that quantum computation may happen in the brain. Not saying I believe these things, but the people behind them are pretty smart.


There is no real evidence that we have free will, and the general "suspicion" in the field is that we don't. Yes, the brain is made of particles, but their arrangement is very particular and very complex, so cognition and everything else the brain does are almost certainly emergent phenomena. Boiling things down to single particles is like trying to reverse engineer a Tesla by focusing on the fact that it contains iron atoms.


> From the proof that if we have free will, so do particles to some extent. To the notion that quantum computation may happen in the brain.

The question remains, what reason do we have to believe that only a living brain, and not a silicon analogue, can tap into those features of reality?


I agree with you that humans are very different from optimization machines in that they have some freedom in what they choose to optimize. Alan Newell made this point a long time ago, back when attempts were made to describe humans in terms of control theory. It works up to a point, but autonomous behavior needs the faculty to set goals independently of pre-programmed optimization points as well as current situational factors. Humans, Newell argued, should be understood as knowledge systems that operate on their representations of the world, but are equally adept at simulating the world in their heads, and can create knowledge beyond current representations.

The article, however, is rubbish. As a psychologist, I cringed throughout. It is a blur of half-baked ideas and ill-understood controversies from cognitive science. The author manages to write an entire article about information without ever properly defining information, not to speak of representation. In the sense of Shannon, of course neurons are channels transmitting information. What else would they do?

And of course we can decode that information even from the outside, even down to discrete processing stages during the execution of mental tasks (https://onlinelibrary.wiley.com/doi/epdf/10.1111/ejn.13817). And if there are truly no representations in the brain, as the author states, how do we plan for future events that are far beyond the horizon? And even if you reject all that, there is DNA in the brain that is literally information and is expressed (decoded and made into protein) ALL THE TIME.

Regarding cognition, the good Mr. Epstein has not grasped the difference between computers and computability. I don't think anybody is looking for silicon in the brain. The smart people are asking how it is possible for a complex system to operate in a complex world without an outside unit directing its behavior. They ask "How can the human mind occur in the physical universe?" (http://act-r.psy.cmu.edu/?post_type=publications&p=14305). How is it that we can do the things we do? How do we set goals, plan steps to achieve them, and choose the right actions for implementation?

I get where you are coming from, and I agree with you regarding a dangerous misunderstanding of AI, especially ML. But this article is not helping put things in perspective. I am willing, however, to concede one point to Mr. Epstein: his brain is dearly lacking information, representation, algorithms, or any such marker usually signifying intelligent life.


The article is bad, but the point of silly analogies to various technologies remains.

Regarding the questions you addressed, my suspicion is that the brain's primary trick is to model the organism and the environment. Planning ahead, reasoning and synthesizing knowledge can all happen if you can do that. I'd argue (and of course I'm biased) that control theory is probably a better place to start thinking about the brain, in that light, insofar as building models of the world is important.


Information processing as basis for human existence is not an analogy once you accept a very basic premise of what information is. It is the literal description of what is going on, even on the biological level. I've mentioned DNA, the immune system is another example.

If you want to be successful in a complex world -- survive, replicate -- you will profit massively if you know what is going on around you better than that other thing that wants to eat you. If you can grasp the structure of the physical world and predict its changes, you will come out on top. Information processing is an evolutionary necessity, because we are grounded in a physical world. Information is the successful way to deal with the world, because it gives the organism a choice.

Control theory is great if you want to describe real-valued in- and outputs and their relationship over time, like throwing a ball. But at some point we need to become discrete and abstract the real-valued domain of space and time into symbols.


> But the IP metaphor is, after all, just another metaphor

With no apologies to the UNIX metaphor of "everything is a file", my favorite starting point for [pretending at] explaining intelligence/understanding/recognition has long been "everything is a metaphor" ;)


Everything is a file if you are brave enough.


Yes, this line of thought is pretty interesting. There is also an equivalence between quantum physics and a kind of machine learning model called the restricted Boltzmann machine, in that they can efficiently simulate each other.
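For the curious: a restricted Boltzmann machine's basic operation is alternating (block) Gibbs sampling between its visible and hidden layers, which is possible precisely because there are no within-layer connections. A minimal sketch with random, untrained weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_h, b_v, rng):
    """One Gibbs sampling step of an RBM: sample the hidden layer
    given the visible layer, then the visible layer given the hidden."""
    h_prob = sigmoid(v @ W + b_h)                      # P(h=1 | v)
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_prob = sigmoid(h @ W.T + b_v)                    # P(v=1 | h)
    v_new = (rng.random(v_prob.shape) < v_prob).astype(float)
    return v_new, h

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3))                  # 6 visible, 3 hidden units
v = rng.integers(0, 2, size=6).astype(float) # random initial visible state
v, h = gibbs_step(v, W, np.zeros(3), np.zeros(6), rng)
print(v, h)
```

Training (e.g. contrastive divergence) is omitted; the point is just the bipartite sampling structure.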



