Maps Of Matter (futureofmatter.com)
107 points by optimalsolver on June 29, 2021 | 65 comments



> Put another way: the conventional use of “emergent” has a rather passive flavour. What I'm talking about is a much more active imaginative stance, the kind of stance taken by people like Bret Victor or Satoshi or Picasso.

"generative"?


I think generative art could be emergent in the future, but as far as I'm aware it isn't currently. The best examples of designed emergence we have right now are cellular automata, like Conway's Game of Life, or primordial particle systems (https://youtu.be/makaJpLvbow).
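For anyone who hasn't played with it, here's a minimal Game of Life step in Python/NumPy (a toy sketch; the wrap-around toroidal grid is a simplification):

    import numpy as np

    def life_step(grid):
        # Count each cell's 8 neighbours by summing shifted copies of the grid.
        neighbours = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

    # A glider: five live cells whose shape travels across the grid, a
    # behaviour stated nowhere in the update rule.
    grid = np.zeros((16, 16), dtype=np.uint8)
    for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[y, x] = 1
    for _ in range(4):
        grid = life_step(grid)
    print(grid.sum())  # still 5 cells, shifted one step diagonally

The glider's motion exists only at the level of the pattern, not in the rule table, which is what makes it a good playground for designed emergence.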


Yes, though the word is much broader in application than that. See for example generative design [1]. Pattern languages also, and the deliberate juxtaposition of things as a creative catalyst. Somewhat in line with the article, I have two metaphors for that idea in the knowledge-creation arena: “Throwing things into the Great Model Collider” and “One model to the tune of another”.


I like that he is thinking in a bold way and hope it leads to something productive, but it seems like nearly all of this physics will be unverifiable and will remain theoretical. He's talking about physics at time scales and conditions that are not found naturally on Earth and that require a tremendous amount of energy to produce.


I just want to say that this website is beautiful. It's visually pleasing and spare. The HTML is purely semantic, and the CSS is no more than necessary.

To me, this is an ideal web page. Hats off to the author!


The essay claims we have a solid understanding of the basic rules of physics. I'm not a physicist but my impression is that we don't know a lot and are missing some essential parts. Black matter and black energy anyone?

Perhaps our model of the world is missing a few dimensions, which makes it hard to understand what is going on.

Related: https://www.collective-evolution.com/2013/07/02/flatland-und...


You mean dark matter and dark energy.

String theory posits 10+ dimensions, but they would be extremely tiny (compactified is the term) and cyclic, so all objects would travel through them in a modular way. Like, if you walked to the end of your bedroom, you'd have gone through these dimensions something like 10^35 times, again and again like a clock hand going around.
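The winding count is easy to sanity-check; a Planck-scale compactification radius is just a stand-in here, since the true scale is unknown:

    # Rough winding count for one compactified dimension. The Planck length
    # as compactification scale is an assumption, not a measurement.
    bedroom = 4.0              # metres walked
    compact_scale = 1.6e-35    # metres
    print(f"{bedroom / compact_scale:.1e} windings")   # ~2.5e+35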

But experiments with gravity and light supposedly show no loss of energy or speed, nor any other signature that would indicate hidden dimensions where energy could be absorbed/trapped.

It's possible these other dimensions have extremely high vacuum states, so they repel any incoming energy packets... but who really knows?

In any case, we know plenty enough to start crafting new combinations of atoms to create room-temperature superconductors. Fabrication techniques have to get better. We should be testing 10,000,000 compounds every day. If we had room-temperature superconductors we could have CPUs, GPUs, and other transistor-based technology running at 1000s of GHz. The only reason we can't do that now is the heat given off by the resistance of the materials we currently use to build these circuits!

Overclocking culture used to put liquid nitrogen on Pentium 4s and get them up to around 8GHz. Fun but not sustainable. Just evidence that the thermal envelope is the true problem for current single-core clock speeds.

A room-temperature superconductor would change computing forever. It is, in my opinion, the thing we should be focusing all our materials-science and physics efforts on. Computers that are 1000s of times faster would allow us to take machine learning, and by extension artificial intelligence, to a new frontier.
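To put rough numbers on the thermal envelope: conventional CMOS also dissipates energy just charging and discharging gate capacitance on every clock. A back-of-the-envelope sketch (every parameter value here is an illustrative guess, not a measured figure):

    # CMOS dynamic power: P = alpha * C * V^2 * f.
    # alpha, C and V below are invented for illustration.
    alpha = 0.1    # fraction of gates switching per cycle
    C = 1e-9       # total switched capacitance, farads
    V = 1.0        # supply voltage, volts
    for f_ghz in (5, 50, 5000):
        f = f_ghz * 1e9
        print(f"{f_ghz:>5} GHz -> {alpha * C * V**2 * f:,.1f} W")
    # 5 GHz -> 0.5 W, 50 GHz -> 5.0 W, 5000 GHz -> 500.0 W: dynamic power
    # scales linearly with clock at fixed voltage, which is the
    # thermal-envelope problem in a nutshell.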


The more complete claim is that we understand the fundamental laws of physics as they pertain to everyday life. The qualifications are important. Fundamental because being able to predict the interactions of a handful of particles doesn't automatically lead to visibility of the large-scale behavior. An example of this is protein folding: we know how proteins fold, but finding out how to get a particular shape basically requires brute-forcing the search space. Everyday life excludes dark {matter,energy} as well as quantum gravity and anything that requires the energies of large particle accelerators to probe.
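To make the protein-folding example concrete, here's a cartoon of how the search space blows up (a 2D lattice toy - real folding has continuous angles and solvent effects, so this vastly understates the problem):

    from itertools import product

    # Count the self-avoiding conformations of an n-residue chain on a
    # square lattice: the crudest possible stand-in for folding.
    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def conformations(n_residues):
        count = 0
        for steps in product(MOVES, repeat=n_residues - 1):
            pos, seen, ok = (0, 0), {(0, 0)}, True
            for dx, dy in steps:
                pos = (pos[0] + dx, pos[1] + dy)
                if pos in seen:        # chain collides with itself
                    ok = False
                    break
                seen.add(pos)
            count += ok
        return count

    for n in range(2, 11):
        print(n, conformations(n))
    # 4, 12, 36, 100, ... roughly 2.64^n even in this cartoon; at hundreds
    # of residues with continuous bond angles, exhaustive search is hopeless.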


> Speculative, and likely contains errors, misconceptions, and omissions; thoughtful, informed comments are welcome. Intended for a general scientifically curious audience, not requiring much detailed specialist background.

I enjoyed reading it; it is well written. Like you, I think there is a bit too much speculation to consider it an article suitable for a "scientifically curious audience". From my point of view, the parallels and "connections" drawn between phylogeny, elements, matter and physics make it a little pseudo-scientific and too metaphysical.


>With sufficiently good tweezers and a lot of patience you could reassemble a human being into a bicycle of comparable mass; and vice versa.

Considering that there is no material distinction between a living person and a dead body, I would say that this view is perhaps overly reductive. The matter of the body is essential, but not sufficient; life is fundamentally an autopoietic process, which you would need to initiate in order to have anything meaningful.

Being is uninteresting without becoming, state without change is just dead.


> Considering that there is no material distinction between a living person and a dead body

When someone dies, information is lost. With cell death comes loss of structure. Unless we can use their existing DNA to repair the damage, the person will remain dead. And even if revived, their brain's memories might still have been severely compromised. Especially if, as you hint, the process of electricity flowing in the brain is the actual memory itself, an ongoing process. It's certainly possible that if the process is halted, the memory is lost. Just like DRAM.

At the moment it's pretty much a mystery. But I agree, OP assumes a lot about structure over process. Zen says everything is a function, not an object. And by nesting functions, we get everything...

Memory and consciousness really have no reason to be entirely structural. A lot of things actually don't. Our old computer memory technology was actually just delay line memory. Perhaps our brain works in a similar way to some extent.

The universe itself, in most respects, is made out of stable processes that we abstract as objects. An atom is really just a process of energy flowing in a specific way through space over time, repetitively. Upset the process and you get a nuclear explosion which most of us would consider an event or process...not an object. So was the energy of the atom ever really an object in the first place? No, it was like a candle flame. Constantly kindling itself, of different oxygen and fuel from moment to moment. Yet we gave it a name and an identity.

https://en.wikipedia.org/wiki/Delay_line_memory
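A cartoon of the idea in Python (my own toy, not a faithful model of a mercury line): the stored word exists only as a circulating process, and one missed refresh destroys it.

    from collections import deque

    # Bits exist only as a circulating pulse train. Each tick, one bit
    # emerges from the line and must be actively re-injected; stop
    # refreshing and the "stored" data ceases to exist.
    class DelayLine:
        def __init__(self, bits):
            self.line = deque(bits)

        def tick(self, refresh=True):
            bit = self.line.popleft()      # pulse reaches the pickup
            if refresh:
                self.line.append(bit)      # re-amplify and re-inject
            return bit

    mem = DelayLine([1, 0, 1, 1])
    print([mem.tick() for _ in range(8)])  # data persists: 1,0,1,1,1,0,1,1
    mem.tick(refresh=False)                # one missed refresh...
    print(list(mem.line))                  # ...and the word is corrupted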


Yes, I totally agree. My perspective comes from a mix of zen and systems/cybernetics; matter and other temporarily stable configurations like life are stable attractors in the flow of energy. So the fundamental essence of reality is energy, rather than matter. The difference between transient energy like "events" and static-appearing things like matter is that the latter is constantly maintaining itself in a feedback loop.

Believing it to be matter is a mistake, and one that leads you to misleading metaphysical beliefs like the idea in the article above, or that all qualities can be reduced to quantities, or all things can be understood by looking at the pieces that it is made of.

It's strange to me that materialism is still such a popular belief amongst people who consider themselves rational, because even the origin point of that belief (the atomistic view of physics) fell apart at the hands of quantum mechanics.


> It's strange to me that materialism is still such a popular belief amongst people who consider themselves rational, because even the origin point of that belief (the atomistic view of physics) fell apart at the hands of quantum mechanics.

Why does QM contradict materialism? I understood materialism to be the opposite of spiritual beliefs rather than being about strictly Newtonian physics (although I guess that's how it started). Quantum mechanics describes the behaviour of matter and therefore is surely central to a materialist world view? What would be the correct word to describe the view that consciousness is the result of physical processes? I thought that was materialism.

> So the fundamental essence of reality is energy, rather than matter.

They're basically the same thing though, yes? E=mc^2 and all that?


You're thinking of physicalism, of which materialism is a sub-category. Physicalism is the assertion that there are no supernatural phenomena (which I agree with); materialism suggests that the essence that everything is built out of is matter. I think most people would agree that materialism is outdated (given that atomism has been outdated for >100 years), but many of the implications of materialism (that the whole can be explained by the parts, ie reductionism, and the idea that all important properties can be quantified, as two examples) still persist.

The (superior) alternative to materialism is emergentism. Materialism and emergentism both imply other ideas and ways of thinking about the world, which are the actual important things.


Ah OK. I think you're splitting hairs a little bit there - I would consider those two terms (materialism and physicalism) to be interchangeable; the basic idea is the same but one is a refinement of the other using modern knowledge. You could say physicalism is materialism v2 :-)

As for emergentism... that's materialism v3, so (in my head at least) these are all different ways of saying the same thing.


yeah that's fair - the problem in my mind isn't that people are using the wrong word (who cares!), but that even though we now know matter isn't the essence of reality, we still hold many other beliefs that depend on that one, and we haven't moved on from them.


I don't follow.

> many of the implications of materialism (...) reductionism

Is reductionism considered a consequence of materialism? To me, the two seem independent. Reductionism works just as well for abstract concepts as it does for physical matter.

> the idea that all important properties can be quantified

I'm not sure how this follows from materialism either. We quantify many things that have nothing to do with physical matter.

I also get the feeling that you consider these two "implications of materialism" to be outdated - I disagree with that view. For instance, I can't think of an example of a property or phenomenon that is best left unquantified - there are plenty of important things in life that we can't quantify yet, because we lack the measurement tools or conceptual framework, but quantifying them is doable in principle, and desirable.


> Is reductionism considered a consequence of materialism?

It is; fundamentally the theory is that the world is made up of lego bricks, so we can examine the world by looking at the pieces. This is a useful tool, but it doesn't actually work in all cases. The areas of science where it does work are considered "hard sciences", and the areas where it doesn't are considered "soft sciences".

Reductionism isn't outdated in the sense that it is useless; it's outdated in the sense that it cannot be used to understand all phenomena, especially emergent phenomena. Systems science is and has been creating new tools that can be applied to understand emergent phenomena.

I don't want to get too navel-gazy with this, but there are many things that defy quantification, or our efforts at quantification are and can only ever be procrustean in nature. That doesn't mean that they cannot be modelled, but that the modelling must be process-oriented rather than state-oriented. For example, if you are modelling the behaviour of a thermostat at the level of its components, you can model the causal relationships between heater and sensor as a self-correcting feedback loop - this is a qualitative model. Only at the level of the total behaviour can you model the ambient temperature and desired temperature quantitatively.

Am I making sense here? I'm still working my way through the textbooks for some of these concepts so sometimes I find it difficult to put into words.


My core objection is, the way I see it, reductionism doesn't stop working for soft sciences in any fundamental way. There's no fundamental irreducibility of a phenomenon (uncertainty principle notwithstanding, soft sciences aren't anywhere near worrying about that); the limit is our computational capacity - of our brains, of our computers, of our scientific discourse. We just can't keep so many pieces in our heads simultaneously, so we don't bother, and create higher-level abstractions to make things easier on ourselves.

That's how I view emergence too: there's no new behavior suddenly appearing when your system is complex enough, behavior that couldn't be predicted from looking at the pieces - it's just too much work to deal with pieces directly. The discontinuity we see doesn't exist in the real world - it's caused by the rungs of our ladder of abstraction.

As an example of the rungs on the ladder: we study gases on a molecular level, modelling them as bouncy balls. We also study gases at a higher level, modelling them as fluids. We go further still, viewing them as a bunch of parameters (pressure, volume, temperature). Three different perspectives, three separate sets of behaviors - yet there's no actual discontinuity in the real world, and a lot of interesting phenomena can be observed when we try to create a smooth transition between the models; that is, we look in between the rungs.
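To make the rungs concrete, here's a throwaway sketch (all units and parameters invented): the bottom-rung bouncy-ball model reproduces the top-rung relation P*V = N*k*T, with no discontinuity anywhere in between.

    import numpy as np

    rng = np.random.default_rng(0)
    N, L, m, kT, dt, steps = 2000, 1.0, 1.0, 1.0, 1e-3, 5000

    x = rng.uniform(0, L, size=(N, 3))                 # positions in a box
    v = rng.normal(0.0, np.sqrt(kT / m), size=(N, 3))  # Maxwell-Boltzmann

    impulse = 0.0
    for _ in range(steps):
        x += v * dt
        hit = (x < 0) | (x > L)                  # which components hit a wall
        impulse += 2 * m * np.abs(v[hit]).sum()  # momentum handed to the walls
        v[hit] *= -1                             # elastic reflection
        np.clip(x, 0, L, out=x)

    P_micro = impulse / (steps * dt) / (6 * L**2)  # total wall force / area
    P_macro = N * kT / L**3                        # the ideal-gas rung
    print(P_micro, P_macro)                        # agree to a few percent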

> For example, if you are modelling the behaviour of a thermostat at the level of its components, you can model the causal relationships between heater and sensor as a self-correcting feedback loop - this is a qualitative model. Only at the level of the total behaviour can you model the ambient temperature and desired temperature quantitatively.

The way I see it, causal models have only a coarse relationship with the real world. A self-correcting feedback loop can be analyzed in terms of its conceptual components, which are mathematical in nature - but you won't get from here to predicting the behavior of a real-world thermostat until you start plugging in physical models. How much complexity you'll have to deal with depends on the physical model you plug in. There's lots of space for reduction and quantification here, depending on the answers you seek. For example, the concept of "ambient temperature" is a very high-level abstraction in itself - if you're willing to break it apart, suddenly a lot more things across the model become more directly related to the real world, and easier to quantify.
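For instance, here's a sketch of what "plugging in a physical model" looks like for the thermostat (Newtonian cooling, with invented coefficients). The feedback loop is qualitative structure; numbers only come out once the physics goes in.

    # Qualitative loop: heater -> room -> sensor -> heater.
    # Quantitative layer: Newtonian cooling, all coefficients invented.
    T_amb, T_set, band = 5.0, 20.0, 0.5   # degrees C
    k_loss, k_heat, dt = 0.1, 2.0, 0.01   # cooling/heating rates, time step

    T, heater, trace = T_amb, False, []
    for _ in range(20000):
        # Bang-bang control with hysteresis around the setpoint.
        heater = T < (T_set + band) if heater else T < (T_set - band)
        T += (-k_loss * (T - T_amb) + (k_heat if heater else 0.0)) * dt
        trace.append(T)
    print(round(min(trace[-5000:]), 2), round(max(trace[-5000:]), 2))
    # settles into oscillation around 19.5-20.5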

---

The point I'm trying to express here is, in my view, there are three types of limits to reductionism and quantification:

- The Uncertainty Principle - the fundamental limit, around which you can't quantify some things. IANAPhysicist, but my feeling is, it's not a principled limit to quantification - it only reveals that we're trying to quantify measures that are ill-defined.

- Fundamental limits to computation - I'm thinking of the Halting Problem and Gödel's Incompleteness Theorems. We can't build quantitative metrics or reductive models for things that are uncomputable.

- Practical limits, aka "too hard to bother" - this is what I believe is behind 99% of common arguments against reductionism and examples of emergence. We look at systems as a whole because looking at the pieces is too much work. But whether it's too much for our working memory, or too much for all the computing power of our civilization, it's still not a fundamental, philosophical limit, and therefore not a philosophical argument against reductionism.

On that last point, if one can prove that a higher-granularity model would require more compute than the universe could provide over its lifetime, then I'll give it a solid shmaybe as a fundamental limit.

---

Addendum on systems science.

I have an interest in systems science, which I pursue to the extent my free time allows. I've studied the basics, done some toy modelling, and one thing I've learned so far is: the most insightful part of modelling a system is plugging numbers into it.

For example, see: https://insightmaker.com/insight/206860/Musings-on-a-HN-comm.... It was my attempt to push numbers through a system model I first described on HN. The working, executable model is on the left, for the conceptual one and original HN comment, scroll to the right. Extra commentary: https://news.ycombinator.com/item?id=26137264.

I now no longer trust models that aren't executable - it's too easy to create something that looks fine in the abstract, but is completely wrong. Making it run on real - quantified - data is the fastest way to discover the depths of one's ignorance, like I did in the example linked above.

(Well, to be honest, 80% of my ignorance was revealed by defining units of measurement for each sink and flow - so if you want a quick way to debullshit a systems model, I suggest starting with that.)
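To illustrate the units-first discipline, here's a minimal stock-and-flow step in Python (the model and all numbers are invented, and it's far cruder than what Insight Maker does):

    # A toy stock-and-flow step where every flow declares its units,
    # in the spirit of "define units for each sink and flow first".
    backlog, unit = 40.0, "tasks"
    flows = [
        ("arrivals",    +6.0, "tasks/week"),
        ("completions", -5.0, "tasks/week"),
    ]

    dt = 1.0  # weeks
    for _ in range(10):
        for name, rate, rate_unit in flows:
            # The debullshit step: a flow must carry <stock unit>/week.
            assert rate_unit == unit + "/week", f"{name}: bad units"
            backlog += rate * dt
    print(backlog, unit)  # 50.0 tasks after 10 weeks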


There is logically a position between determinism and dualism. See https://plato.stanford.edu/entries/anomalous-monism/


> That's how I view emergence too: there's no new behavior suddenly appearing when your system is complex enough, behavior that couldn't be predicted from looking at the pieces - it's just too much work to deal with pieces directly. The discontinuity we see doesn't exist in the real world - it's caused by the rungs of our ladder of abstraction.

Reductionism relies on dissecting the whole and viewing the parts individually. The fundamental distinction between something that can be reduced (let's call it a collection) and something that cannot (a system), is that when you dissect a system, a fundamental aspect of the system is lost. That isn't to say that said aspect has materialised of its own accord, but that it originates with the relationships between the parts. There is no seat of "car-ness" in a car, and if you were to try to understand a car by looking at the individual parts, you would not be able to unless you could intuit how those parts interact. The difference between an engineered system and a natural system is that in engineering we intentionally attempt to minimise the number of interrelations so that the object can be understood easily from an analytical perspective. In natural systems, the levels of interconnection are much greater; we cannot understand a society by examining each individual in isolation, we have to move up the ladder of abstraction to a level where coherent patterns can be identified. We have to look at the whole.

The thing that reductionism misses out on is non-linear causality. Feedback loops, inherently based in relationships between parts and not the parts themselves, give rise to higher-order behaviour, which brings us the concept of levels of abstraction. The classic idea of reductionism is that if we create the lowest-level model (at this point that would be quantum mechanics, but it could certainly go lower in future), then we can derive all the higher-order models from that. This may be theoretically possible (purely in the philosophical sense), but it's just not practically useful. The position of reductionism is that this is the only (or perhaps primary) valid way in which models can be constructed.

> The way I see it, causal models are almost completely abstract - they have only a coarse relationship with the real world. A self-correcting feedback loop can be analyzed in terms of its conceptual components, which are mathematical in nature - but you won't get from here to predicting the behavior of a real-world thermostat until you start plugging in physical models.

This is true of all models - "all models are wrong, but some are useful", "the map is not the territory", etc. Modelling reality inherently involves reducing it to the parts we're interested in, because a model of reality in its entirety would be the same size as reality itself, and thus unrepresentable (even if we could capture the total state of reality). A basic equation for Newtonian motion excludes things like drag, the variability of gravity, turbulence and so on. The closer we need to get to matching reality, the more factors we need to include until the model transitions from mathematical to a simulation, by sheer necessity. The amount of detail we include depends on what we're trying to do; models are purposive tools rather than descriptions of reality.

I think the name of reductionism does not help when discussing it; to reject reductionism is not to reject modelling, because we cannot operate in the world without models.

I agree that the argument is a philosophical one. The reductionist perspective is one of objectivism; science measures reality, and thus lower-order models are higher-resolution, and we can abstract away from a very-high-resolution map of reality by focusing on the details we care about. The emergentist perspective is constructivist; it states that our models are tools that we create in order to interact with our environment, but they are not reality itself, and thus you should use the model that is most useful for interacting with the environment, based on its predictive capability.

[edit in response to your addendum]

I totally agree that executable models (in essence, simulations) are 1000x better in basically every way than static models. I'm trying to work in this space myself in order to bring these ideas into software development. But I believe that plugging in the numbers is useful precisely because it highlights the qualitative aspects of the model: how different variables are causally related. The temporal (heh) aspect of the simulation highlights how the variables are bound, but the specific values, whether they be 1,000 or 10,000 units, are not the thing you're learning, unless those values happen to be a divergence point in the model.


Regarding quantifying everything - at a quantum mechanical level, all is not totally quantifiable, and that's fundamental. See https://en.wikipedia.org/wiki/Uncertainty_principle. I assume this is what the parent was referring to.


That is part of it, yes.


I think there was a big intuitive glimpse of this in dialectical materialism (though not in the Stalinist "diamat" flavour). "Levels" of reality emerge, each with its own emergent "laws"; there are intertwined mutual influences between levels; phase transitions are emphasized; relationships are more important than objects; objects are always "contradictory" (always divisible and in flux) and are really relationships in disguise; all the "categories" we make in our minds are transitional and imperfect; etc.


Hegel is considered to be in the intellectual heritage of systems theory, AFAIK.


I think systems theory can be directly traced to Bogdanov's "tektology", and he was surely aware of Hegel's works.


Although emergentism doesn't solve the hard problem.


The hard problem of consciousness?


Yes. I think the categorical quantities->qualities jump is still there, or at least unexplained.


> It's strange to me that materialism is still such a popular belief amongst people who consider themselves rational, because even the origin point of that belief (the atomistic view of physics) fell apart at the hands of quantum mechanics.

Indeed. Even the stationary solutions to Schrödinger's equation, which correspond to unchanging states, involve constantly rotating real and imaginary parts that balance each other out to keep the squared amplitude constant.

The idea of an object and a lego universe is really the west's biggest folly. I recommend this movie to anyone looking to shake up their perspective: http://www.digitalphysicsmovie.com/

https://en.wikipedia.org/wiki/Stationary_state
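A quick numerical picture of that rotation (with hbar = E = 1 assumed for convenience):

    import numpy as np

    # A stationary state psi(t) = psi0 * exp(-i E t / hbar).
    psi0 = 0.6 + 0.0j
    for t in np.linspace(0, 2 * np.pi, 5):
        psi = psi0 * np.exp(-1j * t)
        print(f"t={t:4.2f}  Re={psi.real:+.3f}  Im={psi.imag:+.3f}  "
              f"|psi|^2={abs(psi)**2:.3f}")
    # Re and Im rotate into each other; |psi|^2 stays fixed at 0.36 -- an
    # unchanging "object" maintained by an unceasing process.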


> It's strange to me that materialism is still such a popular belief amongst people who consider themselves rational, because even the origin point of that belief (the atomistic view of physics) fell apart at the hands of quantum mechanics.

Well put. This boggles my mind too. It also seems to me that it should be evident that ultimately, either we rely on infinite regression (essentially explaining nothing) or at a certain point we pick an ontological primitive that we don't explain. And that our choice should be informed by how much of the known world can be explained in terms of such a primitive. I can't fathom what would lead anyone to choose matter for that purpose, and at the same time it seems like the mainstream, unquestioned choice.


we don't even have the option of infinite regression, because we cannot know the total state at a quantum level - both because of uncertainty and because quantum mechanics violates locality.


It's been shown that memory/personality/self is stable with no brain activity [0]. Perhaps consciousness is a transient process, but memory is not and consciousness can be restarted from stable memory.

[0]: https://en.m.wikipedia.org/wiki/Deep_hypothermic_circulatory...


> A key principle of DHCA is total inactivation of the brain by cooling, as verified by "flatline" isoelectric EEG, also called electrocerebral silence (ECS). Instead of a continuous decrease in activity as the brain is cooled, electrical activity decreases in discontinuous steps. In the human brain, a type of reduced activity called burst suppression occurs at a mean temperature of 24 °C, and electrocerebral silence occurs at a mean temperature of 18 °C.[32] The achievement of measured electrocerebral silence has been called "a safe and reliable guide" for determining cooling required for individual patients,[33] and verification of electrocerebral silence is required prior to stopping blood circulation to begin a DHCA procedure.[34]

Thank you for the link. Is "measured electrocerebral silence" a measure of zero electrical current in the brain? Or just close to it?


> Electrocerebral inactivity (ECI), or electrocerebral silence (ECS), is defined as no cerebral activity over 2 µV using a montage that uses electrode pairs at least 10 cm apart with interelectrode impedances < 10,000 ohms and >100 ohms. https://www.medscape.com/answers/1140075-177597/how-is-elect...


Very interesting, this changes my understanding quite a bit. Do you know if these patients had any short term memory effects? If not, this would imply that the brain is JITing structure almost immediately from events... that's mindblowing.


I didn't claim that memory specifically is immaterial, but that the material is not the "fundamental thing" of monistic metaphysics.

Consciousness certainly isn't a transient process, as it is stable over time (until it isn't). It seems most reasonable to say that it's a self-sustaining process (autopoietic or self-referential) emerging from the body.


Given that we know of nothing that isn't material, I'm unsure of why it's reasonable to think material is not the fundamental thing of consciousness.


I'm not suggesting anything supernatural, but that energy is prime and that matter is a construction of energy.


That's not a surprising or controversial statement though - physics for the last hundred years (since Einstein) has increasingly moved towards accepting that matter and energy are for all intents and purposes the same thing. I don't think that changes the tenets of materialism, just refines them.


The term you're looking for is physicalism, of which materialism is one belief, the other being emergentism, which is its replacement.

There are a number of beliefs that rest upon the bedrock of materialism and persist today despite materialism being outdated. The prime example is reductionism, the idea that anything can be understood by looking at its parts - this is not true, and while the idea is as outdated as the materialist view, it unfortunately persists. Many other ideas, like quantification, rest on reductionism and are also becoming outmoded in academia, but not in the public perception of what counts as rational.


What is it that knows that which is material?


Firstly you're drawing an unwarranted distinction between physicalism and materialism. There is no consensus that there is a distinction, many philosophers consider them interchangeable, and there are competing accounts of possible distinctions.

Materialism is not a theory of physics, it's a theory of the relationship between physics and various philosophical questions. Asserting that it's somehow incompatible with quantum mechanics is completely unwarranted. Bear in mind the term Physicalism wasn't even introduced into philosophy until the 1930s, long after quantum mechanics had become firmly established, and nobody with any credibility in philosophy was seriously suggesting then that quantum mechanics disproved materialism.


> Firstly you're drawing an unwarranted distinction between physicalism and materialism.

Let's not argue the definition of words then. The distinction I'm drawing is whether said philosophy believes that higher-order models of reality at any given level can be derived from understanding the smallest parts. The "yes" side, I'm calling materialism, but you could also call reductivism or perhaps objectivism. The "no" side, I'll call emergentism or constructivism. Neither side implies a supernatural aspect to reality.

> Materialism is not a theory of physics, it's a theory of the relationship between physics and various philosophical questions. Asserting that it's somehow incompatible with quantum mechanics is just silly.

Only if you're relying on the equivalence of materialism and physicalism. Quantum mechanics violated many ideas considered to be ground truth at the time, including locality and deterministic certainty. I'm far from an expert in this area, but my understanding is that the thing I'm calling materialism depends on the idea of the universe as a lego-style composition, and that quantum mechanics violates this concept. But this view is also espoused by Fritjof Capra, a particle physicist. Perhaps somebody should tell him that it's silly!


All that quantum mechanics does is substitute wave functions for 'atomic' particles. A future theory might substitute something else: strings, branes, whatever. It doesn't matter; from a philosophical point of view it's just physics. It has no bearing on the relationship between physicalism or materialism and other competing philosophical theories. This is the sort of thing Deepak Chopra gets so badly wrong with his vague mumbo jumbo.

Hmm, I just looked up Capra. Obviously I've not read his book, but it looks like he talks a lot about 'the relatedness of all parts' - well, that's just kind of obvious. I note you talk about emergent phenomena. Sure, of course; that's all physicalists and materialists are saying. It's not like we've never thought of this stuff before. That's not something that disproves what we're saying, or something that we've not considered. It's the whole point.


You seem hung up on some other argument that I'm not making, perhaps related to spirituality or quack medicine. My argument is about epistemology and ontology, within the scope of physicalism. It's a real and ongoing debate within the philosophy of science.


Just casting around ad hominems and absurd comparisons doesn't make you right.


Rich, coming from someone who makes snide remarks about materialists who "consider themselves rational".


That wasn't meant to be snide, but there's clearly a bee in your bonnet so there's no point continuing this discussion.


> belief (the atomistic view of physics) fell apart

What is this, then?

https://www.scientificamerican.com/article/see-the-highest-r...


it's not that atoms don't exist - it's the claim that atoms are the smallest parts of reality, that they are indivisible. That's what atomism is.

https://en.wikipedia.org/wiki/Atomism


Actually, atoms may well be the smallest parts of reality that we can ever "see." Structureless particles such as electrons are "points" which, due to their infinitely small size, are impossible to see even in principle. (Incidentally, this is exactly what makes them true "atoms," i.e. indivisible, and they are always created and destroyed as a whole.)


This may be a bit naive, but I am curious: how do you then define energy? Does/can it exist independently of matter?

All we observe is its transformation, arising out of interacting matter.

I like the processes-with-feedback-loops view of matter.


Energy might not be the right word, as I believe energy is a subset of these concepts but not the totality. The right word is something like causal relationships. The idea is that stable loops create higher-order behaviour; this works at all levels of abstraction, whereas the "energy" model is generally only used to describe interaction at the atomic and subatomic levels.

This idea is taken primarily from cybernetics, but it also pops up in other places because it's a transdisciplinary concept. In fact, cybernetics and systems theory were specifically created out of observations of common patterns that appear across disciplines, and created by transdisciplinary teams of academics from both the humanities and the sciences/mathematics.


Many people professing to be materialists are actually physicalists. The claim is not that everything that matters is matter. Rather it's that everything that matters is physical state.


If you (not you personally, the general 'you') still believe that reductionism can explain all phenomena then even if you claim to be a physicalist, you are still stuck in materialist patterns of thinking.


Ah, I'm sorry I'm only seeing this answer now, but I find this interesting so I'll respond anyway.

I find "reductionism" to be a very vague and often unclear term. When used correctly (at least how I understand it), it is meant to denote phenomena where in principle, with infinite computational power, you still could reduce the total behaviour to a combination of underlying lower-level behaviours. But with irreducible phenomena, this way of understanding it is suboptimal and does not produce understanding in practice, because it is missing the forest for the trees and introduces a lot of information which can effectively be ignored. In this way, moving up a level, or doing a kind of basis shift, we can understand the phenomenon much more easily. In these situations, we say that reductionism does not hold.

But I've also seen people use the term in a mystic, almost magical way, where supposedly the upper phenomena cannot be connected with the lower underlying phenomena even in principle. They seem to see it as a kind of break in causality, so that the upper phenomena stand on their own, unrelated to the lower layer.

With the above in mind, why do you specifically think that reductionism determines someone specifically as a materialist and not a physicalist? Can you expand a bit more on that?


Materialist and emergentist are both types of physicalist, so a materialist is already a physicalist, just missing some of the pieces.

> But I've also seen people use the term in a mystic, almost magical way

It's really unfortunate that holism in general has attracted the likes of woo-peddlers and magical thinkers because it discredits the core idea. My perspective on reductionism (and by implication, materialism) versus holism is that reductionism tends to focus on the smallest pieces rather than appropriately scaling up and down the ladder of abstraction depending on what level you are operating on.

So for example, when looking at behaviour you want to deter (crime, environmental damage etc) a reductive approach would be to just attempt to counter the individual's behaviour with punishment (ie, making such behaviour illegal). The holistic view would be to examine not just the behaviour of the individual but the incentives and structures that drive said behaviour and try to fix that, because without removing the incentive the behaviour will not be truly removed.

I believe that reality is organised from the bottom up, so there's nothing being added at the higher levels that is apart from physical reality. But the thing that reductionism typically misses is that reality is made of material _and relationships_, and when you take the material apart into smaller parts, you break said relationships and lose much of the picture. The reductive approach to understanding reality is to do exactly that, to take things apart to understand them, but many things stop working when you remove the individual part from its environment, and thus in those scenarios you need to understand the whole picture. So often I see people trying to address a problem that manifests at point A (say, a particular component of a software system) by fixing it at point A, when the correct fix is actually elsewhere. It was running into this pattern over and over in my software engineering work that made me discover systems theory in the first place, but I see the same mistake all over the place.


Thank you for taking the time to explain! I find this sentence particularly illuminating:

> But the thing that reductionism typically misses is that reality is made of material _and relationships_, and when you take the material apart into smaller parts, you break said relationships and lose much of the picture.

I fully agree with this position. This is actually what I attempted to explain (but not very well) with my "reductionism understood right" take above.


I enjoyed reading your other comments maybe out of implicit bias, and this is tangential maybe, but just the same I'm curious about what you make of Nima Arkani-Hamed's recent work—namely amplituhedrons—in this larger context.


it's way beyond me; my grasp of quantum mechanics is at a slightly-beyond-pop level. My road into the systems memeplex was from trying to understand my own intuitions about software engineering and wicked problems in the social space. From there it's been mostly philosophy with a little bit of maths.


(I hate to be that guy but a flywheel is an energy storage device, not a positive feedback loop.)


> What I'm talking about is a much more active imaginative stance, the kind of stance taken by people like Bret Victor or Satoshi or Picasso

Is this entire essay basically just a physicist describing chemistry without understanding that chemistry is already there and has been for hundreds of years?


I think the point of this article is (to start with chemistry and then) to go far beyond.


it doesn't sound like the author really has a concept of how 'far beyond' chemists go as a matter of course. The study, classification, exploration and exploitation of emergent phenomena is built into the way chemists do things.



