A Jewel at the Heart of Quantum Physics (simonsfoundation.org)
285 points by milkshakes on Sept 18, 2013 | 91 comments



Is anyone here familiar with this work? I would like to hear more about it. At a minimum, this article is very much better than the usual science blog filler—it contains signs of a genuine conceptual breakthrough. For example:

“You can easily do, on paper, computations that were infeasible even with a computer before.”

That doesn't happen very often! Or this:

[T]he new geometric approach to particle interactions removes locality and unitarity from its starting assumptions. The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry. The usual picture of space and time, and particles moving around in them, is a construct.

That is exactly the kind of thing that happens when one model is replaced with a deeper one.


I've been keenly following this area of research and have heard some of the talks and read some papers. I concur with the sketch given by @gaze.

1. The notion of unitarity is a "common sense" rule that implies that you can consistently assign probabilities to possible outcomes in a quantum mechanical experiment (http://en.wikipedia.org/wiki/Unitarity_%28physics%29)

2. The notion of locality states that what happens on Jupiter better not affect how fast your code compiles this morning (or how experiments run on earth). This is an important postulate, because if this weren't true, we might as well give up on doing science as there would be arbitrarily many external influences we couldn't take into account.

All our experiments so far are perfectly consistent with these two principles. Quantum field theory is the brainchild of the marriage of unitarity (from quantum mechanics) and the idea of locality (in field theory). Our world view today ("Standard model of particle physics") is based on this.
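To put unitarity in symbols (my own shorthand in LaTeX notation, not from the article): if an interaction evolves a state \psi to U\psi, unitarity is the statement that

    U^\dagger U = \mathbf{1} \quad\Rightarrow\quad \langle\psi|U^\dagger U|\psi\rangle = \langle\psi|\psi\rangle = 1,

i.e. the probabilities of all possible outcomes still sum to one afterwards, whatever the interaction was.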

In practice, calculating answers using this theory is painfully difficult. You will add up thousands of pages' worth of algebraic terms, and then your answer will 'miraculously' reduce to a few terms. This and other observations have inspired researchers to search for underlying structure... leading to what is now being called the 'amplitude revolution' by some people.


> 2. The notion of locality states that what happens on Jupiter better not affect how fast your code compiles this morning (or how experiments run on earth). This is an important postulate, because if this weren't true, we might as well give up on doing science as there would be arbitrarily many external influences we couldn't take into account.

Unfortunately for science, the principle of locality has been violated multiple times with experiments involving quantum entanglement.[1]

I think you are confusing micro-causality with Einsteinian locality[2], the principle that influences propagate at most as fast as light. Micro-causality is the locality that's associated with quantum field theory and what was discussed in this article. Many theories in the history of science have been proven wrong, and if one of these is too, it will not lead to the end of science as we know it.

[1] http://arstechnica.com/science/2007/04/new-experiments-with-...

[2] http://www.rzuser.uni-heidelberg.de/~as3/nonlocality.html


I'd bet my arm that in the Many-Worlds Interpretation of quantum mechanics, those apparent non-localities are not, in fact, violations of locality.

Here is a classical analogy. Put a blue ball and a red ball in a black box. Shake the box. Without looking, take a ball from the box and put it in a black safe. That way, you have no idea which ball is in the safe.

Now send the black box to your colleagues on Mars, along with a copy of your experimental protocol. Let them open the box. And, lo and behold, the instant they see the ball, they know which colour your ball is, despite the fact you're separated by several light-minutes. As if information travelled faster than light —no, scrap that, instantaneously.

Quite underwhelming, isn't it? Well, the EPR thought (and actual) experiments work on similar principles: there is a common cause. When 2 particles are entangled, it means they share information. When you send those particles away in opposite directions, they carry this information with them, at slower-than-light speed. Then we perform some experiment on one of the two particles, and we know instantly how some other experiment is likely to turn out with the other particle.
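Here is a toy simulation of that common-cause story (a minimal sketch in Python, just to make the point concrete; the function names are mine):

    import random

    def prepare_box():
        """Shake the box: the two balls get a random but perfectly correlated assignment."""
        balls = ["red", "blue"]
        random.shuffle(balls)
        safe_ball, mars_ball = balls   # the shared information, fixed at preparation time
        return safe_ball, mars_ball

    # The Mars observer's reading always lets them infer the safe's colour
    # "instantly", yet nothing travels anywhere at the moment of observation;
    # the correlation was baked in when the box was prepared.
    for _ in range(5):
        safe_ball, mars_ball = prepare_box()
        inferred = "blue" if mars_ball == "red" else "red"
        assert inferred == safe_ball
        print(f"Mars sees {mars_ball}, so the safe must hold {inferred}")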

It's that mundane. It only gets confusing when you talk of "probabilities" instead of the squared magnitudes of complex amplitudes… which are a bit different from actual probabilities. In the Copenhagen interpretation of quantum mechanics, we tend to think of the universe as unique, and the laws of physics as having some fundamental randomness in them. The Many-Worlds Interpretation is simpler: no randomness, just a universe that splits into multiple blobs that eventually cease to interact with each other (the split is not instantaneous: it propagates at the speed of light at most).

So, when you perform that experiment with the first particle, you're not telling the second one how to behave. You merely learn which universe you are living in.

---

Now, find a reliably replicated experiment where information was transmitted faster than light, and I will be choking through sheer astonishment.


The way I see it, locality in every Lorentz frame = causality.

What Bell experiments disprove is local realism. In the conventional formulation of quantum physics (quantum mechanics, quantum field theory, etc.) it's realism that takes the fall. Locality is still crucial in the conventional perspective.


Realism looks like such a misnomer. Or does it mean something deeper than the existence of a hidden variable?


Are the terms they reduce to constant, i.e. are they the same variables like spin or charge, no matter the problem? I'm not a physicist - not even close - but I do love a good mathematically general abstraction across a problem space.


Physicist here: This has a higher probability of being important than your usual science blog material. The Simons Foundation is a serious entity, backed with serious funds.

Arkani-Hamed had some compelling things to say recently at a talk and colloquium at our university; looking forward to learning more.


backed with serious funds

As I suspected, the Simons Foundation is backed by James Harris "Jim" Simons, mathematician and the most successful hedge fund manager of all time (Google Medallion Fund and Renaissance Technologies for more info). Cool.


This isn't my field (a.k.a. I'm talking out of my ass), but I can give it my best shot. Usually with QFT you start with locality and unitarity as a sort of starting point. You might hope that you could come up with something much simpler where locality and unitarity come about naturally from the model itself (the goal of physics being to show that "it had to be this way".)

QFT tells us how the world works by calculating scattering amplitudes. You throw n things in, and m things come out with some momentum, some spin, some color, etc. The way we started doing this was with Feynman diagrams... these diagrams tend to look like you have some particles banging off of some other particles, and they behave a little that way, but they're actually a notation for integrals in a series that you have to sum over. Hence the genius of the notation... they describe math while looking like something physically relevant. However, they're a little weird in that if you interpreted the diagram literally, you'd find that some of the diagrams involved in an interaction actually aren't physical! One might say that the degree of nonphysicality suppresses that diagram's contribution. Physicists describe these diagrams as "off-mass-shell."

If you wish to describe an interaction, you figure out all the diagrams relevant to the interaction, calculate them and sum them. When the article says "tons of pages of calculations", what they mean is you need to sum over a LOT of Feynman diagrams to get an answer.
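(Aside, since the jargon matters here: "on the mass shell" just means a particle's energy and momentum obey the usual relativistic relation; in units with c = 1,

    p^\mu p_\mu = E^2 - |\vec{p}|^2 = m^2

Internal lines of a Feynman diagram are allowed to violate this, which is the sense in which they "aren't physical".)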

Now, there's some newish kinda diagram that's gone into use (I think maybe only for N=4 SYM?), which is also used to calculate interaction cross-sections. However, they're written in such a way that they always describe on-shell processes, hence they're called on-shell diagrams. I think Zvi Bern had something to do with this. These diagrams have an underlying structure... and mathematicians have started writing similar diagrams recently (as in, they ran into a similar structure recently), but instead of the diagrams describing interactions, they're used to describe some structure called a positive Grassmannian. The positive Grassmannian in low dimensions relates to convex polygons... it's a simple thing. This guy had this interesting insight that an interaction cross-section corresponds to some sort of volume... and in this case, it corresponds to the volume of this polytope described by the positive Grassmannian... or something like that.

The positive Grassmannian says nothing at all about unitarity or locality, nothing about space or time, but it lets you calculate shit. That's very, very cool.


Do you know of a good review paper?



This is very useful, thank you.


This isn't really my specialty (my field is string theory), but it's close enough "next door" that our conferences tend to have speakers on this topic on a regular basis. (There's a lot of overlap in individual researchers, too.)

I'm not going to try to one-up the article's non-expert description, but it's worth emphasizing that this isn't a "bold new idea of the week". (Rare among science journalism, I know.) This is a longstanding research program with a substantial number of contributors that's been making clear, gradual progress for a decade or two.


"it contains signs of a genuine conceptual breakthrough."

It contains claims of such signs, which seems to be the point of this fluff piece.

Alarm bells should go off when you read statements like this:

"...giving up space and time as fundamental constituents of nature and figuring out how the Big Bang and cosmological evolution of the universe arose out of pure geometry."

Ah yes, well that solves the mystery, doesn't it. The universe arose out of pure geometry.

A more concrete sign of the limitations of this work can be seen in the paragraph prior to the above quote:

"Physicists must also prove that the new geometric formulation applies to the exact particles that are known to exist in the universe, rather than to the idealized quantum field theory they used to develop it, called maximally supersymmetric Yang-Mills theory."

The problem is that maximally supersymmetric Yang-Mills theory conflicts with experiments (see e.g. http://news.discovery.com/space/lhc-discovery-maims-supersym...). This is why they hurry to qualify this point by saying:

"This model, which includes a 'superpartner' particle for every known particle and treats space-time as flat, 'just happens to be the simplest test case for these new tools,' Bourjaily said. 'The way to generalize these new tools to [other] theories is understood.'"

Well that's good, because they're going to need that generalization. Now all they need is to actually produce it.

This is the sort of thing that leads these gee-whiz models to end up on the ash heap of history.

Don't get me wrong, it would be fantastic if the amplituhedron revolutionizes quantum theory. But based on the red flags in this article, I wouldn't hold my breath.


> Ah yes, well that solves the mystery, doesn't it. The universe arose out of pure geometry.

That was a serious hypothesis before this. Heard about Tegmark's level IV multiverse? It's the idea that every mathematical structure just "exists", like "poof magic", and the simpler ones would, like, have greater weight. Kind of a literal interpretation of Occam's Razor, really. A corollary is, the simpler the laws of physics are, the more probable the above hypothesis is. And suddenly we learn that the true laws of physics might be much simpler than we anticipated? This is huge.

Or not.

The idea of a timeless universe, where our subjective notion of time just arises from its structure, has been around for quite some… time.

Likewise, Occam's Razor itself suggests that the true laws of physics are simpler than we think. Plus, current human laws of physics are either false or incomplete. I fully expect future physics to be further simplified. Claims of massive simplification are therefore not that surprising.

---

That said, I agree with your specific objections (the "work in progress" warnings).


> That was a serious hypothesis before this. Heard about Tegmark's level IV multiverse?

Of course, I love Tegmark's speculative work. But that's all it is, speculative. It's like reading science fiction, one shouldn't confuse it with reality, at least not without evidence to justify that.

> It's the idea that every mathematical structure just 'exists', like 'poof magic', and the simpler ones would, like, have greater weight.

There's no evidence that this is how reality works, and there's no particular reason to think that it should work this way.

> A corollary is, the simpler the laws of physics are, the more probable the above hypothesis is. And suddenly we learn that the true laws of physics might be much simpler than we anticipated? This is huge. Or not.

All current evidence is that it's a fantasy, one which happens to be so attractive to some people that they're willing to suspend critical thinking to favor it, much as people do with belief in supernatural beings.

There are certainly cases where basic mathematical principles are expressed in a very direct way in the universe. Probability in quantum mechanics is one example, Noether's Theorem is another. But even in these cases, saying that some property of the universe arises from pure mathematics is misleading.

To boil down the objection, mathematics is a way to describe and model these phenomena, and shouldn't be confused with the phenomena themselves. Doing so is a map/territory style confusion.

If you want to say that the universe arises from pure mathematics, you then have to explain what sort of phenomenon pure mathematics is, that it is capable of producing such effects. To use my analogy, it would be a bit like saying "The topography of Earth arises from a pure map."

> The idea of a timeless universe, where our subjective notion of time just arises from its structure, has been around for quite some… time.

That's a little different from the idea that reality is an incarnation of pure mathematics. In any case, the fact that ideas have been around for some time has little bearing on their validity. Consider astrology.

> Likewise, Occam's Razor itself suggests that the true laws of physics are simpler than we think.

It does not actually suggest that in general. It can mean that in cases where an explanation contains unnecessary elements, but it says nothing about explanations necessarily being simple, or simpler than they already are.

> Plus, current human laws of physics are either false or incomplete.

I prefer to think in terms of models than laws, and models are always incomplete. The only complete model is the thing being modeled itself.

> I fully expect future physics to be further simplified. Claims of massive simplification are therefore not that surprising.

By your premise, actual simplification would not be surprising, but claims have no necessary correlation to actual simplifications.

Sorry to be picky!


> There's no evidence that this is how reality works,

Agree, to the extent we're talking rather direct evidence. It's like trying to distinguish between "poof magic" and "God did it". Quite impossible.

> and there's no particular reason to think that it should work this way.

Disagree. It's the simplest hypothesis to date that I know of. Therefore, I assign at least a non trivial probability to it.

> All current evidence is that it's a fantasy

You mean, current evidence speaks against the level IV multiverse hypothesis? Or something else? Anyway, please name three.

---

> [The idea of a timeless universe is] a little different from the idea that reality is an incarnation of pure mathematics.

As far as I know, we have tried to express the laws of physics with pure mathematics since Newton, if not earlier. I think it at least indicates a hope that reality may be accurately described by pure math. Provided we can, good luck trying to distinguish that from "being an incarnation of math". No way we can test it from within.

> In any case, the fact that ideas have been around for some time has little bearing on their validity.

Of course. I was merely pointing out that this idea was more ordinary than you made it out to be. To me, it is not sensationalist at all. It's something I more or less independently thought about in my teen years.

---

I maintain that Occam's Razor always suggests that the true explanation is simpler than the one we currently have. We're not logically omniscient. For any sufficiently complex explanation, there is always this nagging doubt that we missed something. Since, by Occam's Razor, the simpler explanation is the best explanation, the mere possibility of the existence of a simpler explanation is enough to suggest we're not there yet.

But it's no more than that, a suggestion. With enough double and triple checking, computer-verified proofs… we can be rather sure we did find the simplest explanation.

In the case of our current understanding of physics, we know for sure that we are missing something. In my opinion, that makes Occam's Razor's suggestion all the stronger.

---

> By your premise, actual simplification would not be surprising, but claims have no necessary correlation to actual simplifications.

I was just saying that when I hear someone claiming something that I don't find very surprising, I generally take that as serious evidence that the claim is true. I may not believe them, but they would at least get my attention.

---

> There are certainly cases where basic mathematical principles are expressed in a very direct way in the universe. Probability in quantum mechanics is one example,

Nope, not this one. :-) Quantum mechanics has to do with complex amplitudes, whose squared magnitudes determine the Born statistics. Plus, the wave function as we know it is deterministic. The relation to probability theory is tenuous at best. It serves more to hand-wave your way to the Copenhagen Interpretation, instead of biting the bullet and positing a collapse theory.
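(For concreteness, the Born rule in symbols, standard textbook form:

    P(x) = |\psi(x)|^2, \qquad \int |\psi(x)|^2 \, dx = 1

The probabilities only appear after taking the squared modulus; the phases of the complex amplitudes, which carry the interference, have no counterpart in ordinary probability theory.)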


> It's the simplest hypothesis to date that I know of. Therefore, I assign at least a non trivial probability to it.

I don't think it's simple at all. What is the role of "mathematics" in the conjecture - what is the word supposed to mean, and what about this new definition of mathematics results in the creation of physical realities? It seems unrelated to the discipline I know of by that name, which at its most general, is an approach to defining, analyzing, and using formal models. Essentially Tegmark is saying "all formal models (of a universe) must have a concrete realization", but why, and what is the point of introducing formal models into the picture - what role do they play?

It comes down to this comment of yours:

> we have tried to express the laws of physics with pure mathematics since Newton, if not earlier. I think it at least indicates a hope that reality may be accurately described by pure math. Provided we can, good luck trying to distinguish that from "being an incarnation of math". No way we can test it from within.

To test a claim, the claim first has to be stated coherently. Mathematics is an approach we use to describe and model things, including the universe. It simply isn't some sort of creative force existing independently of the minds that can contemplate it. So when someone says "the Big Bang and cosmological evolution of the universe arose out of pure geometry", they are speaking incoherent nonsense, because "pure geometry" is simply not the sort of entity that can produce such an effect.

For this to make sense, someone would have to describe the nature of this creative force that they're calling pure geometry, and then the only connection to what we normally call geometry is that ordinary geometry would be a way of describing the effects of that creative force.

Geometry or mathematics are approaches to modeling, and are neither physical phenomena themselves nor the cause of those phenomena. As soon as someone claims that "it" is the cause of phenomena, they have committed an equivocation fallacy and begun talking about some other "it" which they haven't defined.

> You mean, current evidence speaks again the level IV multiverse hypothesis? Or something else? Anyway, please name three.

I was responding more to your characterization than to Tegmark's actual definition: if "every mathematical structure just exists" and "the simpler ones ... have greater weight", then it should have observable consequences, but we don't observe such consequences. For example, we might expect things to be constructed from pure Platonic solids, we might expect the subatomic realm to be less fuzzy, etc.

> With enough double and triple checking, computer-verified proofs… we can be rather sure we did find the simplest explanation.

Actually I don't agree with that. Finding simplicity does not always lend itself to a formal process. But I'm saying that although Occam's Razor can remind us that a better explanation could be simpler than the one we currently have, it does not tell us that this is the case.

> I was just saying that when I hear someone claiming something that I don't find very surprising, I generally take that as serious evidence that the claim is true.

We differ on that. The fact that something is not surprising is not evidence. Many claims that are not surprising turn out to be false. To be more likely to believe something that seems unsurprising to you implies that you're choosing beliefs based on personal bias.

> Quantum mechanics has to do with complex amplitudes, whose square determine the Born statistics. Plus, the wave function as we know it is deterministic.

None of this contradicts what I was saying. Those amplitudes are complex in order to deal with superposition, and once that is taken into account, probability densities in QM are described perfectly by probability theory, so I don't know in what sense you mean that the relation is "tenuous". This has very little to do with one's position on interpretations; it's there in the math whether you like it or not.


(If you want to continue this conversation offline, you can reach me by e-mail —see my profile and my website.)

Okaay.

By "simplicity", I mean something like the inverse of Kolmogorov complexity. I know it's not very well defined, but given a Turing complete language, there is a proof that a given program is the shortest of its equivalence class —if it is.

I said "simplest", not "easy to grasp for a human brain", or even "simple". It's just that "poof magic we have simple mathematical rules on which the universe runs" is a simpler hypothesis than anything else I have heard from (such as the God Hypothesis). There's also a certain… elegance in positing that every set of mathematical rules are actualized. That way, we don't have to pick a particular rule, making the master program even simpler.

On Quantum Mechanics, okay, I guess you're right. Just remember that while amplitudes are out there in the world, probabilities are in the mind.

Do you think it plausible that we could, in principle, find a mathematical model that perfectly describes the universe? To me, the answer is obviously "yes", even though I'm not certain that we could find this model in practice. Now, assuming our universe does run on math, it is quite impossible to test for different kinds of ontological existence. Do we live in a simulation? If the simulation isn't buggy, we don't stand a chance at root escalation, and we can't tell. Do the mathematical rules exist, like "poof magic", or do they need some exterior force or entity to be actualized? Again, we can't tell, because there's just no way out of our universe.

> For example, we might expect things to be constructed from pure Platonic solids, we might expect the subatomic realm to be less fuzzy, etc.

Maybe not. The actual laws of physics may be even simpler than that, despite the complexity that arises from them. We'll see when we find them. Occam's Razor doesn't favour surface simplicity, it favours simplicity at the deepest level.

> The fact that something is not surprising is not evidence.

Indeed. It is prior information, which is just as important as evidence. Without prior information, you don't stand a chance at interpreting evidence. Data can't speak for itself.

> To be more likely to believe something that seems unsurprising to you implies that you're choosing beliefs based on personal bias.

Or, it could mean that I tailor my surprise to my actual probability of the thing being true, based on the information I have. Heck, I have emotions, I might as well use them. By the way, didn't you notice that you're not surprised all the time? That emotion isn't as irrational as a Straw Vulcan would believe.

I'm not sure what "personal bias" you speak of, but there's no escaping the fact that different people have access to different prior information. They will inevitably make different probability estimates, even if they are perfect Bayesians.

You should read Probability Theory: The Logic of Science. It's an excellent book.


> poof magic we have simple mathematical rules on which the universe runs

I'm not arguing against the idea that the universe might be described by simple mathematical rules - after all, we have a fair amount of evidence that it can be, for some value of "simple". But it's an enormous unsupported jump from there to the idea that the mere possibility of such rules somehow gives rise to a universe that follows them, ex nihilo, or variations on that idea.

Generally, such handwaving is not accompanied by much serious exposition. Even Tegmark's writing on the subject doesn't get into it in enough depth to seriously evaluate. It's an amusing conjecture, but I'm not aware of anyone having developed it beyond that point.

It's also strangely reminiscent of other attempts to delegate the creation of the universe to a mysterious unexplained force: is "mathematics did it" really any different than "a god did it" as an explanation? Neither are actually explanations, they just give the superficial appearance of explanation via a sleight of hand in which the entity in question is implicitly assumed to somehow have the necessary wherewithal to do the job it's accused of. To me, "mathematics did it" is about as good an explanation as "the Great Penguin did it".

> Now, assuming our universe does run on math, it is quite impossible to test for different kinds of ontological existence. Do we live in a simulation? If the simulation isn't buggy, we don't stand a chance at root escalation, and we can't tell. Do the mathematical rules exist, like "poof magic", or do they need some exterior force or entity to be actualized? Again, we can't tell, because there's just no way out of our universe.

I agree with this mostly, although again I'll point out that the entire idea of seeing our models of the universe as being somehow responsible for its creation runs the risk of being a huge category error, so I don't take it for granted that the universe actualizes mathematical rules in that sense. The rules we like to model may simply be an occasionally emergent property of a chaotic physical substrate, and we find ourselves in a predictable corner of some randomly organized multiverse by virtue of the anthropic principle.

But to your larger point that it's likely to be impossible to test the ontological status of mathematical rules, that's a major part of why strong claims in this area seem incoherent to me.

> Occam's razor doesn't favour surface simplicity, it favours simplicity at the deepest level.

Occam's Razor is a simple heuristic that has no such bias. The only simplicity it favors is the removal of redundant aspects of a model, and redundancy can exist anywhere, whether superficial or deep. Further, "redundant" can be relative to one's purposes. Many physical models are deliberate simplifications of the phenomena being modeled, e.g. gases, fluid flow. This often seems to be forgotten when people start to confuse mathematical models of the universe with the universe itself.

> By the way, didn't you notice that you're not surprised all the time? That emotion isn't as irrational as a Straw Vulcan would believe.

I distinguish between everyday claims, like someone telling me they went to a movie, and claims about new discoveries about the universe. You don't need to revise your theories about the universe to provisionally accept the claim that someone went to a movie, for example.

But a claim that entails revision of a theory needs to be treated differently, and in that context, being "unsurprising" is not really particularly relevant - such claims can and should be evaluated on the basis of whether they are supported sufficiently strongly.

> Heck, I have emotions, I might as well use them.

I disagree with using them to justify a conclusion about the validity of a scientific claim. Certainly many people seem to operate on this basis, but it leads to a great deal of irrational behavior.

> You should read Probability Theory: The Logic of Science. It's an excellent book.

Thanks, I'll check it out.


On "poof magic": I agree, with a tiny minor reservation: when we say "mathematics did it", we stand a chance at speculating how.

---

> […] the entire idea of seeing our models of the universe as being somehow responsible for its creation runs the risk of being a huge category error […]

Oh yes. When you think of it, actual models are encoded in brains, paper, computers… Which aren't exactly responsible for the existence of the whole universe. So,

> I don't take it for granted that the universe actualizes mathematical rules in that sense.

Neither do I. I just give it enough credence to put it on a shelf, and look at it again once we know more.

---

> I distinguish between everyday claims, like someone telling me they went to a movie, and claims about new discoveries about the universe.

So do I. This is not a new discovery about our universe, however, not yet. This is a new discovery of a mathematical simplification. I wouldn't be surprised if this one yields no easily testable prediction, much like the Many Worlds Interpretation.

Anyway, I have suspected for some time now that the fundamental laws of physics are simpler than they look. I expected someone to eventually find simpler models. So, when some people claim they did, they at least get my attention. If you did not have the same expectation to begin with, then of course you would reach a different conclusion from seeing the claim.

That said, I do agree that

> such claims can and should be evaluated on the basis of whether they are supported sufficiently strongly.

---

> I disagree with using them to justify a conclusion about the validity of a scientific claim.

No no no, that's not what I was trying to convey. Actually, that's about exactly the reverse. First, I try to have correct beliefs about the world. Then I try and tailor my sense of surprise to those beliefs. That way, when I make an observation (such as reading about a claim), I can use my surprise (or lack thereof) as a hint (no more) about the credibility of this new information.

I won't try to use my sense of surprise to justify anything to anyone. It's only a descriptor of my own beliefs. It's a valid argument only to the extent you trust my beliefs. Which would be foolish: I'm just a random guy on the other side of the internet.


Things that work on simple models motivate work on more complex models. He STARTS OUT by saying N=4 SYM is a toy model. The Klein-Gordon equation is unphysical but motivated a lot of good physics.


In condensed-matter physics, toy models play with you!

http://arxiv.org/abs/cond-mat/0007254


I'm not questioning whether the work has value, but the article is hyping it way beyond justification.


What you're saying makes it sound comparable to a compelling software demo that only works on simplified test cases.


Sort of, except in that analogy people like Dijkstra and Knuth are saying "this is a big deal, let's take a closer look".


What are your qualifications? No offense meant, but I am trying to see if this is real or not.

Quite a few smart people seem excited by it, yet you dismiss it.


I think the point here is don't hold your breath.

Science like this has to be slow and careful. Sudden magical breakthroughs are vanishingly rare, and this probably isn't one; but it is interesting, and worth getting excited about.

... just don't go throwing out your Standard Model quite yet. :)


> your Standard Model

If you don't understand your standard model, it's merely a standard belief.

If you understand it, it can be invalidated and thrown away by the 'magical breakthrough' if it provides evidence.

These things shouldn't be treated as a question of political buy-in to the model.


"Quite a few smart people seem excited by it"

Why aren't you asking them for their qualifications, then? Serious question, think about what your choice implies. Is it that "you want to believe", perhaps?

You don't need a PhD in quantum physics to detect the hype in this article. The comment I responded to was reacting more to the unsupported claims and hype than any actual discovery.

"yet you dismiss it"

I'm not dismissing that the work could have value, but the article makes claims that go far beyond anything that's been demonstrated.


In other words you just spent a dozen or so paragraphs venting your bullshit detector in a space where numerous contributors that are much, much smarter than you have indicated otherwise.

It's like inverse bike shedding.


Perhaps you missed the part of my very short comment where I wrote "I'm not dismissing that the work could have value, but the article makes claims that go far beyond anything that's been demonstrated."

You might want to re-read my original comment and take note of the issues I commented on. If you disagree with what I'm saying, why not respond on those points? I can't help wonder if your problem is actually that I'm interfering with your "I want to believe" circuitry.

I like to distinguish between actual science and mathematics and nonsensical metaphysical claims like "...giving up space and time as fundamental constituents of nature and figuring out how the Big Bang and cosmological evolution of the universe arose out of pure geometry." YMMV.


There's reasoned discussion of possible philosophical implications of the recent findings and then there's your post.


At https://news.ycombinator.com/item?id=6406551 I'm having a reasoned discussion with someone who responded more substantively to what I wrote. Your petty sniping is not reasoned discussion.


Why aren't you asking them for their qualifications, then?

Don't need to, checked them as they posted their names and all their research is online. http://arxiv.org/abs/1212.5605

They also posted the math behind it (which went over my head) unlike you. Next time, if you want to be taken seriously explain yourself better and try to be specific. If you know what you are talking about, of course.


You're comparing me to the authors of the work? My misunderstanding then, I thought you were comparing my take to that of other commenters.

"try to be specific"

I did post specifics, including a link to an example of the issues with supersymmetry as a physical model.

"explain yourself better"

If you can identify what you didn't understand, I'd be happy to explain it further. I stand by my comment - it doesn't contain anything non-factual that can't be supported with references, although when it comes to silly metaphysical claims about the universe arising from pure geometry, those references are going to be to philosophy texts, not science texts. Which hints at the problem with the claim in the first place.


That is exactly the kind of thing that happens when one model is replaced with a deeper one.

Relevant quote: "Point of view is worth 80 IQ points." (Alan Kay)


From what I get, they managed to vastly simplify calculations in super-symmetric Yang-Mills theory by representing results as volumes of a geometric object.

Super-symmetric Yang-Mills theory has not been shown to represent reality (yet?), but they hope they can use a similar method to simplify calculations in the quantum field theories that are currently believed to represent reality.


It's worse than that: N=4 SYM is known not to represent reality. However, it has some features that make calculations particularly easy to do. The analogous calculations are doable for other theories, but are trickier. So they've done the simplest example with this new framework.


Discussion by people who are more likely to know what's going on:

http://physics.stackexchange.com/questions/77730/what-is-the...


Every once in a while I get the feeling that the greatest discoveries are still ahead and these next few decades might be the real "golden age of science". The "individual genius" phase is just behind us and the "collaborative brilliance" phase is just beginning.


It is possible that gravity as we know it is an illusion. For example, the Newtonian take on gravity was that it was a force. That, according to Einstein, was illusory: gravity is really a geometric manifestation of the curvature of space-time. On that view, gravity as we know it is different from gravity as it was once known. However, the Newtonian view that gravity is a force can, and still does, facilitate useful everyday computations.

The go-to illusion when we think of illusions is the mirage: you see what looks like water, complete with the wavy appearance of water, and if you had no way of getting closer to inspect it, it might as well be the real thing. Depending on your philosophical take, either it is always an illusion, or it becomes an illusion when you find out that it is not water.

A note of agreement with what seems like your main query, we might as well drop the word illusion when it comes to the physical description of the universe. Any framework of description is full of illusions that may never be proven otherwise.

I took QFT some years back and it also went over my head. There is still that mental gap where the only way you can think about time is via the time-(in)dependent wave function.


"Illusion" is a misleading way to put it.

We experience gravity as a force - we can measure it with devices designed to measure forces. It's not an illusion. However, the model of gravity as a force between massive objects does not capture certain aspects of gravity - it's an incomplete model.

That doesn't actually mean that gravity is "really a geometric manifestation due to the curvature of space-time". Gravity appears to behave that way, but what do you mean by "really" in that sentence? What we can say with confidence is that the spacetime curvature model is a more precise and complete model of gravity than the mass/force model.

Perhaps if we figure out how to unify quantum phenomena with gravitation, we'll find a different model for gravity in that context. Will you then say that gravity wasn't "really" spacetime curvature? That wouldn't make sense, because all that will have changed is that we would have another model for interpreting and understanding gravity.

When people say gravity is an "illusion" or that forces are "fictitious", what they're really getting at is that these observed phenomena are not fundamental - that they're consequences of some underlying phenomena. But they're real consequences, not illusory or fictitious.

(Of course "fictitious" is a technical term used in physics, but it doesn't mean "does not exist in reality" but rather means something more like "is not fundamental".)


Perhaps another analogy for an illusion of a force would be centrifugal force?


>I took QFT some years back and it also went over my head. There is still that mental gap where the only way you can think about time is via the time-(in)dependent wave function.

Total bullshit. The wave function is time dependent.


Given psi(x, t), if it separates into a spatial part and a time part, then you can have a time-independent Schrödinger equation with a time-independent wave function, i.e. psi(x).

However, you do not need to start with a time-dependent wave function to justify the time-independent wave function.
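Concretely, the standard separation-of-variables step (textbook form, assuming the potential does not depend on time): write

    \Psi(x,t) = \psi(x)\, e^{-iEt/\hbar}

and plug it into i\hbar\,\partial_t \Psi = \hat{H}\Psi; the time factor drops out and the spatial part satisfies the time-independent equation \hat{H}\psi(x) = E\,\psi(x) on its own.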


There are multiple ways of writing any equation: http://en.wikipedia.org/wiki/Time-independent_Schr%C3%B6ding...


Why do you think there will be or should be a golden age, rather than continuous progress forever?

http://beginningofinfinity.com


I always think of "golden ages" as arising naturally from a random distribution of "big breakthroughs" and similar. It's not unlikely that you'd get some clumps of many breakthroughs followed by time with very few breakthroughs. And we call these clumps "golden ages." And of course, the fact that they're not independent events, but that one breakthrough might precipitate another, just reinforces this.

So I don't think "golden ages" are going away any time soon.


To expand on what Osmium said, a "golden age" is more like a "goldrush age": A big breakthrough opens up a new conceptual frontier, and the golden age is scientists rushing in to homestead and start mining the new landscape.

This isn't to say that all progress is done in goldrushes, but the most basic and, therefore, best-known work of a given field will be done in that period.


Yes

There's no continuum because some things are harder, others are easy.

Computers have evolved pretty much "continually" since the personal computer era, but they took some time to be invented.

Quantum mechanics evolved "quickly" once we found out there was such a thing as quantum mechanics; up until then it was a lot of "banging heads" and freewheeling.

Not to mention delays because of other constraints: wars, political instabilities, etc


Not to mention modern computational power.


Nima Arkani-Hamed is a notorious loudmouth.

I went to his lecture from 1-3 and we got an encore from 3-5 and another from 5-6. It is no surprise his paper is verbose, clocking in at 154 pages.

Scattering Amplitudes and the Positive Grassmannian http://arxiv.org/abs/1212.5605

Alejandro Morales has some lecture notes from a course on the positive Grassmannian. The professor was Alexander Postnikov at MIT.

http://www.thales.math.uqam.ca/~ahmorales/18.318lecs/lecture...

Feynman famously said "I think I can safely say that nobody understands quantum mechanics." That was in 1965 but it's still true.

The scattering amplitudes in a certain case of N=4 Super Yang-Mills theory are more symmetric than people first expected. This has applications to "total positivity" and "integrable systems".

This will not help you build an app, but it's beautiful and represents an important shift in mathematics.


The paper from the arxiv mentioned in the article: http://arxiv.org/pdf/1212.5605v1.pdf


A note -- when linking to arXiv, please link to the abstract[1], not directly to the PDF. The abstract has other information about the paper, and one can see different versions of the paper, and of course one can click through to the actual paper. Going the other way requires manually editing the URL.

[1]http://arxiv.org/abs/1212.5605v1


(non-physicist, non-mathematician) The current excitement seems to have to do with a forthcoming paper from Arkani-Hamed and his student Jaroslav Trnka. The pdf at http://www.staff.science.uu.nl/~tonge105/igst13/Trnka.pdf has some details of that, I think.

Edit to add:

The pdf is slides from a talk by Trnka. Video of a later talk on the same topic by Arkani-Hamed is at http://susy2013.ictp.it/video/05_Friday/2013_08_30_Arkani-Ha... - he sounds very excited!


I was talking to a friend in the physics department at Stanford a few months ago about how one of the people whose work is cited in this paper (Greg Moore of Rutgers) had found analytic expressions purely rooted in geometry for Bethe Ansatz[1] results in (1+1) dimensional systems. Based on that, I told my friend I believed these "physical mathematics types" (as they call themselves) were on to something. He concurred that their methods would be very important in the future.

(This was in May, lol)

[1] BA is a nifty numerical method involving a lot of number crunching to find scattering matrices for these theories, and is very useful in low-dimensional electronic systems.


I think there's a lot of exciting research going on in the context of integrable systems (~Bethe Ansatz) and N=2 quantum field theories. However, some people are also working on integrability in N=4 theories, which might be relevant to the ideas in this article. It's a thought some people (including Arkani-Hamed) have expressed, but it's too early to tell.


You don't know any of Neitzke's grad students by any chance, do you? A certain fellow with a fondness for army boots and camo fatigues used to live down the hall from me at Caltech....


A certain fellow working down the hall from me is quirky enough to match that description :-)


In other words, you made a comment four months ago that simple, geometric models are likely to be important?


The point was that these techniques, which people are decrying as applying only to super-idealized cases in N=4 SUSY, produced computationally and experimentally testable results in real lower-dimensional systems. However, no one is going to be putting out excited press releases about thermodynamic Bethe Ansaetze.


Thanks for clarifying. That it's not limited idealized cases is a very good point, and my apologies for my earlier snarky comment.


And here's a (long!) recent review about developments in this area over the past few years -- http://arxiv.org/abs/1308.1697


This is really cool. I didn't understand most of it, even though I'm taking QFT right now. Shows you how deep the subject goes.

On another note, if I see the word "illusion" or "illusory" one more time in these science blogs, I'm going to have a fit. This must be the meaningless new buzzword of the day. 1) Spacetime is not an "illusion" 2) Gravity is not an "illusion" 3) <insert sciency word> is not an "illusion". That doesn't even make sense.


I think what they mean is that the force we call gravity is an illusion that we experience because we perceive spacetime as flat. In a curved spacetime, you don't need to posit a force to explain motion due to a nearby mass - the mass curves spacetime, and objects move along geodesics in the curved spacetime. So while it would be incorrect to say "gravity is an illusion" I think it makes sense to say "the force of gravity is an illusion".


He's probably quite aware of what the intended meaning is. The point is that since we're in effect always talking about 'verisimilitudes', calling this or that an "illusion" is silly, because it implies that we didn't realize our models are just models ("You mean to tell me the Standard Model isn't a 1-to-1 mapping from reality to our brains??? NOOOOO * worldview shattered * ").

For example, the spacetime explanation for gravity is also a model, and is also likely to be an "illusion." Making note of that is theoretical physics 101... or at least it should be.


> and objects move along geodesics

You say that like it's so obvious. But why do they move in the first place?


In short, because things move from a state of higher energy to lower energy. Not giving in to gravity would take energy in the opposite direction to counteract it (or the geometric slope, in this analogy). Newtonian conservation of energy is a pretty good model analogy (although it isn't the same thing), and the geometric one is even better. But that's all they are, however accurate - analogies, constructed on the basis of available information and mathematical models.

Outside of that there really isn't a 'reason'. This isn't how physics works, although it's certainly in our nature to create and expect meaning and cause everywhere. The brain is bound to think in hierarchies, because that's how it's built to operate, but the universe doesn't have that limitation.


> move from a state of higher energy to lower energy

And what's the difference between this and saying they experience a force?


You hit the nail on the head. The real issue is that the force is not considered fundamental, because it is explained by some other model such as spacetime curvature.



I thought those seemed part and parcel. :) "I'm in the gravity well of a star" leads to "this alters the geodesic of spacetime that I tumble down", which made me wonder "wait, why does it tumble down geodesics to begin with".

The answer seems to be "physics doesn't have to abide by your intuition or need for narratives, human"


That answers why they move along geodesics. It doesn't answer why they move.


The article states about unitarity and locality that "both are suspect". I'm happy for locality to be suspect, but in what sense is unitarity suspect? Surely only in the sense that it is apparently a consequence of more fundamental things. And when you think about it, that's philosophically amazing, that probabilities summing to one may have an underlying (timeless?) explanation. Or maybe I should infer that there has been some exaggeration!?


It's that "only" in "... only in the sense that ..." upon which the whole thing hangs. Presupposing unitarity (and locality) and constraining models to conform to that presupposition may be that complication, that blinder, that prevents discovery of the simpler underlying rule from which either or both emerge. And either or both may be necessary consequences of the much simpler, underlying rule (if there is one to be found), at least on the observable side of any predicted phenomena, but that can't be determined unless one is willing to first throw away the constraints that keep you from finding it. (In effect, it's analogous to simply adding two natural numbers directly rather than using Peano arithmetic and checking at each iteration that an increment occurs. Either way you will arrive at the same answer, but the unconstrained rules for addition allows for negative numbers and fractions.)


As a layman, I find the concept reminiscent of Garrett Lisi's E8 idea. Are they comparable approaches?


I did not read the paper, but judging from the article, probably not. The E8 idea is a stab at a quantum field theory of gravity. On the other hand, the amplituhedron idea seems to be 'just' a different way to do the usual QFT calculations more easily. The only connection there may be is that apparently the amplituhedron idea can be turned on its head and then serve as a new way of looking at QFT. And this may or may not lead in an interesting direction for quantum gravity theories.


Rejecting unitarity is a bold thing to do. What can it mean for the probabilities of alternatives to sum to more than 1?


No, they do not reject unitarity. The theory is still unitary. However, unitarity is not built into the theory... the theory talks about some other ideas and structures, and it turns out that all the answers you compute respect unitarity -- like they should!


I understand it's kind of like "let's start without unitarity and locality baked in explicitly in the model and see where it leads and if they come out implicitly in the results".


A video lecture by Nima Arkani-Hamed here on the topic:

http://susy2013.ictp.it/video/05_Friday/2013_08_30_Arkani-Ha...


"How should we make it attractive for them [young people] to spend 5,6,7 years in our field, be satisfied, learn about excitement, but finally be qualified to find other possibilities?" -- H. Schopper

(btw: entire careers have been based on SUSY, without a speck of experimental evidence; one might say it is not even wrong)


I'm not remotely qualified to comment on the content, but I was very lucky to have Nima as a professor in college. He was an incredibly exciting lecturer--he practically glowed with intellectual energy.


Interestingly, he mentions in the beginning something like "the amplituhedron is so simple I could explain it to high-school students". I'd sure love to hear that explanation, as my knowledge of physics/maths is not much better than that of high-school students.

edit: Also the hand-written presentation slides are lovely.


It reminds me of Heim theory. Supposedly, in Heim theory you could derive particle masses from quantum numbers via a relationship in a six-dimensional geometric structure.

http://en.wikipedia.org/wiki/Heim_theory

Wiki doesn't have a lot of info, but it's mostly been debunked from what I've read.


So this guy was right? The geometry-based particle interaction theory sounds similar to what this guy was saying a few years back on TED:

http://www.ted.com/talks/garrett_lisi_on_his_theory_of_every...


Completely different thing. Garrett Lisi threw us some weird-looking thing that when you turn it to the left and spin it 3 times and do a backflip looks like an electron... and if you turn it up and around and upside down it looks like a quark. However, if you're gonna propose a structure like this, it had better behave such that any way you look at it, it looks like something you might see in the universe. This thing predicted WAY too much shit to have physical relevance.

The thing in this post is a new way to calculate scattering amplitudes that has nothing to do with space, time, and isn't motivated by unitarity or locality.


My favourite line, and something I wasn't aware of (as a layman)

In 1986, it became apparent that Feynman’s apparatus was a Rube Goldberg machine.

I think Feynman would have loved to hear his doodles described in such a way (maybe he did, he had a few years left in him at that point!)


What are the consequences of this regarding P = NP? Can this be used as a proof that P = NP, since a 9-page formula is reduced to a simple function? Can we use geometry to solve problems in NP?



