Mysteries the Standard Model can’t explain (symmetrymagazine.org)
183 points by jc_811 on Nov 18, 2021 | 154 comments



I am so tired of seeing massless neutrinos being described as a "prediction" of the Standard Model, and finite neutrino masses as beyond-Standard Model physics or even a "mystery". This is especially disappointing coming from a supposedly serious magazine like Symmetry.

Neutrinos were originally hypothesized in order to solve a problem which did not require them to have mass, and for a long time after they were actually observed, their measured masses remained within error bars straddling zero. It therefore made perfect sense to model them as massless.

But to actually include neutrino masses in the Standard Model is trivial, and was done long ago.

The most straightforward way to do it is to give them quark-like mass terms. This requires introducing a right-handed partner for each known (left-handed) neutrino, which some people don't like because right-handed particles don't partake in weak interactions, and weak interactions are the only (known) neutrino interactions (apart from gravity), so you end up with undetectable particles.

The main alternative is to use Majorana mass terms, making neutrinos their own anti-particles, which some people don't like because it deviates from the pattern of all other fermions in the Standard Model.

A third way is to say "it's both", typically involving the seesaw mechanism, which some people don't like because it requires unfashionable GUT-style beyond-Standard Model physics.
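
Schematically (a sketch only, with coefficients, flavor structure and conventions suppressed; m_D, m_M and M are just placeholder mass parameters), the three options look like:

    \mathcal{L}_{\text{Dirac}} = -\, m_D \left( \bar{\nu}_L \nu_R + \bar{\nu}_R \nu_L \right)

    \mathcal{L}_{\text{Majorana}} = -\tfrac{1}{2}\, m_M \left( \overline{\nu_L^{\,c}}\, \nu_L + \text{h.c.} \right)

    \text{seesaw: } \mathcal{M} = \begin{pmatrix} 0 & m_D \\ m_D & M \end{pmatrix}, \qquad m_{\text{light}} \simeq m_D^2 / M \ \ (M \gg m_D)

The first needs the right-handed ν_R, the second needs lepton number violation, and the third gives you a tiny light mass almost for free, which is its usual selling point.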

Point is, there is neither a failed "prediction" nor a great "mystery" here. There is uncertainty about which kind of mass term we should use for neutrinos, because the experimentally observable differences between the alternatives are really, really tiny.


> This requires introducing a right-handed partner for each known (left-handed) neutrino, which some people don't like because right-handed particles don't partake in weak interactions, and weak interactions are the only (known) neutrino interactions (apart from gravity), so you end up with undetectable particles.

For anyone else wondering: yes, this does make said right-handed (aka sterile) neutrinos a candidate for dark matter, assuming some of them are much heavier than their left-handed counterparts to account for the 'cold' properties of dark matter that we observe.


> This requires introducing a right-handed partner for each known (left-handed) neutrino, which some people don't like because right-handed particles don't partake in weak interactions, and weak interactions are the only (known) neutrino interactions (apart from gravity), so you end up with undetectable particles.

I'd be curious to hear more about this. What mechanism forces you to add right-handed partners? Is it some conservation property? Are calculations too difficult without them?

Maybe it's also not clear if there really is a difference between saying "right-handed neutrinos exist but are undetectable" and "right-handed neutrinos are fake particles added for ease of modeling".

Edit: Wikipedia also says

> The neutral-current Z^0 interaction can cause any two fermions in the standard model to deflect: Either particles or anti-particles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs.

So could right-handed neutrinos be detected this way?


> What mechanism forces you to add right-handed partners?

A Dirac mass term (the kind used for all other Standard Model fermions) involves both left-handed and right-handed particles:

https://en.wikipedia.org/wiki/Sterile_neutrino#Mass

So you can't have one without both. (BTW, the linked section says "there are no Dirac mass terms in the Standard Model's Lagrangian", but it should really say that the Dirac mass terms in the Standard Model Lagrangian arise as a consequence of the Higgs mechanism.)
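
Sketch of that last point (one generation, coefficients suppressed; y is the neutrino Yukawa coupling, H the Higgs doublet, v its vacuum expectation value):

    \mathcal{L}_Y = -\, y\, \bar{L}\, \tilde{H}\, \nu_R + \text{h.c.}
    \ \xrightarrow{\ \langle H \rangle\ }\
    -\, \frac{y\, v}{\sqrt{2}} \left( \bar{\nu}_L \nu_R + \bar{\nu}_R \nu_L \right)

Same structure as for the quarks and charged leptons: the mass is the Yukawa coupling times the Higgs vev.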

> Wikipedia also says

I can't find that quote, but I guess it's about experimentally observed particles. It would not apply to a right-handed neutrino:

https://en.wikipedia.org/wiki/Sterile_neutrino#Properties


I've seen another argument, but I lack the competence to assess its validity: if a (left-handed) neutrino has mass, it moves at less than the speed of light, which means you can pass it and look back at it. You'd then see it as having reversed spin. But that might be based on a classical physics analogy that doesn't hold.


That's correct. The more jargon-y way to say it would be that for massive particles, helicity isn't Lorentz invariant.

It's only an approximation that helicity equals chirality (which is what matters for weak force interactions), but this approximation is pretty good for particles moving close to the speed of light (which neutrinos tend to do, due to their low mass).
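
For reference, the two definitions (standard ones, nothing specific to neutrinos): helicity is spin projected onto momentum, while chirality is defined by the gamma-five projectors:

    h = \frac{\vec{S}\cdot\vec{p}}{|\vec{p}|}, \qquad P_{L,R} = \frac{1 \mp \gamma^5}{2}

The two coincide only in the massless (ultra-relativistic) limit.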


You're thinking about helicity, not chirality:

https://en.wikipedia.org/wiki/Neutrino#Chirality


I think you're correct to say a lot of people simplify what the problem is with neutrino mass. In principle it seems like there is no problem: you just add a mass term for the neutrino just like for any other particle. Just because at first we didn't expect that term to be there doesn't mean it's a problem now, or that the original expectation was all that meaningful. And again, as you point out, there are a couple of potential ways to add that mass term in: either the "normal" way with a right-handed neutrino, or with some fancy see-saw Majorana term, or some combination thereof.

The issue, though, is that right now the standard model is at least ambiguous in terms of the Majorana mass term. If it ends up that the neutrino gets its mass from only the "normal" mass term, then why doesn't it have a Majorana mass term? There's no current symmetry that says there can't be a Majorana mass. If the neutrino's Majorana mass is zero, then you'd probably have to introduce a symmetry into the standard model that says Majorana particles can't exist.

But if the neutrino does end up having a non-zero Majorana mass term, then that means the neutrino is a Majorana particle, and can undergo lepton-number-violating processes (e.g. neutrinoless double beta decay). Again, that's new physics.

So no matter how you give the neutrino mass, you're gonna have to modify the standard model in some "significant" way to accommodate it. Either by specifically saying Majorana particles can't exist, or by allowing for lepton-number-violating processes.

Now you could say: well, then it might be the case that Majorana particles don't exist because that would require lepton-number-violating processes, so I don't need to introduce a new symmetry, I can just take advantage of one that's already lying around. That might be a valid claim to make... I'm not sure. I think the issue with that comes down to the difference between lepton number being a global vs. an accidental symmetry in the standard model.


The mystery part is: what is the correct way to include the mass term.


No mystery at all, you add it to the massless part of the Standard Model Lagrangian. :)


Honest question from a physics novice: Would it be wrong to say the same is roughly true of all of the first 4?

I have heard it is plausible that Dark Matter is merely another particle that fits in the standard model. I've heard it plausible that Dark Energy is e.g. a WIMP or other new particle in the standard model. And, on more-matter-than-antimatter, I imagine _some_ explanations of baryon asymmetry could come from outside the standard model but others (boundary condition, mirror anti-universe) would be fully standard-model-compatible, right?

That would leave only #5 as a mystery: Why is gravity as we know it in general relativity so different (weaker) than the force that a standard-model graviton would predict?


> I have heard it is plausible that Dark Matter is merely another particle that fits in the standard model. I've heard it plausible that Dark Energy is e.g. a WIMP or other new particle in the standard model.

Whoever told you that was getting dark matter and dark energy mixed up.

Dark matter could be "another particle", possibly a sterile neutrino:

https://en.wikipedia.org/wiki/Sterile_neutrino#Sterile_neutr...

Dark energy however is definitely something else. WIMPs in particular are hypothetical dark matter particles:

https://en.wikipedia.org/wiki/Weakly_interacting_massive_par...


If I understand this correctly, the "right hand" means antiparticles? Do I have that right?

And if so, then I have two questions.

1. Antiparticles don't participate in the weak force? So if I had antimatter, and I made a nucleus of some kind, then it couldn't beta decay? If so, does this say anything about the matter/antimatter asymmetry in the universe?

2. At various times, I have seen references to anti-neutrinos. They seemed to say that what made them "anti" was simply that the spin was in the opposite direction relative to the direction of motion compared to a "regular" neutrino. Were they wrong? And if they were right, what about the direction of spin should make them unable to participate in the weak force?


> If I understand this correctly, the "right hand" means antiparticles? Do I have that right?

No, it's about chirality:

https://en.wikipedia.org/wiki/Chirality_(physics)


OK, then: Why would chirality determine whether a particle can interact with the weak force? (ELI5, probably, or at least ELI20, if you can.)


Ultimately it's an experimental fact - a very surprising one when discovered:

https://en.wikipedia.org/wiki/Wu_experiment

In theoretical terms, the equation used to describe all massive fermions known at the time (like the electron) is built out of four-component quantities called Dirac spinors:

https://en.wikipedia.org/wiki/Dirac_spinor

A Dirac spinor can be viewed as composed of two simpler, two-component quantities called Weyl spinors:

https://en.wikipedia.org/wiki/Weyl_equation#Weyl_spinors

You can try to describe a fermion using only one Weyl spinor, but then it turns out that you can't build mass terms (unless you're willing to violate special relativity).

The massless equation you can write down with a Weyl spinor has two plane wave solutions with opposite helicity, left and right. A Dirac spinor combines a left-handed and a right-handed Weyl spinor; the mass term of the Dirac equation "mixes" them, in the sense that if you start out with a Dirac spinor having its left-handed component set to 0, the mass term will cause it to grow at a rate proportional to mass. If you set the mass to exactly zero, you're left with two uncoupled Weyl equations, one for the left-handed component, one for the right-handed one.
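
In two-component language (a standard textbook rewrite, up to sign and metric conventions), that statement is just:

    i\,\bar{\sigma}^{\mu}\partial_{\mu}\, \psi_L = m\, \psi_R, \qquad i\,\sigma^{\mu}\partial_{\mu}\, \psi_R = m\, \psi_L

with \sigma^\mu = (1, \vec{\sigma}) and \bar{\sigma}^\mu = (1, -\vec{\sigma}); set m = 0 and the two halves decouple completely.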

Knowing this, and faced with the experimental fact that weak interactions make a distinction between left and right, you write down separate weak interaction terms for the left- and right-handed components of your Dirac spinors. Eventually it turns out that the simplest choice, having only the left-handed components participate in those interactions, is the best fit to experiment:

https://en.wikipedia.org/wiki/Weak_isospin#Relation_with_chi...

¯\_(ツ)_/¯


Thank you for a clear, detailed response. (It would be more clear if I understood the math behind spinors...)

So, returning to a previous question: If I understand this correctly, it means that if we have a kind of nucleus that undergoes beta decay, and we built the exact same nucleus except out of antimatter, it would not undergo beta decay. Is that correct?

Has anyone actually done that experiment? Or is it beyond our ability to construct things out of antimatter?


> if we have a kind of nucleus that undergoes beta decay, and we built the exact same nucleus except out of antimatter, it would not undergo beta decay. Is that correct?

No. Until 1964 you would have been told that applying CP, i.e. the combination of C transformation (charge conjugation, i.e. change the signs of all charges) and P transformation (parity, i.e. swap left and right) would result in exactly the same decay rate. You would get anti-neutrinos (which are right-handed) instead of neutrinos, but that would be the only difference.

Then it turned out that CP symmetry is also violated by weak interactions, though far from as neatly as just P:

https://en.wikipedia.org/wiki/CP_violation


Then I don't understand something. (Or several somethings...)

I thought you said that going to antimatter meant that you reversed chirality, and that right-handed chirality meant that a particle did not take part in the weak interaction. I interpreted that to mean that it didn't take part at all, not with any probability. But the CP violation article seems to be saying that CP violation is a matter of differences in probabilities.

By the way, that article talks about anti-neutrinos, which from your initial post, I thought you were saying were impossible to detect if they existed?

I'm confused... can you make any of this clearer? I'm mis-understanding something - can you tell what?


Given a fermion species, let's say an electron, you can have:

- left-handed electron

- right-handed electron

- left-handed anti-electron

- right-handed anti-electron

Going from particle to anti-particle, you swap charge and chirality (that's the CP transformation). So for instance a left-handed electron with negative electric charge becomes a right-handed anti-electron with positive electric charge.

For neutrinos: given that left-handed neutrinos have weak interactions, right-handed anti-neutrinos do too.
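
Written out (this is just the statement above in symbols, with chirality as a subscript):

    e^-_L \ \xrightarrow{\,CP\,}\ e^+_R, \qquad \nu_L \ \xrightarrow{\,CP\,}\ \bar{\nu}_R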

It's no harder to detect anti-neutrinos than neutrinos. It's right-handed neutrinos and left-handed anti-neutrinos you'd have a problem with.

The CP violation article talks about probabilities because CP violation is not nearly as neat as P violation alone. Instead of the clear-cut "left-handed neutrinos (and therefore right-handed anti-neutrinos) only" of P, you get small differences.


Thanks! I think that was what I needed.

One last question: I was under the impression that the difference between a neutrino and an antineutrino was only chirality. What other property is there to distinguish a neutrino from an anti-neutrino?


Weak isospin. Look at the table in

https://en.wikipedia.org/wiki/Weak_isospin#Relation_with_chi...

(including the text in the last two rows).


Thank you for your knowledge, patience, and persistence in these answers.


That was an amazing discussion. Thank you both.


This is more of an empirical thing. We observe effects such as parity violation that can be best explained by an interaction that only works on left-handed particles.


So the theoretical thing would be to account for the observed effects.


This is wrong.

We know that either:

a) there’s mysterious undetectable particles, or

b) that something beyond the standard model is happening, as in your second and third points.

There’s a failed “prediction” that all fermions have similar mass terms, and that failure suggests either something strange (undetectable particles), something strange (fermions without consistent mass mechanisms) or something strange (novel physics).

I think that qualifies as a mystery.


"either something strange (undetectable particles)" There is nothing strange here. Undetectable in this context means it would be only detectable by gravitation detectors - all masses/energies have gravitation changes.


One other mystery not mentioned is the problem of fine tuning. The standard model requires certain parameters (like alpha, the fine structure constant) to have their current values to within many decimal places for the universe as we know it to exist. There are two philosophical schools of thought about that: (a) we're in the universe we're in, so by definition it must exist and there's a selection bias there; or (b) there is an underlying detailed structure that gives the values of supposedly 'fundamental' quantities their shape as an emergent property of something more beautiful, and thus they're not "free" at all. This is one of the things that SUSY was supposed to solve, but the LHC has so far found no experimental evidence that SUSY exists. A good introduction about this (in the context of the Higgs mass, where the need for fine tuning is really apparent) is here: https://www.physicsmatt.com/blog/2016/11/17/paper-explainer-...


“This is rather as if you imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.”

--Douglas Adams


The puddle being in a hole implies a world outside of the hole. If the puddle has no means to perceive the outside world except a careful inspection of its own bounds, isn't that an interesting mystery?


See also Professor Pangloss.


To me the fine tuning problem is a bit like saying "circles wouldn't exist without π being exactly the value it is" and wondering why π has that value specifically and not any other value.

I don't think A and B are mutually exclusive. The values seem perfectly tuned for our universe because we exist in our universe, and they are probably emergent from a more fundamental parameter, possibly something like the particular Calabi-Yau manifold topology that happens to correspond to our universe (in the case of superstring theory). If we lived in a different CY topology that was capable of supporting intelligent life then we'd wonder why that one's constants are so precisely tuned for us.

But then, I'm an idiot who just watches PBS Space Time and nods his head, not a physicist.


That’s not that great an analogy for fine tuning. The numerical representation of pi is arbitrary, based on our number system; the relationship that derives it is fixed. If the relationship were to change, circles indeed would not exist.

Let’s move the analogy to triangles: a triangle’s angles sum to 180 degrees, but that only holds true on a flat surface; if you have curvature, the sum can be more or less than 180 degrees.

So this isn’t a fine tuning problem on its own: if you lived in a universe where triangles had less or more than 180 degrees, it would just be a universe with negative or positive curvature.

The issue with fine tuning is that the curvature of the universe is directly tied to the mass/energy density and any deviation from an extremely narrow range which our universe seems to sit in out of all possible values would not just produce a universe with triangles with fewer or more degrees than 180 but would either produce a universe that would collapse on itself within a blink of an eye or expand so fast that gravity would never be strong enough to cause even the most basic structures to form.
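
Schematically, that density-curvature link is the Friedmann equation; in the usual dimensionless form (standard cosmology, units with c = 1):

    \Omega - 1 = \frac{k}{a^2 H^2}

and since a^2 H^2 decreases during matter/radiation domination, any early deviation of Omega from 1 gets amplified over time; that is the flatness problem.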

So the issue really is that to produce a universe which will form galaxies and stars and survive long enough to produce life, you need a lot of parameters at certain very specific values; even the smallest of deviations would not produce a universe that could ever support life, let alone intelligent life. What’s even stranger, iirc, is that the values have to be exactly what they are now: you can’t simply 2x all of them to maintain the proportions and get the same result.

And this is really what people are looking to solve. Yes the androcentric is a solution: if the values were anything other than what they are now we wouldn’t be here to discuss why. But the issue is that out of all the other possible combinations you don’t seem to find another stable state, and that is the true mystery.


> androcentric

I suppose you mean _anthropocentric_ (or the anthropic principle).

The prefix andro- refers to men in the sense of male (adult) humans; anthropo- refers to human beings in general.


> yes the androcentric is a solution

Anthropocentric I think you mean. Andro- would refer to the male sex rather than humankind -- which is a claim I doubt many physicists would make.


Typing "Antrocentric" autocorrects to "androcentric". You are correct of course; good thing this isn’t Twitter or I would be forced to apologize by now.


> would either produce a universe that would collapse on itself within a blink of an eye or expand so fast that gravity would never be strong enough to cause even the most basic structures to form

It could do so in other chunks of space or time. If “time” and “expansion” have no final end, and these constants can fluctuate for some reason, eventually there would appear a universe like ours. There is no one who could count all the “failed” attempts. And maybe ours is also a failure in a sense, compared to a hypothetical much “better” universe.


Hmm, but isn't this very similar to taking the "multiverse" explanation and moving it into a "timeverse" ?


That is still fine tuning; in fact, one of the more common interpretations of it is to just have an infinite number of universes, one of which has to turn out to be alright…


> The issue with fine tuning is that the curvature of the universe is directly tied to the mass/energy density and any deviation from an extremely narrow range which our universe seems to sit in out of all possible values would not just produce a universe with triangles with fewer or more degrees than 180 but would either produce a universe that would collapse on itself within a blink of an eye or expand so fast that gravity would never be strong enough to cause even the most basic structures to form.

Well, the notion of "extremely narrow range" is arbitrary. There is no mathematical notion of "large numbers" vs "small numbers". If the constants you speak about were larger or smaller by a fraction of 10^-g64 (g64 being Graham's number), nothing would really change. If they were 2 times smaller, the universe would be extremely different.

But 2 and 10^-g64 are just numbers. Neither of them is inherently large or small. We just happen to understand 1, 2, 3, ... much better than g64.

You could say that there is a surprising amount of leg-room in the values of some constants of nature, while others can only vary in a reasonable range. While the fine-structure constant can vary within a reasonable +-1/1000 range without massive changes, other values can vary by massive numbers such as +-2!


I'm not sure I understand your point. There is such a thing as "large" and "small" in physics, but they're relative to the scale you're considering. For example, if you're a million miles from the Earth, then from a gravitational perspective you may as well be infinitely far away (a ~ 10^-5 g). But if you were the same distance from the sun, gravity would be extremely relevant for you (a ~ 5 g).
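
As a sanity check on those numbers, a rough back-of-the-envelope sketch (GM values are the standard gravitational parameters; "a million miles" is taken literally):

    # Rough check of the accelerations quoted above (all SI units).
    GM_EARTH = 3.986e14   # gravitational parameter of Earth, m^3/s^2
    GM_SUN   = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
    g        = 9.81       # standard gravity, m/s^2
    r = 1.0e6 * 1609.34   # one million miles in metres

    print(GM_EARTH / r**2 / g)   # ~1.6e-05 g
    print(GM_SUN   / r**2 / g)   # ~5.2 g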

Moreover, I find it unfathomable that the scale 1/g64 would be relevant anywhere in physics – I certainly can't think of any examples!


There is such a thing as "large" and "small" as related to measurement and impact.

But a change in a base parameter of the laws of physics is different. If a change of 0.1% in the value of one parameter is detectable, then it is "a large change". If a change of 200% in the value of another parameter is not detectable, then it is "a small change".

But the fine tuning argument relies on defining an absolute idea of "large" and "small". In the fine tuning problem definition, 0.1% is "small" and 200% is "large", and it's the parameters themselves that behave strangely, since one parameter produces visible results for "small" changes while another doesn't. Get rid of the absolute ideas of "small" and "large" numbers, and the fine tuning problem goes away entirely.

> Moreover, I find it unfathomable that the scale 1/g64 would be relevant anywhere in physics – I certainly can't think of any examples!

My point about 1/g64 was that, for all physical quantities, there is some scale at which they can vary without measurable differences. I took such a fantastically small number exactly to make sure I'm not risking a case where the factor I gave as an example would in fact cause detectable differences.


The range doesn’t have anything to do with “numbers” directly because yes you can always say there is an infinity between any two numbers no matter how close they are to each other. The very limited range is actually captured in physical quantities.

Back to the original analogy: pi is always exactly pi not because we write it as 3.14…, but because of the relationship between the various innate properties of a circle.

So fine tuning isn’t "oh gosh, we got very oddly specific values here"; it’s that we have problems such as https://en.m.wikipedia.org/wiki/Hierarchy_problem that are only solvable through fine tuning.

So back to the circle: it’s really a case of us not understanding the relationship between the circumference and the radius and just brute-forcing the value of pi, which is really what fine tuning is.


Sure, but none of this must have a fundamental explanation. It could just be the way the universe is, nothing in maths requires this 'problem' to have a solution.

Conversely, things like the measurement problem, quantum gravity, dark matter, dark energy, the mass of neutrinos and others must have some answer - they are facts we can see that just don't mesh with current theories.

So why study a problem that there is no reason to believe will have an ultimate answer, when you can study problems that must have such an answer?


Ok let me try to maybe make it simpler.

Back to the circle… take a string and a pin and trace a circle with it; then take another piece of string and lay it along the traced circle.

Cut the traced string in half: each half would now have the length pi (granted the original string you used to draw the circle has a length of one). That relationship is fundamental, and the answer we are trying to find is exactly such a fundamental relationship, one that would explain why the constants have the values they have.

As in, there needs to be a certain mechanism that defines those values in relationship to each other in a manner that does not require "fine tuning", because fine tuning would require either an infinite number of other universes which can have different values (regardless of the fate of those universes), or a cycle of death and rebirth in which these values somehow get randomized (which opens a whole other question: how does that happen?).


The circle is a nice example where there is a simple analytical relationship between the radius and the circumference (1:2pi). Other basic shapes, like the ellipse, have no such simple relationship, even in pure maths. And when performing the experiment you mentioned, the strings are very unlikely to be in 1:2pi ratio, they will be within some range of that ratio.

The universe has some set of independent initial parameters. Whether they are the ones we know or there are some other, more fundamental ones, ultimately some parameters are there.

Even with a single fundamental parameter, that parameter will have some range within which variations in value don't affect the universe at all (like the 1/g64 example), and some range where changes in them would lead to a very different universe, potentially one that can't sustain life.

This idea of "probability" of the observed values of these parameters is then meaningless. What is the probability distribution of the "universes"? To say that the "fine tuned" universe is "unlikely" you have to posit a probability distribution, but there is no empirical data to then verify if this distribution is correct, and, crucially, there can never be such empirical data.

You might as well ask yourself why the universe is four-dimensional (or 11-dimensional) and not 2-dimensional.


> You might as well ask yourself why the universe is four-dimensional (or 11-dimensional) and not 2-dimensional.

We do ask ourselves that; the number of degrees of freedom is tied to the laws of physics, so it’s not really a question of "why X" but of what physical laws give us X.

And if we continue with the circle and parabola analogy, then what we are trying to figure out is whether our universe is a circle or a parabola… that’s a pretty worthy question to ask in my book.

We are trying to answer whether the values are actually random, and thus have to be fine tuned to produce the universe we live in today, or whether they are not random, and if so, what the relationship governing them is that makes them take the values they have.


> other possible combinations you don’t seem to find another stable state and that is the true mystery.

Is that really the case? I thought that in theory there could be a massive number of other stable universes with different values. The number of unstable universes is of course much larger.


That would be SUSY, which solves issues like the hierarchy problem and other fine-tuning constraints.


> And this is really what people are looking to solve

Why would you want to "solve" a basic fact of life?

Smells of ideology.


"explain" might be a better word than "solve" there. People are looking to solve the question "why are the fundamental constants of physics what they are?" which isn't a problem in the sense of something wrong but more something that needs an explanation for a full understanding to be possible.


According to Leonard Susskind[1], fine tuning is a compelling argument by itself[2], and the strongest case is the cosmological constant.[3] In a nutshell it is a sort of repelling force, first proposed by Einstein to create a workable model for a static universe; he later regretted it as one of the biggest mistakes of his career. However, the idea is now back, with Nobel-prize-winning research showing that the expansion of the universe is accelerating, which would require a positive cosmological constant. It is the standard candidate for explaining "dark energy," which makes up a large portion of the universe's energy content.

When expressed in one way, it is about 10^-122 in units of the inverse square Planck length. I'm not smart enough to completely understand it, but it is (according to physicists) an incredibly precise ingredient in the various properties of physics that make our Universe possible. Much larger or smaller and the model falls apart. If it is an accident, that is one hell of a lottery ticket.
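
To put a rough number on that (standard measured values, order of magnitude only):

    \Lambda \approx 1.1 \times 10^{-52}\ \mathrm{m}^{-2}, \qquad \ell_P^2 \approx 2.6 \times 10^{-70}\ \mathrm{m}^2, \qquad \Lambda\,\ell_P^2 \approx 3 \times 10^{-122}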

[1] https://en.wikipedia.org/wiki/Leonard_Susskind

[2] https://www.closertotruth.com/interviews/3081

[3] https://en.wikipedia.org/wiki/Cosmological_constant


> and wondering why π has that value specifically and not any other value.

Maybe the question should be "why circles" then. Early models of the solar system suffered under the assumption that everything should be modeled using circles when ellipses modeled everything better.


Isn't the whole thing made up to fit observation though? Why is that any more arbitrary than, say, electromagnetism existing?


It's an explanatory theory with a lot - a lot - of gaps.

It has been extended with some nice predictions like the Higgs. But basically it's a Franken-patchwork of math glued together quite awkwardly.

Because that is what it is. Literally. It was developed by thousands of grad students and their supervisors throwing math at the wall and keeping anything that matched observations. So there was a lot of random searching involved.

What's missing is a central guiding metaphor.

Relativity has one. In comparison, the Standard Model is a very epicycle-ish tool for calculating Lagrangians, with plenty of "Yes but" and "Except when".


I'm not sure I quite explained myself, although I do appreciate the reply.

See, I also don't see how one central guiding metaphor would be any less arbitrary than none, or ten metaphors.

What I am asking is why would α = 0.0072973525693 be more of a problem than electromagnetism or the speed of light being constant or e=mc^2? If the fine structure constant was exactly 1 would that make it better? Why is 1 less arbitrary or more good?

I understand the beauty in simplicity, fewer concepts or ideas, smaller formulas explaining more things, 1 being "nicer" than other numbers, etc. So I understand that aspect of goodness, so it is the connection to underlying observable reality I'm asking about.

Because isn't it also arbitrary to think that if we could explain things in more beautiful ways then it would be a deeper understanding or closer to the truth?


It sounds like you're asking why parsimony is desirable in science. From https://aeon.co/essays/are-scientific-theories-really-better...:

> The upshot is that there are three parsimony paradigms that explain how the simplicity of a theory can be relevant to saying what the world is like:

> Paradigm 1: sometimes simpler theories have higher probabilities.

> Paradigm 2: sometimes simpler theories are better supported by the observations.

> Paradigm 3: sometimes the simplicity of a model is relevant to estimating its predictive accuracy.

> These three paradigms have something important in common. Whether a given problem fits into any of them depends on empirical assumptions about the problem. Those assumptions might be true of some problems, but false of others. Although parsimony is demonstrably relevant to forming judgments about what the world is like, there is in the end no unconditional and presuppositionless justification for Ockham’s Razor.


I really hope it doesn't sound like that is what I'm asking. I went back and re-read what I wrote and it doesn't seem like that at all to me, but maybe I'm not a good writer.


Perhaps I am the one who is not a good reader! Please also note that I am not a physicist or a mathematician.

> What I am asking is why would α = 0.0072973525693 be more of a problem than electromagnetism or the speed of light being constant or e=mc^2? If the fine structure constant was exactly 1 would that make it better? Why is 1 less arbitrary or more good?

α = 1 is less arbitrary / more good than α = 0.0072973525693 because the former requires fewer bits of information than the latter. Fewer bits => more parsimony.

Similarly:

> See, I also don't see how one central guiding metaphor would be any less arbitrary than none, or ten metaphors.

1 < 10, therefore if parsimony is desirable, 1 is better than 10.


> if parsimony is desirable

That is what I'm asking. However you want to define this or label it.

The article says a whole lot as far as I can tell without really saying anything. Or at least not what I'm asking. The "paradigms" all start with "sometimes", and they don't seem to be backed by anything scientific beyond handwaving.

I agree that if you had two theories that are equally expressive and accurate, all else being equal the simpler one would be preferable. That's not really what I'm asking about though.

We do not have two equal theories, one that requires the fine structure constant and one that does not. We have a theory with a fine structure constant and lots of other things. The question is: why is that "problematic"? We don't look at e=mc^2 and think that the ^2 is problematic and suggests there should be a simpler, better theory without it. Why does alpha get singled out?


The article has some more instances of pysicists naming things:

"The partner of the Higgs, the higgsino."

"The partner of the top quark, the stop squark."


The stop squark is one of the sfermions (the superpartner particles of their associated fermions). As such they are all sparticles. Some of the other sfermions would be the sup squark, the scharm squark, the sstrange squark, the selectron or the stau. [1]

In my opinion physicists are great at naming things :D

1: https://en.wikipedia.org/wiki/Sfermion


No, I'm sparticles.


It kind of seems hipster.

"I'll name the particles something that I can later disavow as a joke if people don't like it, so I'm not risking myself."


I mean, you only get to use "Particle McParticleFace" once.


Pretty sure it's its own anti-particle


What alternative would you propose? Greek naming? Latin?


Oh yes, I loved it since the Big Bang and the Very Large Telescope!


Those supersymmetric particles have the disadvantage of not having any evidence they exist. I am sad for all those physics grad students who went into supersymmetry and string theory.



> (a) we're in the universe we're in, so by definition it must exist and there's a selection bias there;

This has basically 3 possible explanations - insane coincidence, multiverse (with many versions of constants; certain forms of mathematicism also provide such "multiverse") or "reason" (simulation admin/God).


somewhat related, I think, is the enormous disparity between the strength of the strong/weak/electromagnetic forces and gravity (30 or 40 orders of magnitude); possible indication that there's something missing
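
For scale (two protons, standard constants, order of magnitude only):

    \frac{F_{\text{em}}}{F_{\text{grav}}} = \frac{e^2}{4\pi\varepsilon_0\, G\, m_p^2} \approx 1.2 \times 10^{36}

and closer to 10^39 for an electron-proton pair.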


Why is this a problem? If the other three forces were perfectly equal, or in regular intervals, maybe this would have meant something, but the other three forces' strength varies by x, y, z between them. Other than human intuition, there is nothing inherently different between x = 2 and x = 10^30-40.


I don't know that it's a problem, and I'm no expert; my understanding is that differences like that can sometimes indicate that there's something else at play


> there is nothing inherently different between x = 2 and x = 10^30-40

Fermat begs to differ. :)


Thanks for this detailed explanation. Did not know about the fine-tuning problem.


It's only framed as a problem when it is assumed that the values could be other than what they are. But we have no reason to suspect that they could be different. We have no idea how unlikely the current selection is given our observation is a single observable. For all we know, it's certain.

'What really interests me is whether God had any choice in creation' - Albert Einstein


> It's only framed as a problem when it is assumed that the values could be other than what they are.

Well then your problem would be "What fixed those values to what they are?", which I don't think is the lesser of a problem.


That's not really a problem in the sense that it doesn't align with expectations. That's a genuine unknown.


You may find this an interesting read:

https://www.bretthall.org/fine-structure.html


Well, thanks again!


Also this free book on the Anthropic Principle is great: https://www.anthropic-principle.com/q=book/table_of_contents...


The problem is that from an experimental / observational technologies perspective we have been for many decades now in some sort of "evidence desert" that pushes against fundamental technology boundaries and that is not conducive to solving big "mysteries".

Unifications, re-interpretations, new conceptualizations (new forces etc) are the mental tools through which we solve previous "mysteries" (and create new ones). Right now there are alive more physicists than ever and even a tiny piece of important news could lead to a revolution - in like a couple of years. But what you really want is a firehose of new data points, "a new window". This has not happened and it may not happen for generations (for the attentive reader: gravitational waves are at the very, very edge of the detectable).

As Feynman might say, the Universe doesn't owe us a continuous stream of gee-wow moments.

But if I had to bet where the breakthrough might come from I'd say it would probably be cosmology rather than elementary particles...


I've recently heard a rather interesting and optimistic take on this. Since we have had so many brilliant minds looking in so many places for new physics and still have not seen evidence of it, that suggests whenever we do find new physics, it will have to be so bafflingly strange that all these brilliant people could never imagine it. It may very well be a bigger paradigm shift than the jump from classical to modern physics.


Yep, that makes sense. It doesn't give us a timescale for when such a "jump" might happen, but it suggests that it could be "big" in the context of our discoveries so far.

My best guess at a timescale (following up on the cosmology theme) has to do with our rate of utilizing the inner solar system as a clean and quiet laboratory for ultra-sensitive observations and experiments, whether LISA, extremely sensitive telescopes, or any other probes.

So be patient for a few more decades :-)


It seems crazy to say that we have not seen evidence for new physics when the size estimates of dark matter and dark energy account for about 95% of known energy in the observable universe. How is that not evidence for new physics?

If our physical theories cover only about ~5% of what we observe… that seems like a bit of an issue, no matter how accurately they model that 5%.


Neither dark matter nor dark energy are new physics. We don't know exactly what particle or combination of particles is responsible for dark matter, but there's not yet evidence that dark matter actually is composed of something outside the standard model. Dark energy is, despite the name, well explained by general relativity. When people talk about new physics, they're referring to things that change our understanding of how the universe works on a fundamental level. By comparison to biology, these are like newly discovered species - of course they're interesting but our understanding of nature is unchallenged.


IMO, the breakthrough will happen once they finish building that super collider that will find glimpses of the tiny 4th spatial dimension (the Kaluza-Klein hypothesis). With that will come a whole new standard model's worth of 4d particles, and physicists will be busy for another century.


2, 3 and 4 are not Standard Model problems, but cosmological problems. There are cosmological theories that explain them by using only General Relativity without any changes to particle physics. For example, you can check out Nick Gorkavyi's cosmological papers:

https://pos.sissa.it/335/039/

https://academic.oup.com/mnras/article/476/1/1384/4848298

https://academic.oup.com/mnras/article/461/3/2929/2608669

https://arxiv.org/abs/2110.10218

https://www.sao.ru/Doc-k8/Science/Public/Bulletin/Vol76/N3/A... (this one is available only in Russian for now)


I would add, why do certain particles decay into other particles? For example, the tau particle contains nothing else, as far as we can tell, yet it decays into certain other particles (and not the same ones every time).

More generally, the standard model records a lot of particles and things that happen, but not why those instead of others. I suspect there's a simple model underneath, but I have no idea what it is.


As a physicist I'd say these things are well understood.

- Why do certain particles decay into other particles?

Quantum mechanics is totalitarian: whatever is not forbidden is mandatory. Forbidden: excluded by some symmetry principle (violation of a conserved quantity, like energy, angular momentum, ...)

- The tau decays into certain other particles (and not the same ones every time).

The tau carries electric charge, fermion number, and angular momentum. The decay products' total quantum numbers match those of the tau. But the quantum-mechanical totalitarian principle says that every possible combination that satisfies that constraint happens with some amplitude.
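
For example, a few of the dominant channels (all of which conserve charge, lepton numbers and angular momentum):

    \tau^- \to e^-\,\bar{\nu}_e\,\nu_\tau, \qquad \tau^- \to \mu^-\,\bar{\nu}_\mu\,\nu_\tau, \qquad \tau^- \to \pi^-\,\nu_\tau, \qquad \tau^- \to \pi^-\,\pi^0\,\nu_\tau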

- The standard model records a lot of particles and things that happen, but not why those instead of others. I suspect there's a simple model underneath, but I have no idea what it is.

If by 'those' and 'others' you mean all the varied observed phenomena, then yes, there is a simple model underneath and it IS the standard model. If by 'those' and 'others' you mean 'why is the SM the way it is', that's (likely) an out-of-bounds question for the SM in the first place. But a modern perspective on the SM is to think of it as a low-energy effective field theory anyway.


Coincidentally, Sabine Hossenfelder (a theoretical physicist) had just had a piece on this topic:

http://backreaction.blogspot.com/2021/11/why-can-elementary-...


I’d add to this, that we know very little about the spatial distribution of nuclear matter at and below the scale of a nucleus. The standard model excels at understanding the salient characteristics of asymptotic states before and after an interaction, things like spin, lepton number, etc. But we still can’t tell you how gluons are distributed inside the proton.


We're not all the way there yet, but this is a hot and rapidly-advancing topic. See, eg. https://arxiv.org/abs/2111.06948 and sources therein.


Nor can we fully explain how the proton's spin 1/2 arises from its constituents, because the gluon soup and particle-antiparticle pairs make this complicated.


I'd speculate it's platonic solids, highly complex and dynamic, but resembling those solids from outside.


The neutrino mass thing is much less a surprise than the rest. Neutrino mass always was quite possible as an optional add-on. Basically, the situation for the quarks, where all six have mass, can be copied to apply to the leptons as well.


They forgot at least one mystery: what determines the fermion mass spectrum and the specific mass values? Why do these particle masses take such seemingly random values?

Related: Why are these masses so much smaller than the Planck mass? For comparison, the electric charge is sqrt(alpha_em) or ~1/11th the value of the Planck charge.
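
Spelled out (with alpha_em ≈ 1/137.036):

    \frac{e}{q_P} = \sqrt{\alpha_{\text{em}}} \approx 0.0854 \approx \frac{1}{11.7}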


Is there any evidence of "dark matter" or "dark energy" actually existing?

I find it more likely that we just don't understand the real mechanisms behind galaxy rotation and universe expansion and so we just made up unfalsifiable stopgaps to make the numbers work out.


As mentioned elsewhere, I'm an idiot not a physicist, but as I understand it dark matter is favored over modified gravity theories because all such theories are considered to be lacking in parsimony. Essentially, instead of showing how the behavior we observe arises from a fundamental simplicity, they are adding and tuning parameters to fit. With enough parameters one can fit any arbitrary shape, but this doesn't give the model any predictive power.

I also get the impression that while dark matter is popularly accepted as "actually existing", that is much less the case for dark energy.


There is vast amounts of evidence for dark matter existing.

Dark energy is an unrelated concept with a similar name, and there is not really evidence for it; it is speculated because the equations of general relativity imply the need for an additional term to account for the accelerating expansion of the universe, but it's not at all clear what this term corresponds to in terms of matter.


No there isn't. There's a bunch of evidence that the current model used for cosmological dynamics (Friedmann's equations) doesn't work unless there's a lot of undetected stuff called dark matter.

Until someone rules out alternatives like the full nonlinear Einstein equations behaving differently from the Friedmann equations, or the presumption of homogeneity/isotropy being wrong (there's more stuff here that hasn't quite been excluded by observation), the best we can say is that "we need some extra matter called dark matter for the standard cosmological model to work".


I always find it curious that both dark matter and dark energy aren't available locally to study.


It likely is, we just can't detect it on such small scales. It would be a form of matter that interacts pretty much only via gravity, and since gravity is already 40 orders of magnitude weaker than the electromagnetic force we usually use to detect things, it's understandable that it is kinda hard to see.


They might be, but only in very small quantities. The amount of dark matter that we would need to "fix" the galaxy rotation problem... would it affect the orbits of things in the solar system at all? (OK, yes, it would, but observably?) I haven't done the math, but I strongly suspect that the answer is "no".


Dark matter is especially dubious to me. Is it possible it's literally matter which is dark? i.e. there are more planets and asteroids and more mass than we can see because it is not illuminated? How can we be confident in our prediction of the mass of distant galaxies?

Maybe that is what is meant when a physicist says dark matter, and it's the media that mysticizes it.


So dark matter doesn't just mean "not illuminated"; it means it doesn't interact with the EM field at all. However, it does interact gravitationally, and this has a measurable impact on matter we can see with our telescopes, the cosmic microwave background, etc.

Cosmology today is, in general, very sophisticated. I mean no offense, but armchair contrarian takes here are very much just coming from a place of ignorance. Any criticism we can imagine as lay people has already been plumbed to a depth and detail we will never be capable of in our lives. One can point out that there's always some degree of uncertainty in absolute terms, but I think the appropriate understanding is Bayesian, and I'd sooner bet on getting hit in the head by an asteroid walking through my neighborhood than on cosmologists having this all fundamentally wrong.

The Wikipedia page does a reasonably good job of going through the multiple independent lines of evidence we have for dark matter. Sean Carroll has some great posts/videos in this topic area as well. The reason we're so confident of the current state of knowledge is not somehow that physicists and cosmologists suffer from hubris that blinds them to sophomoric criticisms, though sadly there are a number of people who continue to grind that grift.


(2018) though I don't expect much has changed...


What about the existence of magnetic monopoles?


Well the standard model doesn’t predict them, and apart from one detection that has never been reproduced we’ve never detected any pre-existing ones or any created in a particle accelerator.


The standard model doesn't predict the graviton either; nonetheless, they mentioned it in the article.

Does the SM theorize about the impossibility of the monopoles' existence?


We have detected the effects of gravitons (...if it's gravitons and not something else) at macroscopic scales, however we have not done the same with monopoles. So that comparison is not apples to apples.

I have not heard anything about the SM predicting them being impossible, but I assume it does given that I haven't heard anything about the subject. Source: I have a master's in physics and a friend of mine did his bachelor's thesis in monopoles in classical electrodynamics (albeit modified) and his master's in quantum ones, so he would've told me if he knew.

However, if any GUT theory we have come up with is correct (because a few exist), then that does imply the existence of monopoles.


Gravitons are a universal prediction, so one doesn't get any credit for predicting them. _Any_ low-energy quantum field theory approximation to gravity will have them, so long as it looks like GR at large enough distance scales.


Can wave-particle duality be considered as part of the riddle?


Many years ago, a friend (who is far smarter than I am) said something to me that made wave-particle duality click:

"It's not that it's sometimes a particle and sometimes a wave. It's that very small things have a bunch of properties: some are shared with waves, and some are shared with small solid objects."

We're duck-typing what we see and going "aha! It's a particle! No wait, now it's a wave!" but the class simply has both sets of methods on it and does not care how we classify it.


That doesn't really help me understand how a single 'thing' interferes with itself when travelling through a double slit, unfortunately.


If you view a body of water as a single thing, would you think it odd that its wave interferes with itself in the double slit? The issue isn't unintuitive phenomena ("intuitive" meaning patterns at the microscopic/macroscopic scale that match, by analogy, patterns you've learnt all your life from phenomena at your own scale), but inadequately modeled phenomena.

https://youtu.be/citY6G8ePJw?t=223

> But what can I call it? I can say they behave like a particle-wave or they behave in a typical quantum mechanical manner. There isn't any word for it. If I say they behave like particles, I give the wrong impression; also if I say they behave like waves. They behave in their own inimitable way. Which, technically, could be called the quantum mechanical way. They behave in a way that is nothing like anything you have seen before. Your experience with things you have seen before is inadequate, is incomplete. The behavior of things on a very tiny scale is simply different ... Well, there's at least one simplification, at least electrons behave exactly in this respect as photons, that is they're both screwy, but in exactly the same way ... But the difficulty really is psychological, and exists in the perpetual torment that results from your saying to yourself "But how can it be like that?", which really is a reflection of an uncontrolled but utterly vain desire to see it in terms of some analogy with something familiar. I will not describe it in terms of an analogy with something familiar. I'll simply describe it


Before opening the link, I read the text and thought "This sounds like Feynman!"

The man truly had a way of speaking that is instantly recognizable.


Because that's a thing it can do. Its properties include self-interference.

You are building your expectations of 'what a thing can do' based on the macro world you live in and the things you see and experience 'up here'. But down there, the rules are simply different.

Our expectations and intuition are simply not evolved to handle situations that occur in that level of reality. We've never thrown a rock and watched it self-interfere. But we should throw a lot of doubt at anyone who claims that it's all very natural to them, imho.


Sure, but there are still different interpretations of QM to sort out, and electron jumps have recently been measured to take time, which contradicts the idea that jumps are instantaneous. We don't know everything about QM, so it's right to push back somewhat on claims that it's just "different". Isn't that basically saying "shut up and calculate"?


I always struggled with that too - I found this Veritasium video helpful

https://www.youtube.com/watch?v=WIyTZDHuarQ&ab_channel=Verit...


Never heard of this, looked up whether there was some gap in the theory the video failed to mention, found a discussion on why this model isn't more pervasive: https://physics.stackexchange.com/questions/341400/why-would...

& one comment which specifically separates pilot waves from the bouncing droplet demo https://old.reddit.com/r/quantum/comments/7crdz6/whats_wrong...


I've always wondered: if the particle is traveling at the speed of light, is there any sense of sequence in that frame? Interference would require it to be at point A before point B. If I understand it correctly, for the particle no time passes between when it is emitted on the front side and detected on the back side.

So could that interference just be what the path of least resistance looks like in a different frame?



The wave/particle duality is just an illusion, a feature of how we model physical objects, and ultimately how we teach physics.

Neither waves nor particles exist in the real world, but we do model some features of the world through them. Particles and waves are just the solutions to the differential equations found in classical mechanics and electromagnetism (which, interestingly enough, rely on the same mathematical framework). Then, when teaching about them, we use analogies from the perceptible world. (And those analogies are wrong, btw; waves in the sea don't really behave like physics "waves".)

In quantum mechanics, the equations are completely different ones, based on a completely different branch of mathematics. Then unsurprisingly their solutions have little in common with those above.


I have a particle physicist friend I asked this very question to last year. He laughed and told me that this is not something that keeps physicists up at night. After doing a fair bit of my own research and watching a lot of PBS Spacetime it does make more sense to me. Someone with more knowledge please check my understanding: Quantum Field Theory treats everything as a wave and 'particles' arise as a result of a 'collapse' of that more fundamental wave (probability distribution) as we decide to probe our uncertainty budget by getting specificity on the locality of the particle. It can also collapse of course by interacting with a combinatorial explosion of other particles (such as the back wall of the double slit experiment).

If the above is accurate, I still think the collapse is a strange phenomenon that we shouldn't just take on faith (without probing deeper, of course): the disagreement between Copenhagen and Many Worlds (and the lack of a testable hypothesis) seems to indicate the collapse itself isn't well understood [1]. Many Worlds seems to offer an elegant solution, but it needs experimental support and wasn't (and probably still isn't) 'accepted' by the overall community.

[1] https://en.wikipedia.org/wiki/Wave_function_collapse#History...


There is no experiment that can settle this: collapse could be shown to exist, but it cannot be shown not to exist, only not to occur for a particular system at a particular scale. In other words, experimenters can push the possible size of a quantum superposition upwards, but they will never be able to disprove the claim "if it were but a bit bigger, it would collapse."


I remember a physics PhD who thought that 'waveform collapse' was not a transition from 'waveform' to a definite measured value. Instead, he felt that 'measurement' was just becoming entangled with the wider world, which causes the waveform to converge toward a Dirac delta distribution.

Are there reasons to assume that 'random' entanglements cause the waveform to 'concentrate'? It would need to be that the probability of concentration at a given 'point' is proportional to the square magnitude of the waveform. Has this been studied?


That is the correct model of what wavefunction collapse is. And yes, there is a huge amount of research into the effects of random entanglements; https://en.wikipedia.org/wiki/Quantum_decoherence is the general concept.
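
A toy sketch of how that plays out (my own illustration under simplified assumptions, not something from the linked page): a system qubit in an equal superposition gets entangled with N environment qubits, and the off-diagonal element of its reduced density matrix, the term responsible for interference, shrinks exponentially with N.

  import numpy as np

  # System qubit starts in (|0> + |1>)/sqrt(2), so its density matrix has
  # off-diagonal element 0.5. Each environment qubit ends up in state e0 if
  # the system is |0>, and in e1 if it is |1>. Tracing out the environment
  # multiplies the off-diagonal by <e0|e1> once per environment qubit.
  def offdiag_after_entanglement(n_env, theta=0.3):
      e0 = np.array([1.0, 0.0])
      e1 = np.array([np.cos(theta), np.sin(theta)])  # nearly parallel to e0
      return 0.5 * abs(e0 @ e1) ** n_env

  for n in (0, 1, 10, 50, 200):
      print(f"{n:4d} env qubits -> interference term {offdiag_after_entanglement(n):.3e}")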


Very cool, thanks!


If I have some system in a quantum superposition state and I simultaneously measure it in n different ways, will all n measurements produce the same result?


I think the problem is that you can't 'simultaneously' measure anything. Measurement means interacting with the wave function in some way.


The wave function can't interact with two (unentangled) systems at the same moment?


Not really in the same way.

The Standard Model is built out of quantum field theories that take as a given our experimental results showing that matter on quantum scales is unlike the "large scale" matter we see around us.

The problems described in the article are unexpected results that we see in our experiments compared to the Standard Model's predictions; the "weird/nonintuitive" aspects of modern particle physics would be a separate article.


No, physicists are pretty comfortable with that one. It isn't immediately intuitive, but the models we have are powerful and predictive. There is of course a somewhat open question as to the interpretation of quantum mechanics. That question is almost metaphysics, whereas the problems quoted in the article are more holes in our present theory.


I thought the most surprising thing about wave/particle duality is that the act of observation itself is what causes the waveform of particles to collapse. Do we know of a mechanism for that? Are we not still surprised that you change the outcome of events by changing where you look?


No one (today) really believes that it is the observation that 'causes' a change in the experiment. What is happening is that the observer is becoming entangled with the experiment.


I don't really understand the difference. It sounds like you're saying the act of observation entangles you, and the entanglement is the cause of the collapse. It's just adding one more link in the causal chain.


The difference is that there is no special thing called 'collapse'; when you write down the laws of physics, it doesn't exist. It's an emergent phenomenon, and it's not surprising that it happens, although it's very counterintuitive _if_ you start from the naive model of treating quantum waves like classical ones.


As I understand it, collapse is an artifact of the necessity to derive classical results from an experiment. You can think of it as "at a certain scale, calculating this as a quantum system as opposed to classical is no longer worth it." But it's of course still a quantum system, because everything is quantum all the time.


I don't think so. The double-slit experiment demonstrates light acting as a wave in some circumstances and as individual photons in others [1].

> In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.[5][6] The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.[5][7] However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen.[8] Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave).[9][10][11][12][13] However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. These results demonstrate the principle of wave–particle duality.[14][15]

This is the part I'm talking about, mainly.

> However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through.

Once they are measured, you know where they are and they stop acting like a wave.

1. https://en.m.wikipedia.org/wiki/Double-slit_experiment
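
A rough numerical illustration of that point (my own toy numbers and code, nothing from the article): each photon lands at a single spot, but the spots are drawn from |psi_1 + psi_2|^2, so the fringes only show up in the accumulated counts.

  import numpy as np

  wavelength = 500e-9   # assumed: 500 nm light
  slit_sep   = 50e-6    # assumed: 50 micron slit separation
  screen_d   = 1.0      # assumed: 1 m slit-to-screen distance

  x = np.linspace(-0.02, 0.02, 2000)                          # screen positions (m)
  delta = 2 * np.pi * slit_sep * x / (wavelength * screen_d)  # phase difference
  p = np.cos(delta / 2) ** 2                                  # two-slit intensity ~ cos^2
  p /= p.sum()

  rng = np.random.default_rng(0)
  hits = rng.choice(x, size=100_000, p=p)                     # discrete, particle-like hits
  counts, _ = np.histogram(hits, bins=80)
  print(counts)                                               # fringes appear in the counts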


Well, as I understand it, the problem is that if you measure which slit the photon passes through, then the superposition is no longer "left slit or right slit"; it's "left slit and the sensor fired, or right slit and the sensor did not fire", and the paired slit/sensor configurations can't interfere with each other anymore because the sensor states differ. In other words, this is what you'd expect to see whether or not there is a collapse.

For instance, consider https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser which demonstrates that the sensor state has to be part of the superposition instead of causing an irreversible collapse: if you store the which-slit information until the photon hits the back wall and then erase it, an interference pattern emerges once again.
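
A minimal sketch of that point (my own toy construction, with assumed slit geometry): the screen probability carries a cross term weighted by the overlap of the two detector states, so an orthogonal "which path recorded" detector state wipes out the fringes without any explicit collapse step.

  import numpy as np

  x = np.linspace(-0.02, 0.02, 5)                    # a few screen positions (m)
  delta = 2 * np.pi * 50e-6 * x / (500e-9 * 1.0)     # toy slit geometry (assumed)
  psi1 = 1.0 + 0j                                    # amplitude via slit 1 (reference)
  psi2 = np.exp(1j * delta)                          # amplitude via slit 2

  d1 = np.array([1.0, 0.0])                          # detector state "saw slit 1"
  d2_marked   = np.array([0.0, 1.0])                 # orthogonal: which-path recorded
  d2_unmarked = np.array([1.0, 0.0])                 # identical: nothing recorded

  def screen_prob(d2):
      # P(x) = |psi1|^2 + |psi2|^2 + 2 Re[conj(psi1) * psi2 * <d1|d2>]
      return np.abs(psi1)**2 + np.abs(psi2)**2 + 2 * np.real(np.conj(psi1) * psi2 * (d1 @ d2))

  print(screen_prob(d2_unmarked))   # varies with x: interference fringes
  print(screen_prob(d2_marked))     # flat: which-path info kills the cross term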


Well, they're comfortable because the phenomenon is well studied: we have very good models and can predict with good accuracy what will happen. What we don't know is why exactly it behaves this way.


The word "why" (and other references to causation) don't work in a normal way when you're talking about fundamental physics, because fundamental physics is the lowest-level model within which causation takes place.

To ask "why" some part of the standard model is the way it is, you either invoke some rubric for comparing models, like Occam's Razor; or you indulge in metaphysics and speculate on the nature of whomever is running the Simulation. Either of these is a different meaning than the "why" of "why do rockets work in space."


Well, you made quite a few assumptions here. We don't yet know how fundamental quarks really are, or what (if anything) hides at the sub-quark level.

Given the lack of adequate instruments, our minds and unconventional approaches are our most powerful tools at this point. For example, we generally consider the space between particles to be void. We can't see anything, we can't detect anything, so we assume there's nothing there. But for what it's worth, there could be trillions of unknown particles that don't interact with matter. So why would it matter, you ask? Because it's not impossible that under certain conditions some of these might cluster into matter or interact with it in unobvious ways. The existence of dark matter and dark energy (or the related phenomena) indicates this is not impossible.

But yes, I see your point. However, I hope we get deeper into understanding this phenomenon before I die. Who knows, maybe it's because of some yet-undiscovered aspect of photons, and metaphysical speculation is not necessary?


One explanation is the "Many-Worlds" interpretation of quantum mechanics: https://en.wikipedia.org/wiki/Many-worlds_interpretation


No, the SM is a quantum field theory: particles are excitations of quantum fields, so wave-particle duality is implicit. These fields, at the classical level, are functions that take a spacetime coordinate as an argument (they are spacetime-filling) and typically yield a vector in some vector space (except the Higgs field, which is a scalar). This vector space in turn is a representation of some symmetry group; the symmetry group of the SM is U(1)xSU(2)xSU(3).
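
In symbols (standard notation, spelled out here only for concreteness), a classical field is a map from spacetime into a vector space that carries a representation of the gauge group:

  \phi : \mathbb{R}^{1,3} \to V, \qquad V \ \text{a representation of} \ G_{\mathrm{SM}} = U(1)_Y \times SU(2)_L \times SU(3)_C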


Afaik, the many-worlds interpretation (MWI) trivially explains all these "duality mysteries", but people don't like it for philosophical reasons. A single world with randomness is a much cozier place to live in than a cold, super deterministic "matrix" that doesn't even leave room for "free will".


No, it's quite clear in QFT that elementary entities:

- travel as unitary time-reversible non-local waves (propagate, interfere, entangle)

- interact as non-unitary time-irreversible local particle events (position, time, exchange, create, destroy)

Your interpretation of quantum mechanics will determine how you imagine the two views connect (Born rule, Many Worlds, ...)
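
For concreteness, the textbook rules behind those two bullet points (standard formulas, not tied to any particular interpretation): unitary Schroedinger evolution for propagation, and the Born rule for detection events.

  i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle \qquad \text{(propagation: unitary, reversible)}

  P(x) = \bigl|\langle x \mid \psi \rangle\bigr|^{2} \qquad \text{(detection: probabilistic, effectively irreversible)}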


I don't think it's correct to say that they interact as particle events, because off-shell interactions are a thing, and interacting strictly as particles would prohibit that, no?


The Standard Model and particle physics have become basically a religion for some scientists.

> “As for the question ‘What are we?’ the Standard Model has the answer,” says Saúl Ramos, a researcher at the National Autonomous University of Mexico (UNAM). “It tells us that every object in the universe is not independent, and that every particle is there for a reason.”

Because this makes no sense unless you're operating under some sort of belief system.


Except that all those physicists will tell you that they know that the Standard Model is incomplete, and therefore it is in some sense wrong. Religious people won't tell you that about their holy books.


History suggests that's not always the case.



