The Webb Telescope further deepens the Hubble tension controversy in cosmology (quantamagazine.org)
340 points by nsoonhui 3 months ago | 346 comments



If you’re interested in learning more about the rich human history and ingenuity underpinning the Hubble “constant”, please do yourself a favor and scroll through The Cosmic Distance Ladder by Terence Tao of UCLA: https://terrytao.wordpress.com/wp-content/uploads/2010/10/co...

The slides are delightfully visual and comprehensive yet terse, walking you up the rungs of the cosmic ladder from the Earth through the moon, sun, and beyond. I can almost guarantee you’ll learn something new and fascinating.


There's also the video of this talk: https://www.youtube.com/watch?v=7ne0GArfeMs


He is a really good speaker! I also recommend his recent talk on AI in math research.


Delightful, thanks


I really like all the caveats and the time taken to explain things in the first part of that document, but later it starts to rush and gloss over important details and caveats. On page 151 of that link, when it starts talking about using parallax to measure the distance to nearby stars, it says "However, if one takes measurements six months apart, one gets a distance separation of 2AU." This is obviously incorrect because the whole solar system is orbiting around the galactic core, which itself is moving with respect to the CMB rest frame. I did a quick calc based on the 552.2 km/s galactic velocity value from the Milky Way wiki [1] and found that it moves an additional ~58 AU in 6 months (see the sketch below). I am assuming that this has been accounted for by scientists, and is being simplified to make it more digestible for the reader, but it hides a rather large dependency for every higher rung on the cosmic distance ladder: a cosmic velocity ladder that seems to be based on Doppler CMB measurements [2]. If we are indeed using measurements many months apart and under- or overestimating our velocity through the universe, even a little bit, wouldn't every higher rung of the ladder be affected?
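A quick sanity check of that arithmetic, as a minimal sketch (the only inputs are the 552.2 km/s figure quoted above and the length of half a year):

    # How far does the solar system move in six months at the
    # ~552 km/s CMB-frame velocity quoted on the Milky Way wiki page?
    SECONDS_PER_HALF_YEAR = 0.5 * 365.25 * 86400   # ~1.578e7 s
    KM_PER_AU = 1.495978707e8                      # IAU definition of the AU

    v_kms = 552.2                                  # assumed velocity, km/s
    distance_au = v_kms * SECONDS_PER_HALF_YEAR / KM_PER_AU
    print(f"{distance_au:.1f} AU")                 # ~58.2 AU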

In the process of writing this, I thought "Surely we have launched a satellite pair that can take parallax measurements at similar times in different places!" They could range off of each other with time-of-flight, be positioned much further apart than a few AU, and take parallax star measurements at more or less the same time without atmospheric distortion, but it doesn't seem like we have. Both Hipparcos and Gaia were satellites deployed to measure parallax, but not as a pair. My reading suggests they used multi-epoch astrometric observations (speed-ladder dependent) to generate their parallax measurements, and it seems our current parallax and star catalogues are based on the measurements taken by these two satellites. New Horizons got the most distant parallax measurements by comparing simultaneous* Earth observations, but it was limited to Proxima Centauri and Wolf 359, far from a full star catalogue.
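To get a feel for why longer baselines help, here is a minimal sketch; the ~47 AU Earth-to-New Horizons separation is my assumption for the 2020 observations, not a figure from the source:

    def parallax_arcsec(baseline_au: float, distance_pc: float) -> float:
        """By definition of the parsec, a 1 AU baseline at 1 pc
        subtends 1 arcsecond (small-angle regime)."""
        return baseline_au / distance_pc

    proxima_pc = 1.301                         # distance to Proxima Centauri
    print(parallax_arcsec(2.0, proxima_pc))    # 6-month Earth baseline: ~1.5"
    print(parallax_arcsec(47.0, proxima_pc))   # Earth vs. New Horizons: ~36"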

I would love if someone more knowledgeable can steer me towards a paper or technique that has been used to mitigate the cosmic distance ladder's dependency on this cosmic speed ladder. Regardless of how certain we think we are of our velocity through the universe, it seems to me that sidestepping that dependency through simultaneous* observations would be worthwhile considering how dependency laden the cosmic distance ladder already is.

[1] https://en.wikipedia.org/wiki/Milky_Way

[2] https://arxiv.org/pdf/astro-ph/9312056

* Insert relativity caveat here for the use of "simultaneous". What I mean in this context is more simultaneous than waiting months between measurements.


What if the universe doesn't expand at all? What if we're completely wrong and redshift is caused by something else entirely, like some yet-undiscovered phenomenon that occurs to spacetime or electromagnetic waves? How can we be so sure it's space that's expanding, not time?

The more I read about this, the more it feels like phlogiston theory[1]. Works great for describing observations at first, but as more observations are made, some contradict the theory, so exceptions are made for these cases (phlogiston must have negative mass sometimes/there must be extra matter or energy for galaxies to spin as fast as they do), and then finally someone discovers something (oxygen/???) that explains all observations much simpler and requires no weird exceptions.

[1] https://en.wikipedia.org/wiki/Phlogiston_theory


> What if the universe doesn't expand at all?

Not possible. Redshift is not the only observation we have. The totality of all the observations we have cannot be explained in any other way than an expanding universe.

> How can we be so sure it's space that's expanding, not time?

Our best current model does not say "it's space that's expanding, not time". It says that in one particular frame (the comoving frame), the overall spacetime geometry can be described using a "time" that always corresponds to the time of comoving observers and a "space" whose scale factor increases with that time.

> The more I read about this, the more it feels like phlogiston theory

This is an extremely unjustified comparison. Phlogiston theory never accounted well for actual observations.

> as more observations are made, some contradict the theory

None of the observations being discussed contradict the general model of an expanding universe. They only pose problems for the indirect methods we use to convert our direct observations into model parameters.


I agree with you on the overall point, but not with this statement:

> Not possible.

In all honesty, cosmology rests on the principle that physics is the same in all directions, over all translations, and over time translation. While this is a good assumption (good luck testing alternatives!!), there are a variety of papers exploring the topic of how much these assumptions would need to be violated to mirror observations.

A good example: what if the electron was more massive in the past? All redshift would then be explained away ;)

P.S.

There are very good reasons to believe that the electron was not more massive in the past.


> Cosmology rests on the principle that physics is the same in all directions, over all translations, and over time translation.

That the laws of physics are the same. That doesn't mean the geometry of spacetime is the same or that the configuration of matter and energy is the same.

> There are a variety of papers exploring the topic of how much these assumptions would need to be violated to mirror observations.

What papers? Do you have any references?

> A good example: what if the electron was more massive in the past? All redshift would then be explained away ;)

What is your basis for this extraordinary claim?


It’s not just phlogiston, it’s the lifecycle of all scientific theories that they’re used for as long as they make accurate predictions, then we start seeing things they mis-predict, then they’re revised or replaced. You seem to think the expanding universe theory can still be saved by some data artifact or parameter tweaking, but that’s been hunted for years and we’re still at “we just can’t make it match everything we’re seeing”. Historically, that’s what precedes significant revision or replacement.

> Redshift is not the only observation we have.

What else is there?


> it’s the lifecycle of all scientific theories that they’re used for as long as they make accurate predictions, then we start seeing things they mis-predict, then they’re revised or replaced.

No, that's not what always happens with scientific theories. For example, Newtonian mechanics is still used, even though we now know it's not exactly correct; it's an approximation to relativity that works reasonably well for weak gravitational fields and relative speeds that are small compared to the speed of light.

The "mechanics" that Newtonian theory (and its predecessors, Galilean mechanics and Kepler's model of the solar system) replaced were indeed replaced--nobody uses Aristotelian physics or Ptolemaic cosmology any more, not even as approximations. But that does not always happen.


Galileo’s principle of relativity is used all the time.


> The "mechanics" that Newtonian theory (and its predecessors, Galilean mechanics and Kepler's model of the solar system) replaced were indeed replaced--nobody uses Aristotelian physics or Ptolemaic cosmology any more, not even as approximations. But that does not always happen.

Yeah, about that... A not-insignificant number of people would have a hard time explaining why the moon has phases, let alone be able to do something like explain the sidereal day.


>> What else is there?

The relative uniformity of the CMB, which points to a past when the universe was more homogeneous than it could/should be given speed-of-light limitations on communication between distant locations. The only answer is that those locations were at some point in the past much closer together.


> The totality of all the observations we have cannot be explained in any other way than an expanding universe.

Surely there are infinite other possible explanations that fit the finite number of data points available to us. Probably what you meant is that the expanding universe theory is the simplest of them all and creates fewer problems than the others.


> Surely there are infinite other possible explanations

If you think there are others, please exhibit one.

> Probably what you meant is that the expanding universe theory is the simplest of them all and creates fewer problems than the others.

There are no other theories that I'm aware of that account for all the data we have, even approximately.


> If you think there are others, please exhibit one.

I could easily do that (god tinkering with our measurements? we live in a simulation?) but explanations by themselves are worthless. Any set of facts can be "explained" but it doesn't really help.

Science is concerned with theories that don't just explain known facts and measurements but also predict new ones. These are also known as falsifiable theories, because a theory is falsifiable iff it predicts at least one new observation that can actually be made and tested.

Now even these can obviously be constructed ad infinitum. For your question of alternatives to expansion, take a theory from the same field (e.g. tired light), then take the facts it doesn't fit and make specific ad-hoc carve-outs for those facts in the theory. Sure, that makes it ugly and complex, but it remains a falsifiable scientific theory. Furthermore, this exact thing happens very often in the scientific community, because theories don't fall immediately when the first contradiction is found. They only fall when a better theory appears to supersede them, and until then they just develop a "protective belt of ad-hoc assumptions", as Lakatos called it. No need to mention that the same thing will happen to the "better theory" in due time.

I write all of this not because I have a good cosmological theory sitting in my closet, but because your statement "The totality of all the observations we have cannot be explained in any other way than an expanding universe" is outright false. As I show above, not only can it be explained in an infinite number of ways, but you can also construct an infinite number of scientific theories that fit the totality of observations. Because this "totality" is, alas, finite.

This reminds me of my school math teacher who once told me that "the sequence 1, 2, 6, 24, 120, 720 cannot be explained any other way than by n!". A pity I didn't know about Lagrange polynomials at that age and had to spend the whole evening constructing a fitting polynomial by hand.
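For anyone who wants to see that construction, here is a minimal sketch (numpy's polyfit in place of a hand-built Lagrange polynomial):

    import numpy as np

    n = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    y = np.array([1, 2, 6, 24, 120, 720], dtype=float)

    coeffs = np.polyfit(n, y, deg=5)   # unique degree-5 interpolating polynomial
    print(np.polyval(coeffs, n))       # reproduces all six terms (up to float noise)
    print(np.polyval(coeffs, 7.0))     # ~2921, not 5040 = 7!

The polynomial "explains" every data point we have, and only disagrees with n! on the as-yet-unobserved seventh term.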


A big problem of modern cosmology is not only that we have observations that do not fit the models, but that we do not know whether the observed discrepancies are real problems with the models or artifacts of the calculations.

For example, to simulate a galaxy one should use a model based on General Relativity. But the mathematics behind GR is too complex to allow simulations at the scale of a galaxy, even with all that modern computational power. So instead of GR the calculations use Newtonian mechanics with minimal corrections for the light-speed limit. Plus there are a lot of other simplifications, like replacing star systems with hard balls that bounce off each other when they collide, with no notion of matter transfer or ejection in the process.

Then we see that the simulation does not fit the data. A typical explanation is the dark matter hypothesis. But this is unjustified.

We have no proof that the numerical simplifications are mathematically valid and do not lead to a big error on a galactic scale. Moreover, there were recent papers that tried to account for at least some effects of General Relativity. Apparently that was enough to explain at least some of the effects previously attributed to dark matter.

So it may be that dark matter is just an artifact of inaccurate simulations.


Not a cosmologist, so I'll defer to any simulators who are more up-to-date with the field than I am. But the papers linked from Wikipedia are about fitting models to existing data: by using a more complete model, they can better constrain the values in the models. The massive 3D simulations are looking at a different problem, and go through a rigorous level of validation and cross-checking, with different microphysics tested and examined. Both dark matter and dark energy fall out of GR; they can both be zero, but they can also be non-zero.


Do you maybe have links for the papers you mention?


See the references on the Wikipedia page https://en.m.wikipedia.org/wiki/Galaxy_rotation_curve under "alternatives to dark matter".


There are a fair number of extensions to GR that can reproduce the standard cosmology, typically avoiding or recasting DM, DE, or both. Examples include generalized teleparallel gravity and Cotton gravity. In such theories, any solution of the EFEs is also a solution of the (sometimes very different) field equations of these families of theories, although the field content may have a somewhat different physical interpretation from GR.

However, generically, these extensions tend to have an under-determination problem, frustrating attempts to arrive at a unique spacetime for a given distribution of matter, or a unique distribution of matter which can exist in a given spacetime (or both). That makes them less attractive than GR, or possibly even outright unsuitable as bases for initial-values formulations (and thus they are unlikely to overthrow numerical relativity soon).


> If you think there are others, please exhibit one.

If everyone like you attempts to railroad the imaginative process at the beginning of hypothesis formation, then we'll never get to the point of being able to exhibit one, should one be possible.

The demand for rigour at this point in a discourse - which was pretty clearly signalled by the commenter to be offered at a stage prior to substantive hypothesis formation - just shuts down the imaginative process. It's not constructive.


There's a difference between being closed-minded and saying "yes, we've obviously thought about this thing that you, someone with no apparent background in our field, thought of in ten seconds". And if you're an expert in any field that gets a lot of people who are interested, but not a lot of people who are experts, you hear these kinds of half-baked theories all the time, often with this exact "oh you orthodox experts just can't handle my field-disrupting free-thinking!" kind of framing.

I'm a mathematician by education, and I cannot tell you how many people insist on things like 0.999... < 1 without an understanding of (a) what the left side of that expression even means, (b) what a real number is, or (c) what basic properties the real numbers have. Going "no, you're wrong, and it would take me a couple of full lectures to explain why but trust me we're pretty sure about this" is a reasonable answer to that, provided that you have indeed established that to your own satisfaction, at least.


https://en.m.wikipedia.org/wiki/0.999...

From Wikipedia, an intuitive explanation of an elementary proof:

> If one places 0.9, 0.99, 0.999, etc. on the number line, one sees immediately that all these points are to the left of 1, and that they get closer and closer to 1. For any number that is less than 1, the sequence 0.9, 0.99, 0.999, and so on will eventually reach a number larger than it. So, it does not make sense to identify 0.999... with any number smaller than 1. Meanwhile, every number larger than 1 will be larger than any decimal of the form 0.999...9 for any finite number of nines. Therefore, 0.999... cannot be identified with any number larger than 1, either. Because 0.999... cannot be bigger than 1 or smaller than 1, it must equal 1 if it is to be any real number at all.

And then:

> The elementary argument of multiplying 0.333... = 1/3 by 3 can convince reluctant students that 0.999... = 1.


Sure, but we are talking about a physical theory here, not a mathematical one. There are always alternative physical theories, infinitely many. They may not be good theories, but they clearly exist. This has nothing to do with cosmology; it is more a fundamental principle of logic.

To put it another way, there is a big difference between saying some specific alternative theory is wrong/unlikely/bad, and claiming there exist no alternative theories at all, regardless of quality.


It's just that 0.999... is an awful notation, in the sense that it invites people to fill in the allusive ellipsis with whatever fits their intuition, possibly even with different meanings depending on the context.

If we mean the sum of 9×10^-i for i from 1 to infinity, then let's just state so. Let's not blame people for interpreting it in other directions when they are provided misguiding hints.

Regarding infinity, there is a constructive proof that it doesn't exist, which works perfectly provided that there is an infinite amount of resources allocated to its execution.


I don't blame people for finding it counterintuitive. Lots of things in math are counterintuitive. I spent like three months learning Galois theory and I'm still pretty sure someone just snuck in there and changed a 1 to a 0 to make that sorcery work.

My point is that it's not closed-minded of me if I fail to provide a complete answer to someone making such a claim, particularly if that person hasn't done any of the research or rigor to handle what is - by the standards of an expert - a pretty easy question to answer. Outsiders can occasionally produce great insights, but they do that through very hard work, not from ten seconds of thinking about a field they haven't learned anything but the pop-science version of.

Most of the theories being speculated about in this thread are veering into "not even wrong" territory, in that they're not even necessarily well-defined. When you're talking about cosmology you'd better bring your general relativity game, which means you better bring your differential geometry game, which means you better have a graduate-level mathematics education. I have a graduate-level mathematics education and on a good day I could half explain to you what a metric tensor is and what the hell it has to do with curvature ("it's, uh, it's kinda like a Jacobian I guess, except the dx and dy are local vectors that can't be translated globally around the space").

Without those tools, you don't even have meaningful notions of what "distance" even is on a cosmological scale, much less how it changes with time! It's like speculating about biology without knowing what a protein is, or speculating about computer science without knowing what a for loop is. It's just not going to get you anywhere.


Like everywhere else in the intertubes, people think their experience in one domain is relevant to every other specialty domain. We live in a conspiracy world now, where RTFM or "do the work" is a microaggression.


Whoa! What? Not a mathematician in any way (in case that isn't obvious), but I'd have totally thought 0.999... asymptotically _approaches_ 1, but never reaches it, and so is <1. Is there a short-form explanation (that I might have a chance of understanding, lol) of why that's incorrect? I'd love to have my mind blown.


There's a rigorous proof on Wikipedia, but there are simpler ways to show it.

For example, we know that 1/3 = 0.333...

3 * 1/3 = 3 * 0.333...

1 = 0.999...

You can also do it with subtraction. For example, suppose 1 - 0.999... = x. If x were greater than 0, then it would have to evaluate to 0.000...1.

But we can't have the digit 1 after an infinite number of zeros. If there truly were a "1" after infinite zeros, it implies reaching the end of infinity, which is a logical contradiction. So x can't be greater than 0.
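If you'd rather have a machine check the infinite sum, here is a minimal sketch using sympy (my choice of tool, nothing canonical about it):

    # 0.999... means the infinite sum 9/10 + 9/100 + 9/1000 + ...
    from sympy import Sum, Rational, symbols, oo

    n = symbols('n', integer=True, positive=True)
    print(Sum(9 * Rational(1, 10)**n, (n, 1, oo)).doit())  # prints exactly 1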


An alternative to the sibling comments:

In this context, the notation 0.999... does not represent a process. It represents a fixed number.

Which number? Well, if you reason through it you'll find that it has to be the same number as that represented by the notation 1.

An insight that is crucial (and pretty obvious in hindsight, though many people seem not to be exposed to it) is to distinguish between a number and its representations. "1" is not a number, it is merely the representation of a number. And numbers have many different representations. As a member of this forum, you can probably appreciate that "12" and "0xC" are two different representations of the same number. In the same way, 0.999... and 1 are different representations of the same number.


Sequences can approach things. The sequence 0.9, 0.99, 0.999, 0.9999 and so on asymptotically approaches 1. The difference between 1 and the Nth term in the sequence is 1e-N, which goes to 0 with N.

0.999...[forever] is not a sequence, it is a number. Numbers have values, they don't approach things. The misleading part is that 'forever' is not something about evolution or the passage of time. It's not 'happening' or 'sequential' like the sequence. There is no 'and then another 9'. All the 9s are really there, at once. And it is closer to 1 than any term in the sequence. Since the sequence gets closer and closer to 1, converging to it asymptotically, 0.999...[forever] cannot differ from 1; if it did the sequence wouldn't converge.


Thank you, and everyone else who answered (I hope they see this reply). Your distinction between "sequence" and "number", along with the mathematics of 0.333... = 1/3, convinced me - and my mind is successfully blown.

Follow-up: is it the same for other repeating sequence-looking numbers? As in, would 0.9333... = 0.94?


It's true of numbers whose decimal representation ends in an infinite string of 9s, so 0.939999... = 0.94. This is because we write numbers in base 10. If you write numbers in base 2, the same applies to representations ending in an infinite string of 1s, e.g. 0.11111... (base 2) = 1.
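In symbols, the base-2 case is the same geometric-series fact:

    0.111\ldots_2 \;=\; \sum_{n=1}^{\infty} 2^{-n} \;=\; \frac{1/2}{1 - 1/2} \;=\; 1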


0.9333... is equal to 9/10 + 1/30. To get 9/10 + 4/100 I think what you're aiming at is 0.93999...


1/3 + 1/3 + 1/3 = 3/3 = 1

.3… + .3… + .3… = .9… = 1

https://en.m.wikipedia.org/wiki/0.999...


0.xxx... is just a notation for certain fractions (specifically, the fraction x/9). If we set x = 9, then the notation 0.9999... is just a notation for 9/9 = 1. So it's just a silly notation for 1.


This actually makes the most sense to me. It’s an artifact of our chosen numerical notation system.


I think the simple one is x=0.999..., 10x=9.9999..., 10x-x=9.999...-0.999..., 9x=9, x=1


The simplest argument I can think of is to ask yourself: "Are there any numbers between 0.999... and 1?"

If not then it's logical to conclude they wind up at the same "place", that is, the same number. Or equivalently: it can't have any other value besides 1.

If you care about such things, then you're a mathematician.


It is a limit. A very powerful tool.

You can choose any arbitrary finite number in the sequence, and I can find a number greater. So the value of 1 exists, and it's a continuous function, so therefore the limit exists.

I do care about such things, and yes, a degree in math for me was just slogging through every math course the college had, with a long string of high grades, and honor roll achievements.


     1/6 = 0.166666...6
     1/6 = 0.166666...6
     1/6 = 0.166666...6
     1/6 = 0.166666...6
     1/6 = 0.166666...6
  +  1/6 = 0.166666...6
  ----------------------
         = 0.999999...6
...so that 6 times 1/6 is zero point 9 repeating, with a 6 on the end ;-)


There are a lot of proofs of this, but they all rely on a certain level of rigor about what a real number is - and that, it turns out, is a much more difficult question than it sounds like. You don't typically get a rigorous definition of the real numbers until well into a college-level math education.

-----

First, you're making a category error. "0.9999..." is a single value, not the sequence of values 0.9, 0.99, 0.999, 0.9999... Single values cannot "asymptotically approach" anything, any more than the value 2 or the value 7 can asymptotically approach anything. It's just a number like any other.

To show what value 0.9999... takes on, we need to do two things. First, we need to show that this notation makes sense as a description of a real number in the first place, and second, we need to show what that real number is (and it will happen to be 1).

-----

So, why is it a real number?

Well, remember what we mean by place value. 0.9 means "0 ones, 9 tenths[, and zero hundredths, thousandths, and so on]". 0.99 means "0 ones, 9 tenths, 9 hundredths, [and zero of everything else]". Another way to say this is that 0.9 is the value 0 * 1 + 9 * 0.1 [plus 0 times 0.01, 0.001, and so on], and that 0.99 is the value 0 * 1 + 9 * 0.1 + 9 * 0.01 + [0 of everything else].

What that means is that if 0.9999... means anything, it means 9 tenths, plus 9 hundredths, plus 9 thousandths, plus 9 ten-thousandths, plus 9 hundred-thousandths, and so on and so forth forever. In other words, 0.9999... is the value of an infinite sum: .9 + .09 + .009 + .0009 + ...

Infinite sums, in turn, are by definition the limit of a sequence. This is where that "asymptotic" thing comes back, but notice the distinction. 0.9999... is not the sequence, it is the LIMIT OF the sequence, which has a single value.

To show that it's a real number, then, we need to show that the limit of the sequence 0.9, 0.99, 0.999, 0.9999... does in fact exist. But this sequence is clearly increasing, and it is clearly not greater than 1, so we can (among other things) invoke the Monotone Convergence Theorem [1] to show that it must converge (i.e., the limit exists). Alternately, you can think back to your algebra 2 or calculus classes, and notice that this is the geometric series [2] given by sum 9 * 10^-n, and this series converges.
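Concretely, that geometric series evaluates to

    \sum_{n=1}^{\infty} 9 \cdot 10^{-n} \;=\; 9 \cdot \frac{10^{-1}}{1 - 10^{-1}} \;=\; 9 \cdot \frac{1}{9} \;=\; 1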

-----

Now, why is it equal to 1?

Well, there's a few ways to prove that, too. But the simplest, in my book, is this: given any two different real numbers x and y, I'm sure you would agree that there is a value z in between them (this is not a difficult thing to prove rigorously, provided you've done the hard work of defining the real numbers in the first place). The average of x and y will do. But we can flip that statement around: if there is NOT a value between two real numbers, those two real numbers MUST be equal.

In more symbolic terms, we claim that for all real numbers x and y such that x < y, there exists z such that x < z < y. So if there ISN'T such a z, then we must not have x < y in the first place. (This is the contrapositive [3], if you're not up on your formal logic.)

So consider the values of 0.9999... and 1. What value lies between them? Can you find one? As it turns out, no such value exists. If you pick any real number less than 1, your 0.99[some finite number of nines]9 sequence will eventually be bigger than it - and therefore, since the sequence is increasing, its limit must be bigger than that value too.

Since there are no numbers between 0.9999... and 1, they must be equal.

-----

[1] https://en.wikipedia.org/wiki/Monotone_convergence_theorem

[2] https://en.wikipedia.org/wiki/Geometric_series

[3] https://en.wikipedia.org/wiki/Contraposition


This is my (new) favorite answer. You've made something counter-intuitive seem simple and obvious. Fantastic!

(I've never done this before, but: a pox on those down-voting my original question. Learning is the very essence of "hacker"-dom. Thank you to all who have seen this and taken their time to teach me something.)


This needs to be repeated time and time again for people who deny the basic tools of Calculus, and will suffer the misuse of them. (Specifically, the sum of the sequence of the reciprocals of the natural numbers is equal to 1/12. I get that it is a useful tool for quantum chromodynamics, but it makes my skin crawl.)


It's one of the regrets of my life that I didn't take calculus my first term in college. I'd done pre-calc in high school, but I'm a Humanities guy, and maths - even when I "get" it (and I'd done fine in high school) - drops out of my head pretty quickly. By the time I had an opportunity to continue, and signed up for a "refresher" pre-calc course, I realized I'd have had to go right the way back and re-learn enough algebra and geometry that it was too heavy a lift. Maybe when I'm retired and have time on my hands I'll sign up for some courses at a community college. I still remember how satisfying it was to grok a problem or a concept that had seemed impossible.

So, ahem: is there any possibility that I'll be able to understand "the sum of the sequence of the reciprocals of the natural numbers is equal to 1/12"? I understand (I think!) each of the words in that sentence, but I can't make them make sense! I've got as far as

1 + 1/2 + 1/3, etc.

So how does something that starts off with 1 + [something] end up < 1?


It doesn't, and in fact I believe the person you're replying to has it confused with another result where the answer would also be "it doesn't".

The sum 1 + 1/2 + 1/3 + 1/4 ... - what we call the harmonic series - runs off to infinity, although it does so very slowly (the nth partial sum is approximately equal to ln(n) plus a constant whose value is about 0.58). Showing that this series diverges is standard calc-textbook stuff (it's the textbook example of the integral test for convergence, although there are plenty of other ways to show it).
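If you want to see that growth rate for yourself, here's a minimal sketch (gamma ~ 0.5772 is the Euler-Mascheroni constant):

    import math

    partial = sum(1.0 / n for n in range(1, 1_000_001))
    print(partial)                              # ~14.3927
    print(math.log(1_000_000) + 0.5772156649)   # ~14.3927: ln(n) + gamma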

However, in math well beyond basic calculus, there are methods for assigning meaningful values to series that don't converge in the conventional sense. Those methods assign the same value to convergent series as the regular old calculus arguments would, but they can also assign values to some divergent series in a way that is consistent and useful in some contexts.

For example, the series 1 + 1/2 + 1/3 + 1/4 + ... is a specific example of a more general series 1/1^z + 1/2^z + 1/3^z + 1/4^z + ..., for some arbitrary number z. This series only converges in the standard calculus sense when the real part of z is > 1, but it turns out that that's enough to define a function called the Riemann zeta function, whose input is the value z in the series and whose output is the sum of the series.

The zeta function, as it turns out, can be extended to values where the original series didn't converge. And doing so gives you a method for assigning values to "sums" that aren't really sums at all.

-----

It turns out that even THAT won't get you a completely nice value for 1 + 1/2 + 1/3 + 1/4 + ..., because that sum corresponds to the zeta function's value at z = 1. But the zeta function isn't well-behaved at z = 1. If you do even more massaging to beat a number out of it, you don't get 1/12, you get the about-0.58 constant mentioned in the previous section.

Which brings me to the way the poster you were replying to is probably confused. I think the sum they meant to refer to was the even-more-obviously-divergent sum 1 + 2 + 3 + 4 + ... This sum happens to correspond loosely to the z = -1 value of the zeta function, since 1/2^-1 is just 2, 1/3^-1 is just 3, and so on.

Again, the sum 1 + 2 + 3 + 4 + ... does not converge, but if we choose to identify it with a zeta function value, we'd identify the sum 1 + 2 + 3 + 4 + ... with zeta(-1). And the value of zeta(-1) happens to be -1/12 (yes, minus 1/12).
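You can check the continuation numerically; here's a minimal sketch with mpmath (any arbitrary-precision zeta implementation would do):

    from mpmath import zeta

    print(zeta(2))    # 1.6449... = pi^2/6; here the series genuinely converges
    print(zeta(-1))   # -0.0833... = -1/12; here it's continuation, not a sum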

----

So the answer here is just that no, 1 + a bunch of positive numbers is never < 1. It's actually a decent basic calculus exercise to prove that any series a_0 + a_1 + a_2 + ..., where all the terms are non-negative, either fails to converge or converges to a value >= a_0. But because we're taking some extra steps here to leave the domain of convergent series in the first place, it turns out we can get results that (for conventionally convergent series) would be impossible.


I suspect the downvotes are just because this is a well-known result whose proofs are rather easy to google. But I like running into one of today's 10,000, I guess. https://xkcd.com/1053/


Fair enough. I'm enough not-a-mathematician that I didn't know it was well-known, and wouldn't even have known how to google the proof! (Nor have much confidence I'd understand the real thing once I found it, lol.)

Anyway, I love that concept, and enjoy being on either side of the exchange. Thanks again.


> So consider the values of 0.9999... and 1. What value lies between them? Can you find one?

As long as it's true that you can always add two real numbers, there's always infinity + 1, or ultimately 0.9999... + 0. ...9


No one prevents new ideas from being presented, but simply suggesting the universe does not expand, without giving any arguments for this position or trying to explain the observed redshifts, contributes exactly nothing to the discourse. This is not constructive.


> at the beginning of hypothesis formation

But the expansion of the universe has been thoroughly studied for over a century. We're past the brainstorming phase.

I generally think people should brainstorm to generate ideas and then filter them down. And it's true that filtering too early can significantly decrease the quality of ideas.

And it's also true that in a place like Hacker News there will be smart people from all sorts of backgrounds getting to experience the joy of exploring a new topic that they're not fully up to speed on yet.

The risk though is that somebody who thoroughly understands the field is reading your comment. So for that reason I think it's a good practice to always be aware that technical fields we're not expert in are usually more subtle than we initially think.


When a random techie comes up with a completely novel hypothesis that contradicts a broad range of theories accepted by the vast majority of practicing physicists, the proper response is not to stop and say "Hmmm. I wonder if he's right. Let's talk about it."


Thanks, but I don't think it's a fair description of what happened here. I'm a mathematician who noticed that the statement "The totality of all the observations we have cannot be explained in any other way" is obviously false.

Explaining is neither hard nor useful and it's not what science is normally concerned with. The goal is to predict new observations not to explain known ones.


Why not? Seems perfectly normal to just talk about it.

I learn things all the time by wondering if somebody else is right. Much better than just thinking everything is simply the way I think it is now.

Even if you know somebody is wrong, talking about it is absolutely harmless.


Because it's noise, and noise is distracting, and there's a LOT of noise out there.

Part of being an expert is knowing how to filter out the noise so that you can actually get some work done.

If one wanted to deal with noise all day, they'd join SETI. Or parliament.

If someone has a novel theory, let them come up with evidence to support it, and clear identification of how the theory can be invalidated.

That's not closed-mindedness; that's pragmatism. Could they actually be right? Yes, in the same way that a baby could beat Muhammad Ali - but you won't see anyone lining up to buy tickets.

You only have so much time in your life, so no need to waste it on people's 10-second "theories", like "How about achieving faster-than-light communication by stuffing so many photons down the fibreoptic cable that they "push" each other faster?". Some ideas are just plain dumb and obviously not worth a trained person's time.


> If one wanted to deal with noise all day, they'd join SETI. Or parliament.

> If someone has a novel theory, let them come up with evidence to support it, and clear identification of how the theory can be invalidated.

Ok, sure, but you can’t really come to _Hacker News_ expecting the latter, or an absence of the former, so it’s a moot point and poor expectations.


You absolutely can and should when they come at you accusing you of being "closed minded", "unimaginative", and "not open to new theories" (see the discussion thread).

It's one thing to politely ignore crackpot theories or state the established facts in response, quite another when the crackpots start attacking you.


The issue is that these physics threads always end up the same, with commenters having only popular-level background offering suggestions they came up with in five minutes, mirroring obvious thoughts that actual physicists have of course already thought of decades ago, and in much more detail. It is really hard and quite unlikely to come up with novel ideas that haven't already been discussed and played out ad nauseam in that field.


I’m not suggesting the techie is correct, I just don’t think the right answer is complete dismissal instead of communication. Ok, so you know it’s obviously wrong, but there’s no obligation to then go and stifle their curiosity or imagination. Just don’t say anything or let them talk to somebody who has the time and care to indulge.

The original commenter I’m replying to, taken at their word, is ready to dismiss anything somebody says, regardless of merit, with no further discussion, just because they think there’s “no way” that person could be right. Which is a hilariously close-minded way to conduct oneself.


This is fine the first few times, but after a few dozen it gets exhausting. This is also not about stifling their curiosity or imagination. It's about understanding, in a field as advanced as physics and cosmology, how incredibly unlikely it is for some layman to come up with a worthwhile idea that hasn't already been tackled. To even be able to explain why an idea is impractical or beside the point, a solid knowledge and understanding of the field is often already necessary. Articles like those from Quanta Magazine dress the topics up in language that makes them seem substantially simpler, and closer to human intuition, than they actually are.


Once we have open source AI training data and AI historians, they could help make real science directly available instead of pop magazines.


Quanta Magazine generally writes very well-written, well-researched and pretty comprehensive articles. They cite their sources and they're very careful to get the science as correct as they possibly can (I'm a scientist, and they once contacted me to fact-check one of their articles).

Comparing their work to the dross that AI produces is insulting.


Insulting? Sorry! I didn't mean to say "pop mag" in a bad way like "ai" in a bad way.

I'm actually excited for computers getting big enough to comprehend human language and science.

I hope one day Dross becomes Dos! AI needs more XP. VR AI and VR AI Science edu is what I'm most hyped for.


If you're not doing math, you're not doing physics.

Look up what General Relativity actually is, what it looks like: the stress-energy tensor and the extremely complicated underlying partial differential equations it is actually encoding. [1]

Every plain-language explanation is irrelevant: the mathematics works. If you have an alternative idea... then the mathematics needs to work. What that means is irrelevant, provided it makes useful predictions and does not contradict established observations.

[1] https://en.m.wikipedia.org/wiki/Stress%E2%80%93energy_tensor


> Ok, so you know it’s obviously wrong, but there’s no obligation to then go and stifle their curiosity or imagination.

Dismissing uninformed ideas does everyone a service. If you're a complete outsider to a field with no training in it, decide to come up with random ideas about difficult unsolved problems, and then feel stifled when an expert dismisses your ideas... well, that's a level of arrogance and hubris that I think is more than a little infuriating.

> The original commenter I’m replying to, taken at their word, is ready to dismiss anything somebody says, regardless of merit with no further discussion, just because they think there’s “no way” that person could be right. Which is hilariously close-minded way to conduct oneself.

That is a hilariously uncharitable interpretation of what they said.


Sorry, all this twaddle is like a culture war ... it's noise. Take the cosmologists and put them out of mind.

Now, exhibit a testable hypothesis. Better, try to explain the 67 to 75ish km/s/Mpc range via other methods.

Then we can talk. Nuff said!


I had the idea that each of these phenomena was being influenced by our local gravity well in a different way. Then I remembered my physics. Then I read a few of their papers. This is not just good science, it's great science.

I withdraw my idea, but continue to wave lengths of wire 11.8 inches long.

I do wish I could read a website called cosmogony news, every day.


Has anyone ever tested the theory of matter that is repelled by gravity instead of attracted? It would zoom away and separate, like helium escaping the Earth.


What theory is that? How would it be tested? Gravity is not even a force; it is a consequence of how space deforms around mass and energy. So, even if something like "negative mass" existed, it wouldn't repel, it would just cancel out.


https://en.m.wikipedia.org/wiki/Exotic_matter#Negative_mass

The cancelling out you're talking about is what happens with magnets and dipoles. What if negative mass was a monopole?

Testing the theory would use the same kind of tests created for testing the Big Bang.


This would be way cool. A way to rid science of randos and their theories. For the fourth time today, I have to admit being facetious.


Today is a new day.


Every day.


Same with room temperature highly conductive matter :( Every day I wish we had Hoverboards, Flying Brooms, and Magic Carpets. Instead... every day is another car ride to the grocery store.


Also

> The issue is that these physics threads always end up the same, with commenters having only popular-level background offering suggestions they came up with in five minutes

… is this really an issue?

I speculate you’ve taken something you don’t want to see personally and dressed it up as “the problem” when you could instead just find a way to be okay with it.

But maybe there’s something genuinely problematic with that behavior which I don’t know about.


Yes, it's an issue. I come here as a layperson who is interested in the universe; it's exhausting to read plausible-sounding but completely meritless theories brought up by people here with little more training than I have, and try to decide if these ideas are actually useful, or are just uninformed things some rando thought up in five minutes after reading a pop-sci cosmology book.

I'd much rather hear plausible theories made by people who actually know what they're talking about.


If you've spent any amount of time on HN reading physics posts, you will see it is absolutely an issue.


I remember the halcyon room temp superconductor days.


> mirroring obvious thoughts that actual physicists have of course already thought of decades ago, and in much more detail.

They will eventually die and a younger generation will enter the field who don't know why those ideas were dismissed. For all you know you're shutting down a 14-year-old (either directly or someone observing) who is actually interested and may become one of those physicists.

It's roughly equivalent to https://xkcd.com/1053/


> They will eventually die and a younger generation will enter the field who don't know why those ideas were dismissed

No, the younger generations are constantly entering the field, slowly learning and building up the tools required to think about these sorts of things. By the time they've done that, they don't need to have their silly ideas shut down, because they do that themselves, using the knowledge they've built.

And when they do have novel ideas, they have the mathematical and scientific tools to actually argue why their novel ideas deserve expensive, scarce telescope time, unlike the armchair pop-sci wannabe cosmologists (myself included) on HN.


On the one hand, I agree that those people are usually wrong and generally pretty annoying.

On the other hand, who cares? This is a random internet forum, not the Proceedings of the National Academy of Sciences, so maybe there is no such thing as a proper response?


What are you busy with? Are there any particular other subjects you would want to talk about instead?


It seems pretty silly to think that we are collectively “at the beginning of hypothesis formation” about the structure of the universe today, in 2024.


BS, nobody has to listen to your imaginative process. Imagine away, build something that conforms with the data, then show it!

For now, we don't know any other way to explain it than to say it expands, except maybe imaginative fantasies from amateurs on Hacker News, but do those count?


What if it expanded from anti-mass? Couldn't we form testable hypotheses from that?

I looked it up on the Wikipedia page for Exotic Matter and there isn't much exploration of it, or many tested theories.


You certainly could try. But don’t expect experts in the field to bother to do any of the legwork for you, and that includes learning about all the evidence that your model has to account for to be remotely as good as our existing ones.


It's exciting to me even just being able to ask experts whether anyone has tried before! Progress = legwork + new direction!


>> Surely there are infinite other possible explanations that fit the finite number of data points available

> If you think there are others, please exhibit one.

One easy process for generating infinite explanations that fit a finite number of data points is taking the simplest theory you have, and adding extra rules that don't affect any of the existing data points.

e.g., if the standard explanation for observations like redshift, the CMB, the abundance of light elements, etc. is H² = (ȧ/a)² = (8πG/3)ρ - kc²/a² + Λc²/3, one alternate explanation that fits all the data is H² = (ȧ/a)² = (8πG/3)ρ - kc²/a² + Λc²/3 + T, where T is the number of teacups in the asteroid belt. No observation has yet been made which would falsify this theory, and it fits the data just as well as the standard explanation. We reject it on the grounds of parsimony, not falsification.


> If you think there are others, please exhibit one.

One can construct such alternative theories quite easily: everything is exactly as the "expanding universe theory" predicts, except in a phone-booth-sized volume in a specific part of space 10 light-years away from Earth where the universe is contracting.

Does not explain anything, and it is not testable in any practical sense. So it is not a good theory in any way, but it is a different one and it matches all the current observations.


> If you think there are others, please exhibit one.

Benevolent giant omnitech space squid manipulate EM radiation incident on the solar system to fool our primitive meat brains and sand computers into thinking we're alone in the universe, so we don't go venturing out and embarrass our local cluster.

See? There exist infinite theories explaining any set of data points. Parsimony, choosing the simplest theory that explains the facts, is what lets us make progress understanding and predicting the world around us. It's hard to predict space squid.


That's like saying cars run because they want to, and only let us perfect the engine, and the processes that happen inside it, to fool us into thinking we have a say in whether a car will run.

Sure, it is a hypothesis, but thanks to the scientific method the majority of people with knowledge in the field know it's most likely BS.

Going "it can be for ANY reason, guys!" in any field will not get you far, regardless of whether you feel justified in your ignorance or not.


The funny thing is, though, you can easily dismiss a crazy idea like "omnitech space squid", but a majority of humans on Earth today have entire worldviews based on equally silly ideas, and it's considered wrong somehow to mock these belief systems. In fact, many scientists subscribe to these silly belief systems, but then get offended if you make up an equally silly idea about space squid or flying spaghetti monsters.


> Going "it can be for ANY reason, guys!" in any field will not get you far, regardless of whether you feel justified in your ignorance or not.

Of course not. The point is logical correctness: as a matter of logic, infinite theories explain any set of facts. Are almost all of these theories useless? Of course. We should restrict our attention to plausible theories.

But how do we decide which theories are plausible? We look for the ones that require the fewest assumptions.


Try that with computer technology. You are now not just scary, but really really scary.

"That is like saying computers run because they want to, and only make us perfect the CPU, and secret processes inside it, to fool us into thinking we have a say in whether a computer will run."

I am going to quietly turn off my computer.

I am just thinking about Stephen Wolfram's causal networks.


Whence cometh the squid? By what mechanism does it manipulate EM radiation? And what experiment could be devised to detect the squid?

That's not a theory, nothing has been explained, merely further convoluted.


> See? There exist infinite theories explaining any set of data points. Parsimony, choosing the simplest theory that explains the facts, is what lets us make progress understanding and predicting the world around us. It's hard to predict space squid.

It’s not a scientific theory though, which is a super important distinction in this discussion.


> Not possible

Isn't it the case that we don't actually know if the universe is expanding; we only know that, from our POV, things are moving away from us and from each other, based on models and observations that are approximations at best?

In that frame an expanding universe seems to be the simplest and most elegant solution, but it's entirely possible it's not the correct one.

For example: what if, on the antipodes of the universe (assuming something like that exists), things appear to move closer from the local POV? We'll never know.


>> The totality of all the observations we have cannot be explained in any other way than an expanding universe.

If someone has a theory that incorporates "the totality of all observations" then physics is over. Redshift explains most observations, and no other concept even comes close, but there are certainly things out there that remain unknown and are not explained by redshift. Dark energy is such a monumental observation that every theory in cosmology must remain caveated.


> Not possible. Redshift is not the only observation we have. The totality of all the observations we have cannot be explained in any other way than an expanding universe.

Well..? What are those other observations that point to expansion?


> What are those other observations that point to expansion?

The apparent brightness and apparent angular size of distant galaxies, and more importantly, the relationship between the three observables of redshift, apparent brightness, and apparent angular size. No known model other than the expanding universe predicts the actually measured relationship between those three observations.


That’s just redshift. Redshift alone wouldn’t be evidence of expansion, just of relative speed. When people say redshift evidence, they mean the relationship between redshift and the brightness of a standard candle. And regardless of whether you call it redshift or the relationship between redshift and something else, it would be impacted by a change to redshift.


It's not just redshift. If you look at very distant galaxies you see their apparent angular size is larger than you would expect.

https://en.wikipedia.org/wiki/Angular_diameter_distance

You know how distant objects appear smaller? Well, in an expanding universe that isn't completely true: very distant objects start looking bigger again. Roughly speaking, this happens because the distance between the paths different photons take to get to us gets stretched by the expansion.
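For reference, in a spatially flat FRW universe the apparent angular size θ of an object of physical size ℓ is set by the angular diameter distance:

    \theta = \frac{\ell}{d_A(z)}, \qquad d_A(z) = \frac{1}{1+z} \int_0^z \frac{c\, dz'}{H(z')}

In ΛCDM, d_A(z) peaks around z ≈ 1.6 and then shrinks, which is why sufficiently distant objects subtend larger angles again.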

There's also the very obvious observation of the cosmic microwave background, which isn't explained by any non-expanding universe model.


To expand on the CMBR bit here for readers who aren't as familiar with cosmology: the current temperature of the CMBR is a result of redshift, but that's not what establishes the need for expansion. No, the problem is that the CMBR seems to be in thermal equilibrium (or more properly to have been in thermal equilibrium at the time of its emission a few hundred thousand years after the Big Bang).

The problem with that is that, if the Universe had remained the same size and has the finite age it appears to have, there would not have been time for information from the "north" side of our observable Universe to reach the "south" side. After all, the "north" side's light is only just now reaching us, at the center of our observable Universe, and would have to travel a very long way again to reach the south side. In cosmological/relativity terms, we say they're not causally connected.

The obvious explanation for this is that the early Universe expanded from a very small region that was causally connected for long enough to reach thermal equilibrium, and then expanded. So while the "north" side of the observable Universe and the "south" side of the observable Universe cannot communicate from their current positions (and in an expanding Universe will never be able to do so again), they were able to communicate in the past for long enough to establish equilibrium.

Without expansion, you need a way for two patches of sky that have never been able to communicate with one another to somehow "agree" to be the same temperature. And that's pretty hard to do.


> The obvious explanation

No.

“The universe appeared everywhere at once at exactly the same temperature” is at the same level of obviousness, if not more.

For some reason a disclaimer is mandatory these days: I am not saying I know more than cosmologists, just that inflation is nowhere near the “obvious explanation” territory.


"Couldn't this be explained by suggesting that the speed of light was faster in the past?"

I wanted to address this in a few ways:

1. The speed of light is absolutely invariant; however, the space it travels through can not only be variant, but variant in ways that we are failing to understand - in two senses: one is that we are still reaching to understand it, and the other is that we may never understand it.

2. Is the speed of atoms invariant? I.e., is the temperature invariant?

Occam's razor is one of the most powerful tools, along with belief in both the elegance of the universe and its nature to make that elegance very tricky to uncover.

There are some very deep thinkers here.


Couldn't this be explained by suggesting that the speed of light was faster in the past?


I don't know if it's an accurate description, but I found this passage from Wikipedia intuitive:

> Because the universe is expanding, a pair of distant objects that are now distant from each other were closer to each other at earlier times. Because the speed of light is finite, the light reaching us from this pair of objects must have left them long ago when they were nearer to one another and spanned a larger angle on the sky.


> That’s just redshift.

No, it isn't. I explicitly described two other direct observations that are not redshift.

> when people say redshift evidence they mean the relationship between redshift and brightness of a standard candle.

No, they don't. Redshifts of distant objects are directly observed. We don't need a "standard candle" to measure them.

Observations of apparent brightness are used to estimate distances to objects by comparing apparent brightness to the absolute brightness of a "standard candle" that is the same kind of object. However, such distance estimates are model-dependent; before they can even be made, the model parameters first have to be estimated using the observed relationship between redshift, apparent brightness, and apparent angular size.
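
Concretely, the standard-candle relation being described is the distance modulus (standard astronomy convention, with m the apparent and M the absolute magnitude):

    m - M = 5 log10( d_L / 10 pc )

The model dependence enters through the luminosity distance d_L: in an expanding universe d_L = (1 + z) d_C(z), and d_C depends on the assumed expansion history.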

> And regardless of whether you call it redshift or the relationship between redshift and something else, it would be impacted by a change to redshift.

I have no idea what you mean here.


> Redshifts of distant objects are directly observed

How do you know the object is distant? (The notion being challenged is “more redshift -> more distance”.)

Please answer. Thanks for the patience.


Sir Fred Hoyle's whimper theory and Joe Haldeman's sawtooth theory.


There is a very old theory called the "Tired Light Hypothesis" which supposes that for some unknown reason light loses energy as it travels over cosmological distances. This would reproduce the observed redshifts, but it has issues predicting pretty much every other cosmological observation.

In particular it doesn't explain observed reductions in surface brightness (expansion has the effect of "defocusing" collimated light). And it doesn't explain observed time dilation effects.


I've always wanted to play a game based on defunct theories. I'm a fan of luminiferous aether myself. What are the impacts on a spacefaring civilization?

Sci-fi already grants alternative physics to enable FTL and other magic. What about hard sci-fi, but wrong-hard sci-fi?

Extra credit: go back to Zeno and all motion is paradoxical, what would you even do in the game?


If you haven’t read Greg Egan’s Orthogonal Trilogy, you might like it. https://en.m.wikipedia.org/wiki/The_Clockwork_Rocket


Luminiferous aether in what sense, what physics? Relativity doesn't exactly disprove it, it shows that everything distorts in a way that would make any aether unmeasurable. So if you just say aether:yes by itself I don't think anything happens.


It feels like all space combat games I've seen rely on Aristotle's theory that objects prefer to be at rest.


Try Frontier: Elite II if you want to try a "realistically modeled" view of space combat - things are pretty much strictly Newtonian.


Children of a Dead Earth is my suggestion https://store.steampowered.com/app/476530/Children_of_a_Dead...


Not space combat ... Flight Of Nova

> Flight of Nova is a simulator based on newtonian mechanics in which you control spacecraft in the Noren star system. You fly multiple types of spacecraft doing transport and search missions in an environment with realistic aerodynamics and orbital physics.

https://www.youtube.com/@flightofnova5746


Terminus seemed to have pretty decent physics too, but it's been a long time.


And in old Trek it seemed that if the engines ever lost power the ship was in danger of deorbiting and crashing within hours for some reason.


They're usually in geosynchronous orbit over the interesting area of the planet. That requires power to maintain if you're not just staying over the equator. Even over the equator it only works without power at a certain altitude.


My theory was always that ships hardly ever orbited unpowered; they usually went into much lower, powered "orbits" or even just hovered "in place" using the (immense) power of their engines.


But to fly so low as to slow the ship down if unpowered, they'd generate enormous heat from atmospheric friction. They could use shields but then the ship would glow, alerting the natives below and violating the Prime Directive. And they called it "standard orbit".

Orbiting doesn't require power. Even a satellite in a very low orbit only needs a slight boost once in a while to counter the drag.


There's no reason for the shields to cause friction though. They're not made of ordinary matter so an extended, very angular shield could probably cut through atmosphere seamlessly.


Maybe they need a very low orbit to keep the planet's surface in range of the transporters.


Nice try, but:

"According to The Original Series (TOS) writers' guide, the effective range of a transporter is 40,000 kilometers."

https://en.wikipedia.org/wiki/Transporter_(Star_Trek)


Yeah, but maybe that's the maximum range when conditions are best. Much of the time, if there's an equipment problem called for in that episode, there's some issue with the planet's atmosphere ("electromagnetic storm" or similar) which causes trouble with the transporters.


> This would reproduce the observed redshifts, but it has issues predicting pretty much every other cosmological observation.

Not to mention contradicting the laws of conservation of energy and momentum.


To be fair we do already know that energy is not globally conserved over cosmological timescales. (Energy conservation is a consequence of time invariance, but cosmological expansion breaks that symmetry.)

Fritz Zwicky proposed a tired-light mechanism based on Compton scattering off of the intergalactic medium. But these kinds of scattering mechanisms produce far too much blurring in the expected images of distant galaxies and galaxy clusters.


> To be fair we do already know that energy is not globally conserved over cosmological timescales.

No, what we know is that there is no invariant global concept of "energy" at all except in a special class of spacetimes (the ones with a timelike Killing vector field), to which the spacetime that describes our universe as a whole does not belong.

However, "tired light" (at least the versions of it that aren't ruled out the way the Zwicky model you describe was) violates local energy-momentum conservation, which is a valid conservation law in GR (the covariant divergence of the stress-energy tensor is zero).


Somehow I learnt about Riemannian manifolds and Killing vector fields in a geometry class that didn't mention physics at all.

I am always entertained when some nonsense I learned actually has something to do with the real world.


That is a good clarification.


I like your theory more than the current setup.

I have an interesting addition to it:

Time dilation could be that going very fast in space makes you relatively faster in one direction.

The thing is, atoms also have to travel; so the atoms (and matter in general) have a slightly longer distance to cover to achieve the same chemical reaction. That means interactions between atoms are slower, giving the illusion of slower time due to slower inter-atom reactions.


I think you just described relativity from the perspective of atoms. It's still the same old relativity though.


I don’t think this would match the observations that can be made from earth of things on earth?

Though the phrasing seems a bit ambiguous. Could you put some math behind those words?


We can create and observe Doppler shift by making things move towards/away from us. Thus it is proven that if something is moving away from us, it will produce a redshift. In the absence of evidence that something else is causing the redshift, the assumption should be that it is a result of things moving away from us.

As an obvious example, Doppler shift often needs to be accounted for to communicate with spacecraft.
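
A minimal sketch of the size of that effect for a deep-space link (Python; the 8.4 GHz carrier and 17 km/s line-of-sight speed are illustrative, roughly Voyager-like values, not mission specs):

    c = 299_792_458.0   # speed of light, m/s
    f0 = 8.4e9          # illustrative X-band downlink carrier, Hz
    v = 17_000.0        # illustrative recession speed, m/s

    # First-order Doppler shift for a receding source (valid for v << c):
    f_rx = f0 * (1 - v / c)
    print(f"carrier arrives {f0 - f_rx:,.0f} Hz low")   # ~476,000 Hz

Nearly half a megahertz is enormous next to a narrowband receiver's tracking bandwidth, which is why ground stations continuously predict and track the shift.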


> In the absence of evidence that something else is causing the redshift, the assumption should be that it is a result of things moving away from us.

But that is not what our best current model of the universe actually says. Our best current model of the universe says that the observed redshift of a distant object tells us by what factor the universe expanded between the time the light was emitted and now (when we see the light). Viewing it as a Doppler shift is an approximation that only works for small redshifts (much less than 1).
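
In symbols (the standard FRW result, with a the scale factor):

    1 + z = a(t_observed) / a(t_emitted)

So z = 1 means the universe's linear scale doubled while the light was in flight; the Doppler reading v ≈ cz agrees with this only when z << 1.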


X causes Y does not mean that Y implies X. It’s reasonable to suspect X given Y and an absence of other such causal relations, but it’s not necessarily reasonable to spend decades building layers and layers of models that assume X at the most basic level.


Everyone knows this.

But without looking at the direct rules of the system, this is the best you can do.

It’s not like you can just open the source code of the universe. You observe and make a theory that explains the observations, then the theory holds at least until a new observation contradicts the theory.

Is the current theory wrong? Maybe. But everything can be wrong and the world is always welcome to hear a new theory that completely explains all current observations.

But to just say a theory is wrong without providing a completely explained new one adds nothing.


>But to just say a theory is wrong without providing a completely explained new one adds nothing.

It certainly does. You don't have to know how something works to be able to know how it doesn't work. And there is value in knowing how something doesn't work, even if you don't know how it works.


Everybody knows it, but the principle is selectively applied.

For instance, our observations imply both general relativity and quantum field theory are necessary to model various aspects of the world. That’s an example of a Y. The only known X that’s ever been discovered that can encompass all aspects of that Y at all energy levels is string theory. Yet we are rightly careful not to assume string theory is correct or to enshrine it into the core body of scientific consensus. That does not mean we cannot or should not investigate it or even build theory on top of it, but it does mean we should refrain from assuming it must be true just because nobody can find anything better.


>but it does mean we should refrain from assuming it must be true just because nobody can find anything better.

The vast majority of science is not disproving a theory but adding nuance to it. Newtonian physics wasn't disproven by quantum physics, but quantum physics showed that Newtonian physics has limitations as a model. It's not unreasonable to assume our best model is true until there is a reasonable amount of data to the contrary.

As others have said, your point adds little to the conversation until you bring good data to the argument.


I don’t accept that I owe anyone any kind of additional data on any of this. Just like I don’t have to have a proven solution to the problem of finding a theory of everything to suggest that string theory may not be the truth of the universe while still acknowledging that it can be a worthwhile thread of inquiry and study.

Suggesting that we avoid possible dogma has intrinsic value. Let us step back and consider the fact that the combination of general relativity and the Standard Model already cannot explain our most basic cosmological observations. We cannot explain the stability of even our own galaxy based on current models. This situation clearly calls for some basic caution before we enshrine possible unproven explanations into humanity’s view of the universe. There’s a lot of evidence, both long established and newly growing, which shows that our models can’t consistently explain many of the basic things we see around us when we look up at the sky.


If you are trying to make a scientific argument, you should bring data since that's the cornerstone of science. You seem to be implying that people treat these models as gospel. I suspect most of those who are deep enough in the field understand they are models and respect the limitations. To that end, it's not dogma. Saying "this is the best model we currently have" is not the same as dogma. The article is specifically about using data to either support or reject a model so I don't know where you get the idea that anything is being "enshrined" and above reproach. Ironically, saying you don't need to bring data to support your point pushes your position closer to dogma.

Your original point says very little. Yes, science acknowledges that you can never 100% say "X causes Y". Science is about getting closer and closer to that 100% with better models and better data while acknowledging it's impossible to get there completely. That's why people are saying your point is a nothing-burger. It's stating the obvious based on a strawman position.


> If you are trying to make a scientific argument, you should bring data since that's the cornerstone of science.

He is making an epistemic argument, and epistemology is a part of proper science, though not of scientism, which is what you are bringing.

Binary is not the only form of logic available, but it is the most popular in discussions of the unknown.

> Your original point says very little.

You are literally mixing up subjective and objective.

> Yes, science acknowledges that you can never 100% say "X causes Y".

Careful though: science also does the opposite. Do you know why? Because science is composed of scientists, and scientists are Humans, and Humans are famously unable to distinguish between facts and their opinion of what is a fact. In fact, doing so is almost always inappropriate, and socially punished.

> Science is about...

It may intend to aspire to that, but what it is, is what it is. And what that is, comprehensively, is unknown, because it is unknowable. But we do know portions of what it is: there's the part you said, but there is also deceit, hyperbole, delusion, etc...again, because it is composed of Humans, and this is how Humans are. In my experience, all Humans oppose extreme correctness, I have never met a single one who does not.


I don't think anyone is claiming that science isn't biased because it's conducted by humans. Just like I don't think anyone is really claiming that the OP is incorrect in their statement. The comments I've read are merely pointing out "X causes Y does not mean that Y implies X" is a given in the context of a scientific discussion. It reads as if you and the OP are getting wrapped around the axle by treating science as an outcome rather than a process and, in doing so, fighting a claim that was never made, and one where the counterclaim is generally well understood in the scientific community. So well understood that it doesn't really need to be said.


You are practicing Rhetoric & Scientism, under the guise of practicing Science.

I do not expect you can be persuaded you are not 100% in the right so I will not try.


I've not made any claims that science is the only path to truth. But we are talking in the context of scientific domains of physics and cosmology, so using science as a benchmark is probably apt. If you want to discuss philosophy, that's all well and good but probably more appropriate for a different thread.

And I'll help you: I acknowledge there is plenty of room for error on my behalf. I also acknowledge there is probably plenty of value in things that can't be measured by science, but I'm not sure they belong in the topic of physics or cosmology. However, I don't think the wordsmithing is the way to illuminate error in the context of this discussion. It seems to fall into the realm of modern philosophy that is more about arguing words in the vein of trying to be smart, instead of good.


> And I'll help you: I acknowledge there is plenty of room for error on my behalf.

This is what I like to hear!

I propose that the ability to apply this recursively is an extremely rare, arguably ~superhuman skill.



The fact that your comment is downvoted speaks volumes to the close-mindedness of your critics.


Of all the phenomena in reality, this one is my favorite. Though, I am not a fan of it.


Strong reactions like yours to what should be a very mild and uncontroversial statement that you evidently don’t even disagree with are exactly why these things are increasingly viewed by many as having elements of dogma.

What I wrote needed to be said, despite evidently containing very little interesting content, precisely because of how severely it provokes certain people who claim not to even disagree with it. The degree of the provocation proves the value of the statement.


What makes you think I have a strong reaction?

>What I wrote needed to be said

There are apparently plenty of people who disagree (myself included) based on the comments. I think the reaction you're getting is because it's not a particularly fruitful comment: it adds nothing to the conversation while being veiled as a profound statement.

>The degree of the provocation proves the value of the statement.

Except the response isn't a response to the claim, it's in response to the absence of one. If a researcher publishes some incomprehensible word salad and lots of people write to the editor saying it's a worthless article, it doesn't somehow lend value to the original work. I think what you're experiencing is people being protective of HN in terms of having meaningful debate, and what you said isn't particularly meaningful despite the wordsmithing.


> I think what you're experiencing is people being protective of HN in terms of having meaningful debate and what you said isn't particularly meaningful despite the wordsmithing.

As of the writing of this comment, every single comment I wrote here was upvoted. So, no. I can’t control the future, but what you wrote here was absolutely false when you wrote it.

> There are apparently plenty of people who disagree (myself included) based on the comments.

And that’s great — that means it’s thought provoking enough to generate debate with complex views on both sides. In other words, yet again proving the value of the statement.


If internet points are how you’re measuring the validity of a point, there’s all kinds of things going wrong.

Again, people pushing back doesn’t mean you’re fostering fruitful debate any more than flat-earthers are generating debate “with complex views.”


You are the one who brought up reception here, not me. I merely responded. So if you don’t like it, don’t choose it yourself first. All that happened is you selected a metric that you thought was favorable to yourself, but you miscalculated and are now claiming the metric was never even valid.


>you selected a metric that you thought was favorable to yourself

Please point to where I first used forum upvotes as a useful metric.


You used reception here to argue against me, going so far as to claim that people are “protecting HN” from my comment. Given that, I pointed out another form of measuring said reception. Suddenly, you were up in arms about using reception and continue to protest it.


>What I wrote needed to be said, despite evidently containing very little interesting content, precisely because of how severely it provokes certain people who claim not to even disagree with it. The degree of the provocation proves the value of the statement.

The point of science isn't to punk the researchers. So no, what you wrote didn't need to be said.

As I (and others) have repeatedly pointed out, our models are wrong. We know they are wrong. What they are is less wrong than previous models. That doesn't make them "right" or "dogma." Rather it makes them the model that currently provides the best explanation for observed reality.

That neither requires nor suggests that research/investigation into modifications of our current models and/or into completely different models is unseemly or inappropriate.

What I (and presumably others, as they've expressed similar thoughts) require, if you want me to accept modified/brand new theories/models is, at a minimum, a logic-based argument as to why a modified/new model describes the universe more completely/accurately than current models. Assuming you can convince me that it's plausible, the next step is to present observational data that supports your logically argued hypothesis -- and that such data is described by your model/theory more completely/accurately than other models. I.e., that your theory/model is less wrong than our extant models which are also wrong, but less wrong than previous models/theories.

And if you can't present such data (e.g., with M-Theory[0]), then it's not science, it's just math, philosophy and/or metaphysics.

That's not to say math/philosophy/metaphysics aren't useful. They absolutely are. However, without data (or the means to collect such data), it's impossible to falsify[1] such hypotheses, and as such they aren't science.

[0] https://en.wikipedia.org/wiki/M-theory

[1] https://en.wikipedia.org/wiki/Falsifiability


>The degree of the provocation proves the value of the statement.

This is a great heuristic to practice often. I just told my wife she looks fat in her new dress, and the degree of her provocation proves the value of my statement.


I’d expect a bit more nuance on Hacker News. What I wrote is not provocative by being offensive or hurtful to anyone. It’s provocative by introducing an idea that intelligent, educated people cannot come to a consensus on in a purely intellectual way.


What you’re missing is there is not a debate about your point. The debate is about whether it adds anything to the discussion.

Saying “the sky is up” and having people respond by telling you that is a trivial point shouldn’t be conflated with fostering a meaningful dialog.


And yet you yourself have been passionately engaged at length in the very debate you say doesn’t exist. The evidence abounds here from your own comments that it is not purely about the triviality of my comment. In fact, your first objection was about whether you thought I owed you additional data before I’m allowed to make the statement that I made. Moreover, there are plenty of other comments engaging in discussion and debate that clearly go beyond a mere discussion of triviality but rather actually engage with the concept.


That's fair. I'm trying to point out that the degree to which your statements elicit an emotional response from someone, by any means, has no bearing on their validity.


And I acknowledged that by providing additional clarification.


>but it does mean we should refrain from assuming it must be true

A good scientist will tell you that we don't assume it is truth. Instead, it is the closest thing to truth we can get at this time, and we always seek better. But like a limit, we can only ever approach closer and never arrive at truth. As the other poster mentioned, we don't have a way to open up the source code of the universe.

Some scientists get a bit too attached to theories and can move them from "closest we currently have to truth" to "truth", but I think the bigger issue is that the non-scientists involved in transmitting science too often present it as truth instead of the best approximation we currently have. Often because fake confidence beats out measured modesty, and the one claiming to have truth is more convincing than the one saying we can't know truth and can only better approximate it.

A scientist will say science is true for the sake of simplifying the philosophy of science to those unfamiliar with it, but any scientist who thinks they have captured objective truth has lost the philosophical foundations of science.


> The only known X that’s ever been discovered that can encompass all aspects of that Y at all energy levels is string theory.

This is not correct; string theory is not the only candidate we have for a theory of quantum gravity.


I didn’t say it’s the only theory of quantum gravity. You can get a theory of quantum gravity by taking standard QFT — it just won’t work at extremely high energies. Other options like loop quantum gravity do not reproduce the rest of our models about everything else.

I said it’s the only theory of gravity and everything else at all energy levels, and that’s true. In fact, you’ll notice I did not even use the term “quantum gravity” to avoid the exact confusion you fell into anyway.


If we can suspect X given Y, but we shouldn't build models on top of the assumption of X, then what are we supposed to do with Y?

To me it seems like you're arguing that it was a bad idea to build on the assumption of Newton's theory of gravity because eventually it would be replaced by Einstein's theories of relativity. Which is obviously not sensible, since Einstein's theories were in part the result of trying to explain inaccurate predictions made by building on Newton's theory.


>but it’s not necessarily reasonable to spend decades building layers and layers of models that assume X at the most basic level.

If the only option for finding better evidence for or against X is building those models and watching them either keep matching observations or hit a contradiction that can lead to the downfall of X as the suspect, then it is reasonable, if you want to progress science any further.

Maybe there is another area that will give results faster, but much of the easy and fast science has already been done. And if someone finds a better option we missed, which does happen from time to time, adds some rigor to it, and verifies it with testing, they'll likely have themselves a Nobel prize.


You are not making an insightful point at all. Nothing in the world can guarantee you that "Y implies X"; after all, we could be living in a simulation. Does that mean we should shut down all scientific discussions by repeating what you stated? Of course not.


Point out where I said we should “shut down all scientific discussions.” You won’t be able to, and you will then realize how incredibly absurd what you just wrote is.


The parent is pointing out that the prior comment is literally meaningless, and self-defeating too, since the same logic would apply to your own existence, or simulated existence. Including any possible words you could ever write.

(As far as any other HN reader could ever perceive)


Don’t you think jumping to a complete existential crisis over such a simple comment is a little extreme? That alone is a red flag that maybe dogma has taken over. No, nothing I wrote suggests one must shut down all scientific discussion or inquiry. No, it does not mean you cannot investigate X and its implications. No, it does not mean you cannot build speculative models on top of X. Yes, it does mean it’s important to be careful with language and avoid enshrining what is only an assumption as an unassailable fact of reality for generations.


Who said it’s a ‘complete existential crisis’?

It’s just a fact of reality that no one else on HN can verify with certainty whether or not you are a genuine, bonafide, human being, especially online.

Everyone who is engaging with you is just assuming you are.


I said that. If you thought I was wrong in that assertion, then all you had to do was stop questioning your entire existence here. The fact that you continued to do so lends credence to my assertion rather than refuting it.


Who is ‘I’?

If you mean ‘nilkn’, the presumably human created HN account ostensibly still operated by that same sole human being, writing their own thoughts down and not on behalf of other entities, then thanks for providing an excellent demonstration.


I think everybody would be happy if you came up with a different explanation! What's happened so far is that we have a known mechanism, and no alternative explanations have worked yet.


This is wrong on several levels:

1. As the other commenter said, X causes Y does not mean that Y implies X. There can be another cause for the redshift.

And surprisingly, 2. There is at least one known mechanism that causes Doppler WITHOUT moving: when the observer is in a gravity well (e.g., Earth) and observing a stationary object outside the gravity well (e.g., some fixed point in outer space).


As I've mentioned in another post, that leads to questioning one of the most well tested theories in physics, so you need extraordinary evidence to prove it over something as elementary as doppler shift. Like, if it's Earth's gravity well causing us to see things differently, then things that are further from Earth should observe things differently.


There is already extraordinary evidence that something in physics behaves differently at larger scales: the behavior of galaxies (spin, gravitational pull) doesn't match their mass. Any mass. There is no single mass value that predicts their entire behavior correctly. Hence the competing theories of dark matter, dark energy, etc., which are so far untestable.

It wouldn't surprise me if we someday discover another Doppler-like mechanism that operates at those same large scales.


> There is at least one known mechanism that causes Doppler WITHOUT moving: when the observer is in a gravity well

This is gravitational redshift, not Doppler shift. Doppler shift specifically means redshift due to an object moving away--but to really be correct that definition has to be limited to flat spacetime, or to a small enough region in a curved spacetime that the curvature can be ignored.
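
For scale, the shift a faraway observer sees in light climbing out from radius r in a Schwarzschild field is (standard GR result):

    1 + z = 1 / sqrt( 1 - 2GM / (r c^2) )

For light leaving Earth's surface that works out to z ≈ 7 × 10^-10: real enough that GPS has to correct for it, but many orders of magnitude too small to stand in for cosmological redshifts of order 1.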


>What if the universe doesn't expand at all? What if we're completely wrong and redshift is caused by something else entirely, like some yet-undiscovered phenomenon that occurs to spacetime or electromagnetic waves? How can we be so sure it's space that's expanding, not time?

I suppose that's possible. Does that hypothesis adequately explain our observations?

Is the model we currently have completely "correct"? Almost certainly not. But it appears to be less wrong[0] than earlier models.

If you (or anyone) can show how the above describes our observations better and more completely than our current models, then it's likely "less wrong."

But you offer no evidence or even a logical argument to support your hypothesis. As such, it's not much more than idle speculation and essentially equivalent, from a scientific standpoint, to suggesting the universe is a raisin dropped into a sugar syrup solution[1] and absorbing the liquid -- hence the expansion of the universe.

[0] https://en.wikipedia.org/wiki/The_Relativity_of_Wrong

[1] https://en.wikipedia.org/wiki/Compote


Easiest method is to simply take your idea at face value.

In our first version, imagine all of the stars at rest. Now, we emphatically know this not to be true locally due to all kinds of measurements, but let's go with it. What happens? The moment you let these stars "go," they begin to draw toward one another due to gravity. You would have gravitational collapse. We do not see that.

Next iteration: we throw the stars, and the galaxies, and the galactic clusters away from one another. No expansion required. Here we have two options. In the first, we did not throw with enough speed; they expand out ... slow to a halt ... and then gravitational collapse again. Again, unobserved. Option two, you have thrown at escape velocity, and what you would see is an asymptotically decreasing speed, never quite hitting zero, since gravity works "forever away." Also unobserved.
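
A toy Newtonian version of those two regimes, if you want to watch them fall out numerically (a Python sketch; the Earth-scale numbers are arbitrary, picked only so the integration runs quickly):

    import math

    G = 6.674e-11

    def evolve(M, r0, v0, dt=1.0, steps=200_000):
        """Throw a test shell outward at v0 against the gravity of mass M."""
        r, v = r0, v0
        for _ in range(steps):
            v -= G * M / r**2 * dt   # inward gravitational acceleration
            r += v * dt
            if r <= 0:
                return "recollapsed"
        return f"still expanding, v -> {v:.0f} m/s"

    M, r0 = 5.97e24, 7.0e6              # Earth-ish mass and radius
    v_esc = math.sqrt(2 * G * M / r0)   # ~10.7 km/s
    print(evolve(M, r0, 0.5 * v_esc))   # below escape speed: collapses
    print(evolve(M, r0, 1.5 * v_esc))   # above: coasts at ~constant speed

Neither outcome, recollapse or ever-slowing coasting, matches the accelerating expansion that is actually observed.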

What you're suggesting is basically a static universe, a relative of the old Steady State concept. This is a very old idea. So old it was given a kind of courtesy term in general relativity (the cosmological constant), which would eventually be set to zero.

Here is a rule for any armchair astrophysicists: whatever you think of, that was most likely an idea at one point and was eventually ruled out.


> Here is a rule for any armchair astrophysicists: whatever you think of, that was most likely an idea at one point and was eventually ruled out.

The relevant XKCD is Astrophysics - https://xkcd.com/1758/ ( https://www.explainxkcd.com/wiki/index.php/1758:_Astrophysic... )

The transcript from explain:

    [A sign on two posts, in the grass in front of a building with windows and double doors, a window on each door, and bars facing outwards. There is a cement walk leading to the doors. On the sign is the text:]

    Department of Astrophysics
    Motto:
    Yes, everybody has already had the idea, "Maybe there's no dark matter—Gravity just works differently on large scales!" It sounds good but doesn't really fit the data.


How would you explain the CMB? We can literally see that the universe used to be much denser.


And if the universe was much denser, doesn't that imply that all that matter affected its surroundings gravitationally? And as we know, time runs slower near large masses. And when something falls into a black hole, according to our very own theories, it would also redshift because of the black hole's gravitational pull, without anything having to expand.


No, it implies it expanded in the meantime. We can see that it was a hot plasma up until about 380,000 years after the big bang. This isn't some redshifted illusion; the matter was literally packed so densely, and was thus so hot, that it was in a different state of matter.

Don't get hung up on redshifts for evidence of the big bang. The CMB is the real smoking gun. Read up on it, it's entirely worth it. I can recommend Simon Singh's book "Big Bang".

There is also a plethora of other probes that in concordance all point to the same thing: that the universe is almost 14 billion years old and expanded from a very hot, dense state. It's settled science, really.
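
The scaling behind "we can literally see it was denser" is simple (a standard result; T_0 is today's measured CMB temperature):

    T(z) = T_0 * (1 + z)  ≈  2.725 K * 1101  ≈  3000 K  at z ≈ 1100

Three thousand kelvin is hot enough to keep hydrogen ionized, which is why the universe was an opaque plasma up to that point and transparent afterward.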


Speaking of the big bang, how did time work back then? :)

It's cool to say "in the first milliseconds of the existence of the universe X and Y happened", but how did time supposedly run as usual while everything else was on the fringe of our understanding of reality? There don't seem to be any answers to this (or I haven't looked thoroughly enough) but it feels like a very important question that's always overlooked by everyone talking about this.


Yeah, it is overlooked because the real answers are 'hidden' behind a lot of graduate level math. And most people don't really want to learn a bookcase worth of math first to talk about it, but they talk all the same.

Like, if you'd like to really dive into it then you're going to need to go through a lot of textbooks first.

If you are moderately familiar with multi-variable calc, then here is a good book to get started down the GR hole: https://www.amazon.com/Mathematical-Methods-Physicists-Compr...

Suffice to say, yes, there have been a lot of grad students who have had the exact same questions and issues that you currently have. Further, once they have reached the end of the mathematical education required to understand how spacetime works in the first few minutes of the universe, they focus those questions on the issues we have with inflation. Those issues mostly come from our lack of understanding of how GR and QM interact, so the first 10^-43 seconds or so. At least, that is my understanding. Physicists are welcome to tell me how dumb I am right now!


(former) Physicist with focus on cosmology here. Your reply is one of the sanest in this thread.


There are a lot of attempts at investigating those questions. Here are a couple of pages I'd recommend perusing:

https://en.wikipedia.org/wiki/Cosmic_time

https://en.wikipedia.org/wiki/Chronology_of_the_universe

If you're into podcasts at all, I'd strongly recommend Crash Course Pods: The Universe. The first (full) episode goes into detail on that first fraction of a second in our universe, and it's pretty enlightening without being too thick on the math.


> doesn't that imply that all that matter affected its surroundings gravitationally?

It did; it caused the expansion to decelerate. That was true until a few billion years ago, when the matter density became smaller than the dark energy density and dark energy started to dominate the dynamics.


It's really trippy to think about how Hawking radiation becomes 'real' once it's sufficiently 'far' away from a 'strong' gravitational well, and how this can be thought of as a Doppler shift giving real physical presence (in that we can interact with it and be affected by it) to what was once a 'virtual' particle.

I think ScienceClic does a good job visualizing this, but at the end of the day I can't see a way to distinguish event horizons, regardless of whether they belong to a black hole or to the distant past/big bang.

https://youtu.be/isezfMo8kWQ?si=9wGliV-Qo1bXCTRy

Specifically look at the relativity of the vacuum section, which builds to a great insight at around 5:45


For example the entire atomic composition of the stars in the observable universe depends exquisitely on the expansion parameters at the big bang. The ratios can be traced back through the expansion to the quark-gluon soup stage. Changing the expansion rates changes the delicate balance between the particles that form at that stage, and when the various particle fractions "freeze out" during expansion when the temperature cools (btw we're talking about seconds from the big bang here :) which can be subsequently observed in stars all over the universe by spectroscopy. It's pretty beautiful.

There are so many intricate dependencies between these pathways that it's pretty unrealistic to postulate anything else than a big bang + cooldown process IMHO.
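
One concrete rung of that chain, sketched at textbook level (Δm ≈ 1.3 MeV is the neutron-proton mass difference, T_freeze ≈ 0.7-0.8 MeV the weak freeze-out temperature): the neutron-to-proton ratio freezes out when weak interactions can no longer keep pace with the expansion, and nearly all surviving neutrons end up bound in helium-4:

    n/p ≈ exp( -Δm / T_freeze ) ≈ 1/6,  decaying to ≈ 1/7 by the time nuclei form

    Y_p ≈ 2 (n/p) / (1 + n/p) ≈ 0.25

That predicted ~25% helium-4 mass fraction is what spectroscopy finds essentially everywhere, and both the freeze-out temperature and the decay window depend directly on how fast the universe was expanding in those first seconds and minutes.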


This is the equivalent of finding your keys an inch off from where you remember setting them down and concluding that someone broke into your home, stole your keys, took your car for a joyride, and broke back in to place them there.


Expanding universe and Big Bang theory go hand-in-hand. There are multiple independent observations besides the redshift that make it nearly certain there had to be a Big Bang event to explain what we see. The universe is too hot, chaotic, and clumpy for there not to have been a massive explosion to kick it all into motion. Since there is good confidence the Big Bang happened, transitioning from that event to a steady-state, non-expanding universe would require some sort of mechanism to slow and then freeze the expansion. I am not aware of any support for that model.


The name “big bang” was a pejorative epithet coined by Fred Hoyle, who believed in a steady-state universe that was expanding (as Hubble had argued convincingly) but had some hypothetical mechanism for creating galaxies, such that the universe could have an unbounded past and future.

So, historically, they did not go hand-in-hand.

https://en.wikipedia.org/wiki/Fred_Hoyle#Rejection_of_the_Bi...


True, but with the BBT-affirming discovery of the cosmic microwave background in 1965 they've been hand-in-hand ever since.


I remember reading, a long, long time ago, a paper where the authors suggested that if the universe were slightly hyperbolic, it would also cause a redshift effect. I can't seem to find it (and as far as I remember it was purely theoretical), but at the time I thought it was a neat idea.

Not that I have the background to know what else they might not have accounted for to reach this conclusion.


While I don't necessarily think a lot of the alternative ideas proposed are correct, I always love seeing alternative concepts being considered. Very cool to see ideas that could solve standing issues, even if they have issues of their own.


My guess is that scientists are considering this, but until now no better theory has been presented.

Part of this is the distinction between what is happening and what the model says is happening. Does any physicist believe they have the perfect model? Or is it that they use the model that best fits the observations and are open to any other model, as long as it is either simpler or produces fewer contradictions than the current model (and is just as testable)?

I think too often we hear reports of "science says X is what happens" when the reality is more like "science says that the current model based on X happening is what best describes current data and observations".


The _conclusion_ that the Universe is expanding is based on the long-accepted premise that the Universe is _flat_. And this premise cannot be proven or disproven unless we travel great distances to actually _observe_ whether the Universe does, in fact, look the same from any point.

The Copernican principle is, indeed, attractive to the modern mind because of its neutrality. But it's not neutral. It's just as loaded as any other principle, philosophical, religious, or merely personal, no matter how crazy that may sound today.


The observation of the Hubble constant requires us to measure the distance to an object in space. This is very hard to do at the extreme distances required (https://en.wikipedia.org/wiki/Parallax). In the end, the variation in the Hubble constant might only be due to our limited accuracy of measurement.
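
For a sense of how hard: the geometric relation is simply (by definition of the parsec)

    d [parsecs] = 1 / p [arcseconds]

so a star at 100 pc shifts by just 10 milliarcseconds across the baseline of Earth's orbit. Even Gaia's precision of tens of microarcseconds (for favorable targets) runs out well inside our own galaxy, which is why every rung beyond parallax leans on calibrated standard candles rather than pure geometry.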


The universe is a 4-dimensional sphere, so everything could be moving away without 3-dimensional space increasing. Eventually, in trillions or quadrillions of years, everything would start to blueshift as things move towards us on the other side of the sphere.


So how do I travel backwards in the 4th dimension? Or conversely, where's the source of the force that propels us through the 4th dimension?

I'll leave this comment here since I'm about to get rate limited: I read/heard a great idea recently, what if gravity is an emergent quantity like heat? Maybe dense fermions just radiate gravitons just like a hot mass radiates photons?


> I read/heard a great idea recently, what if gravity is an emergent quantity like heat? Maybe dense fermions just radiate gravitons just like a hot mass radiates photons?

Gravity is the result of spacetime curvature, if I remember correctly.


Yes, but it's irreconcilable with quantum mechanics. So what I described was one of many attempts to reconcile them.


In that case, wouldn't it have to be more complex than just a particle which creates gravity? Something which is compatible with the finding of spacetime curvature?


Whispers: Halton Arp


I got an idea! Anti-mass!

Anti-mass is mass where Gravity goes Out instead of In.

The anti-mass and mass accelerated away from each other at the start, and the redshift is their repulsion away.


I suspect that in a few decades, when the smoke clears and the very latest sub-infrared, stadium-sized space telescope finds fully formed galaxies several billion years "before" the alleged moment of creation, the astronomical community will finally start to question the logic of prevailing cosmological theory, from the ground up.

The big bang was first postulated by an agent of the Vatican, and scientists raised in any religious context tend to generate experiments that confirm their beliefs.


Naive question: why should the expansion rate need to be uniform or constant everywhere?

I'm likely misinterpreting the article, but it seems to frame things in a way that first assumes expansion should be constant and it's a question of what the right constant value is between the measured/theoretical discrepancies.

(*yeesh, editing all those spelling errors from typing on my phone)


The controversy is that we get 2 different numbers depending on which method (cosmic microwave background vs cosmic distance ladder) we use to calculate the present rate of expansion. These numbers used to have their error bars overlapping, so we assumed they would eventually converge to the true value. But as we get more data the opposite is happening: the numbers are diverging and their error bars are shrinking such that they no longer overlap.

This tells us that either our model of the universe is wrong (therefore the cosmic microwave background method is giving us an incorrect answer) or that something is wrong with how we're calculating the distances along the cosmic distance ladder. The latter was originally the assumption that should be proven true with more and better data from newer telescopes. This is now turning out not to be the case: our cosmic distance ladder calculations seem to have been very good, so it now seems more likely that our model of the universe is wrong.
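
To put rough numbers on "no longer overlap" (Python; the inputs are the commonly quoted Planck and SH0ES central values and uncertainties, rounded):

    import math

    h0_early, err_early = 67.4, 0.5    # CMB + LCDM inference, km/s/Mpc
    h0_late,  err_late  = 73.0, 1.0    # Cepheid/supernova distance ladder

    # Discrepancy in units of the combined Gaussian uncertainty:
    sigma = abs(h0_late - h0_early) / math.hypot(err_early, err_late)
    print(f"{sigma:.1f} sigma tension")   # ~5 sigma

A 5-sigma discrepancy has odds of roughly one in a few million of being a statistical fluke, assuming both error budgets are honest, which is exactly why attention has shifted to systematics and to the model itself.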


> our cosmic distance ladder calculations seem to have been very good

Not according to at least one research group described in the article: the Freedman group, which is only getting the higher answer using Cepheids, but gets a lower answer, one consistent with the CMBR calculations, by two other methods. Which raises the possibility that it's the Cepheid part of the cosmic distance ladder that's the problem.


Right, but that's been Freedman's project all along. Riess is continuing to publish results consistent with a higher H0 including both Cepheids and TRGB stars.

Perhaps we need some outside groups to take a look at this to see if either group is making systematic errors in their analyses.


> Perhaps we need some outside groups to take a look at this to see if either group is making systematic errors in their analyses.

I agree this would be a good thing.


Here's a 25-video critique of the CMB projects by Pierre-Marie Robitaille:

https://youtube.com/playlist?list=PLnU8XK0C8oTDaiwe8Us_YNl4K...

He also has an alternative theory of stellar physics that might call into question the interpretation of Cepheids and SNe, though I don't think he's done a major analysis there, at least as far as I know.


Thank you for this explanation. I would like to emphasize that our model being wrong, rather than our numbers, sounds like progress.

It also sounds like progress that we seem to have two “scales” to play with in trying to develop a consistently measurable distance.


I remember reading that the Local Group, the Laniakea Supercluster, and the Great Attractor [1] are new developments that helped us refine our understanding of H0 but didn't fundamentally remove the controversy.

It's exciting to see how the question drives many new discoveries.

[1] https://en.wikipedia.org/wiki/Great_Attractor

I'll try to paraphrase what it meant: measuring H0 comes down to measuring the relative velocities of the galaxies around us. The Great Attractor was a relatively recent discovery that the "closer" galaxies, the ones we can use in the distance ladder, all have a common component in their velocities, one we've only recently begun to understand better.


Interesting, though, that they're getting different numbers using different kinds of stars, which does suggest problems with the distance ladder.


Distance ladder seems much more error prone than the CMB.


It's not just the CMB, it's the CMB + LCDM. You know, that theory that posits a form of matter we have consistently failed to find, that has technically been falsified a number of times but was then adjusted to work around those problems, and yet problems still remain. You might have a "slight" bit of unaccounted-for error sneaking in there.

I personally wouldn't trust the CMB calculation at all until the dark matter issue is resolved.


Isn’t there loads of evidence for dark matter including direct evidence like gravitational lensing? We’re just not sure what it is, though primordial black holes look like a strong candidate.


> Isn’t there loads of evidence for dark matter including direct evidence like gravitational lensing?

There is an equivalent boatload of evidence for modified gravity that can't be explained by particle dark matter [1], but one set of observations is hand-waved away as a minor inconvenience while the other set is taken as definitive proof.

If gravity doesn't work the way we expect, then lensing in galaxy collisions isn't necessarily telling us what we think it is.

But setting that aside, if we take even the most naive modified gravity, MOND, then lensing does require a form of particle dark matter, like sterile neutrinos, but the total amount of dark matter required to explain the evidence is only double the amount of visible matter, rather than almost 10x the amount of visible matter as in LCDM. This has significant implications for calculating the Hubble constant from the CMB [2].

[1] https://tritonstation.com/2024/06/18/rotation-curves-still-f...

[2] TeVeS is wrong, but it shows that modified gravity significantly affects the H0 calculation: https://arxiv.org/abs/1204.6359
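
For context, the deep-MOND regime referenced above is Milgrom's relation (a_N is the Newtonian acceleration and a_0 ≈ 1.2 × 10^-10 m/s² the fitted scale):

    a ≈ sqrt( a_N * a_0 )    for a_N << a_0

    circular orbit:  v²/r = sqrt( (GM/r²) * a_0 )   =>   v⁴ = G M a_0

The radius cancels, giving a flat rotation curve whose speed depends only on the enclosed baryonic mass (the baryonic Tully-Fisher relation).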


All of the cosmologists expected that too!


there is no "seems" with provable error bars


ehh ... everything we measure relies on our understanding of the universe in some way. It's perfectly reasonable that our distance measurements could rest on a shakier foundation of assumptions than our understanding of the CMB. I don't know enough to say one way or the other, but GP's comment is not unreasonable on its face.

Talking about "provable" in cosmology, though, and certainly in this case, does seem unreasonable -- especially with error bars, which by definition have a small chance of not including the real value. Normally I'd say that we can refine our assumptions to be extremely good, take more measurements, and keep narrowing that error bar until we hit a level of certainty that anyone reasonable would call "proven", but the entire point of TFA is that this isn't happening in this case. We seem to be on our way to "proving" two inconsistent things.


> so it now seems more likely that our model of the universe is wrong.

Whenever a scientist says that it's not possible that the model is wrong, I just roll my eyes. Of course models can be wrong - and isn't that exciting? Good on them for making sure that there are no errors in the measurements - that's incredibly valuable and absolutely necessary - but I'm really excited to see creative models being thought up that are drastically different. My personal hell is the universe being consistent and boring.


Scientists have to cope with "you just said your model is wrong therefore I am right about everything ever". It makes them sometimes shortcut their way out of conversations that they know will not lead anywhere useful.


That seems like an exaggeration.


which part?


> "therefore I am right about everything ever"

I'm sure scientists have to deal with people jumping on them about their model being wrong, but this part is clearly exaggeration.


That is comic exaggeration, but you've almost certainly heard people insist that the evidence for their position is that some scientist was wrong at some point. It's particularly comic from creationists.


They don't say it with those exact words, but not only would they claim to be right based on when a scientist was once wrong, they are very keen to claim to be right based on what they wrongly think a scientist was once wrong on.

I no longer engage with these people for sport, but about 10-15 years ago I had two scientific creationists that I kept around as virtual pets. One was Hindu and one was from the religion of peace. Neither was very stable, and both were prone to getting a bit emotional about it.

Each time, they would pick one piece of settled science, but they didn't have the mathematical machinery to understand the model itself, so they would rely on the layman cartoon versions and misunderstand something crucial there. For example, the existence of error bars on the concordance plot of radioactive dating: from this they could throw doubt on the whole chronology of the formation of the solar system and the evolution of man. For one of them, a technological civilisation from Hindu mythology existed millions of years ago. For the other, their very peaceful god created everything personally in one go, and evolution by natural selection didn't happen.


> Whenever a scientist says that it's not possible that the model is wrong, then I just roll my eyes.

But no one said that. In fact, scientists are known to say things like: all models are wrong, but some are useful.


From the article:

> That the three methods disagree “is not telling us about fundamental physics,” Freedman said. “That’s telling us there’s some systematic [error] in one or more of the distance methods.”

Freedman is saying that the model is not wrong.


What she means is that the bar for proving that this is an error in physics is much higher than that of proving that it's a measurement error. Like, if you're measuring acceleration due to gravity, and your sensor/calculation gives you 5m/s^2 rather than the real ~9.81m/s^2 that everything else measures, you can't immediately resort to arguing that physics is wrong, you have to rule out that your sensor/calculation is wrong first.

To argue that the physics is wrong, you are likely to be arguing that very well tested theories like general relativity, special relativity or electromagnetism are off in some way. That's a much higher bar than just the measurements of either the ladder or CMB being wrong in some way.


To add to this, it's equivalent to the difference between trying to justify that one experiment (or one class of experiments) is wrong, versus several dozen classes of thousands of experiments all being subtly wrong so that this one experiment can be right.


And in this case, your sensor isn't giving a wildly wrong answer like 5 m/s^2, but rather something close to the correct answer, like 9.15 m/s^2.

It's easy to think up ways that your sensor could be 5-10% off. It's very difficult to come up with an entirely new theory of gravity that explains everything we observe about the world, but also makes gravity a few percent weaker in this one case.


She's saying that a different model -- one of the three disagreeing methods for distance ladder measurements -- must be wrong, because they disagree with each other. But if one or more of those models are wrong, then there's not much evidence that the LambdaCDM model is wrong.

Conversely, the hypothesis that LambdaCDM is wrong does nothing to explain why the distance ladder methods disagree.

She clearly isn't saying that any model is infallible, she's just saying that clear flaws with one set of models throw into question some specific accusations that a different model is wrong.

You actually need to pay attention to the details; the physicists certainly are. Glib contrarianism isn't very useful here.


That researcher has a personal conviction that the model isn't wrong. That is spurring them to spend the years and decades necessary to assemble the experimental evidence to test the model. Either it'll turn out to be wrong or right in the end, but the conviction is what gives that individual researcher the impetus to keep scratching at the problem for a good chunk of their life.

You shouldn't really roll your eyes at that. They're ultimately doing all the work which will prove it right or wrong. They might wind up not liking the answer they get, but the conviction is necessary to get them there because human emotions are weird.


She's following a hunch, it's what scientists do. In this case the hunch is that the model is not wrong. That's a far cry from saying it's impossible to be wrong.


Where does she say she's following a hunch? She was very certain when she said that.


When she's certain, you'll know, because she'll publish it.


Who said it was impossible? In fact, someone just said it was quite likely.


> so it now seems more likely that our model of the universe is wrong.

Anyone with an ounce of sense should have concluded that LCDM was wrong long ago. Hopefully this will finally cause physicists to actually try something different.


You're not a cosmologist, are you? I'm not anymore either, but when I was in grad school, everyone agreed that LCDM is probably wrong. Most researchers pretty much wanted it to be wrong. All they did was try something different (quintessence, bigravity, modified gravity, f(R) theories, Horndeski gravity, phantom dark energy, chameleon cosmologies, coupled dark matter, etc. etc), but nothing fit the data as well as LCDM. When new data came in, and it pointed yet again at LCDM, it was considered a disappointment.

Now the Euclid satellite has been launched, which I remember being the big hope that it will give us data that will finally enable us to tell which LCDM alternative could replace LCDM as the standard model of cosmology. You won't even get funding for such a mission unless you make a case for how it can improve our understanding of the universe. "We hope it will confirm our currently accepted view" will not get you one cent.

So I'm not sure where you get the confidence that nothing different is being tried and that professional scientists wouldn't have an ounce of sense. I find it a bit disrespectful actually.

I invite you to read the introduction (and introduction of part 1) of this review paper of the science behind the Euclid mission: https://arxiv.org/abs/1606.00180

It's understandable to laymen, co-authored by reputable, leading scientists in the field (disclaimer: I'm on the author list), and I hope that after reading it you'll revise your opinion.

Here is a short excerpt:

> The simplest explanation, Einstein’s famous cosmological constant, is still currently acceptable from the observational point of view, but is not the only one, nor necessarily the most satisfying, as we will argue.


A minor addendum:

> So I'm not sure where you get the confidence that nothing different is being tried and that professional scientists wouldn't have an ounce of sense. I find it a bit disrespectful actually.

Try talking to a MOND proponent and you'll get a very different picture of how open scientists are to alternate approaches and questioning LCDM.


> but nothing fit the data as well as LCDM.

I think you mean, "nothing could be made to fit the data as well as LCDM". See below.

> When new data came in, and it pointed yet again at LCDM, it was considered a disappointment.

This is highly revisionist, and exemplifies the kind of cherry picking that has kept LCDM alive when it has outright failed numerous predictions and had to be adjusted after the fact to fit the data:

From Galactic Bars to the Hubble Tension: Weighing Up the Astrophysical Evidence for Milgromian Gravity, https://www.mdpi.com/2073-8994/14/7/1331

LCDM is a complete mixed bag when it comes to predictions, where MOND has a very slight edge. It's absolutely clear that MOND is not correct or complete either, but it is undeniable that it has had greater predictive success than anyone anticipated and with far fewer free parameters than LCDM, mostly in areas where LCDM is weak. If LCDM can't explain why MOND has been so successful, which it currently cannot, then it is incomplete at best.

Basically, the only good, almost definitive, evidence for particle dark matter is gravitational lensing, but if we're considering modified gravity anyway, then lensing isn't necessarily telling us what we think.

On the flip side, no possible cold dark matter halo is compatible with recent observations that rotation curves are flat past a million light years [1]. I can't wait to see what epicycles they add to LCDM to save it this time.

[1] https://tritonstation.com/2024/06/18/rotation-curves-still-f...
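
To make the "fewer free parameters" point concrete: in the deep-MOND regime the rotation curve goes flat at a speed set entirely by the baryonic mass, v^4 = G * M * a0 (the baryonic Tully-Fisher relation). A rough sketch in Python, where the Milky Way mass figure is just an illustrative assumption, not a fitted value:

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    A0 = 1.2e-10       # Milgrom's acceleration scale, m/s^2
    M_SUN = 1.989e30   # solar mass, kg

    def mond_flat_velocity(baryonic_mass_kg):
        """Asymptotic flat rotation speed in the deep-MOND regime:
        v^4 = G * M * a0 (the baryonic Tully-Fisher relation)."""
        return (G * baryonic_mass_kg * A0) ** 0.25

    # Illustrative baryonic mass for a Milky Way-like galaxy
    v = mond_flat_velocity(6e10 * M_SUN)
    print(f"flat rotation speed ~ {v / 1e3:.0f} km/s")  # ~176 km/s

One number (a0) plus the visible mass fixes the entire asymptotic curve; that economy is exactly what makes MOND's successes hard for LCDM to ignore.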


So a lot of astronomy is based on the principle that we are not in a special pocket of the universe.

See https://en.wikipedia.org/wiki/Cosmological_principle

Basically, if this didn't hold true, a lot of astronomy would fall over, and even some physics.


Yes, the cosmological principle is probably the most fundamental assumption in astronomy.

Most people don't realize that science (and indeed everything in life) has to start from some axioms/assumptions, just like math. I first realized this when reading the Relativity book written by Einstein himself, in which he challenges the assumptions of classical physics.

As time goes on, some of those assumptions may be shown to be unnecessary or even wrong. Some assumptions must always remain, though, because without them we can't talk about science, or anything, really.


Though it is worth noting that if this were the case, you would expect to see boundaries: if the laws of physics change with spatial position, the discontinuity should produce an effect of some sort where matter and light transition between regions.


I suppose there could be a gradual change over distance, i.e. the first derivative of this change never varies.


Seems there are two ideas at odds. One is that the universe is infinite, in which case this is all localized and has no bearing on the universe outside our small observable region. The other is that we are seeing enough of a bounded universe that our observations cover a significant enough chunk to theorize about it.



Indeed. Some researchers have proposed quintessence, a time-varying form of dark energy [0].

> A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable.

[0] https://en.wikipedia.org/wiki/Quintessence_(physics)


> why should the expansion rate need to be uniform or constant everywhere?

It doesn't.

"The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space" [1]. That's the cosmological constant.

Dark energy is a thing because we don't assume that to be the case. Irrespective of your dark energy model, however, there will be a predicted global average.

[1] https://en.wikipedia.org/wiki/Dark_energy


There has been some interesting recent work that may get rid of the need for dark energy.

Briefly, recent large scale maps of the universe suggest that the universe might not be as uniform as we thought it was. In particular we appear to be in a large region (something like a couple billion light years across) of low density.

Dark energy is needed to make the solution to Einstein's field equations for the whole universe match observations. However that solution was derived based on a universe with matter distributed uniformly. At the time it was first derived that appeared to be the case--we thought the Milky Way was the whole universe.

When we learned that the Milky Way was just one small galaxy in a vastly larger universe than we had thought, with bazillions of other galaxies, those galaxies appeared to be distributed uniformly enough that the solution to the field equations still worked.

Later we found that there is some large scale structure in the distribution of galaxies, like superclusters, but those seemed uniform enough throughout the universe that things still worked.

If that couple-of-billion-light-year low-density region turns out to exist (large-scale mapping of the universe is hard enough that it may just be observational error), the universe may not actually be uniform enough for the field-equation solution based on uniform matter distribution to work.

Some researchers worked out solutions to the field equations for a universe with low-density bubbles big enough to invalidate the uniform-universe solution, and found that such a universe would exhibit an apparent extra expansion without the need to invoke any kind of dark energy.

There was a recent PBS Space Time episode that covered this: "Can The Crisis in Cosmology be SOLVED With Cosmic Voids" [1]. The above is my summary of what I remember from that. See the episode for a better explanation and references to the research.

[1] https://www.youtube.com/watch?v=WWqmccgf78w


It's not constant (the early universe inflated quite quickly), and it doesn't need to be uniform, but it sure does appear to be. We measure it via redshift, pulsar timing arrays, and the temperature fluctuations of the CMB, and it looks pretty much the same in all directions.


Spacetime is apparently extremely rigid as it supports the transmission of gravitational waves originating billions of light-years away, as detected by the LIGO experiments. This suggests smooth and gradual uniform expansion, at least spatially. Temporal variation (speeding up and slowing down uniformly at all points) might be possible but seems hard to explain.


The issue here is that the measured value differs depending on the type of star used to measure it. It's not a discrepancy between locations in space. Or at least that's how I read the article.


That's the first thing that occurred to me too. It could also not be constant even at the same place, i.e. could it not be speeding up and slowing down as the universe expands?


Certainly when I look at convection currents in the ocean or the atmosphere, I see plenty of variation. Shoot, the earth's atmosphere constantly produces moving blobs of relatively high and low pressure.


Seems perfectly possible. General relativity, after all, was precisely the discovery that the curvature of space (well, spacetime) is not uniform.


"researchers started using Cepheids to calibrate the distances to bright supernovas, enabling more accurate measurements of H0."

It seems like if there were some error in the luminosity measurement for Cepheids, it would propagate to the measurements with supernovas...

I would expect that stacking measurement techniques (as is common in cosmology, where distances are vast and certainty is rare) would also stack error, like summing the variances of Gaussians...
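
That intuition is easy to put numbers on. A minimal sketch, with entirely made-up fractional uncertainties per rung (not any survey's real error budget): independent multiplicative calibration errors combine in quadrature, so the top of the ladder can never be more certain than the rungs below it.

    import math

    # Hypothetical 1-sigma fractional uncertainties per rung -- purely
    # illustrative numbers, not a real error budget.
    rungs = {
        "parallax to nearby Cepheids": 0.010,
        "Cepheid period-luminosity calibration": 0.020,
        "Cepheid -> Type Ia supernova anchor": 0.020,
        "SN Ia standardization / Hubble flow": 0.015,
    }

    # Independent multiplicative errors add in quadrature.
    total = math.sqrt(sum(f ** 2 for f in rungs.values()))
    print(f"combined fractional distance error ~ {total:.1%}")  # ~3.4%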


It'd be cool if we launched several space telescopes on Voyager-like trajectories.

In 50-100 years they'd get a much better angular fix on stars that are too distant for Earth-orbit-sized angular measurements.

https://en.wikipedia.org/wiki/Stellar_parallax
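
The payoff scales linearly with baseline. By the definition of the parsec, 1 AU subtends 1 arcsecond at 1 pc, so the angular shift across a baseline is just baseline/distance in those units. A quick sketch, with a hypothetical 100 AU probe separation:

    def angular_shift_arcsec(baseline_au, distance_pc):
        """Angle subtended by the baseline at the star, in arcseconds.
        (1 AU subtends 1 arcsec at 1 pc, by definition of the parsec.)"""
        return baseline_au / distance_pc

    # Earth's orbit gives a 2 AU baseline; a hypothetical probe pair
    # 100 AU apart would see shifts 50x larger.
    for baseline in (2, 100):
        shift_mas = angular_shift_arcsec(baseline, 100) * 1000
        print(f"{baseline:>3} AU baseline, star at 100 pc: {shift_mas:.0f} mas")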


They'd need some very big RTGs to last that long, and I don't think we manufacture plutonium-238 at the necessary volumes for that anymore.


I would use Americium-241 instead: longer half-life and much greater availability.

Lower power, but a telescope like this does not need constant power, so some kind of short term power storage (capacitor I would assume, or some kind of ultra long life battery) could handle that.


Regarding the potential plutonium shortage, from Wikipedia: "Americium-241 is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239. The latter needs to be produced first."


Last I remember, RTG manufacturing was very constrained, period. And that was before Russia took a massive shit in Ukraine and got themselves embargo'd by most of the rest of the developed world.


The other solution is to increase the accuracy of parallax itself. This is what the Gaia project is doing. It can measure distances to stars at the Galactic center to about 20%, will measure distances to some 2 billion stars, and is extremely accurate within 300 ly.
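
Those figures are easy to sanity-check: the fractional distance error from parallax is roughly sigma_p/p, with p = 1/d. Assuming a Gaia-like parallax precision of ~25 microarcseconds for well-behaved stars (an illustrative figure; the real precision varies a lot with brightness):

    def distance_error_fraction(distance_pc, sigma_uas=25.0):
        """Fractional distance error ~ sigma_p / p, where p = 1/d."""
        parallax_uas = 1e6 / distance_pc  # parallax in microarcseconds
        return sigma_uas / parallax_uas

    print(f"Galactic center (~8 kpc): {distance_error_fraction(8000):.0%}")  # 20%
    print(f"nearby star (100 pc):     {distance_error_fraction(100):.2%}")   # 0.25%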


New Horizons has taken some star pictures from the Kuiper Belt and you can easily spot the parallax of some nearby stars just by eyeball. I'm not sure that it has a good enough camera for any kind of precision measurement, but it was really cool to see that.


Why compromise? Might as well launch a solar gravitational lens [1] telescope.

[1] https://en.wikipedia.org/wiki/Solar_gravitational_lens


https://www.centauri-dreams.org/2020/12/10/developing-focal-... (and its critical analysis) https://arxiv.org/pdf/1604.06351

I believe that work on such a project would be interesting, even if it isn't ever done. It is something that if you worked on it as a grad student, and then became a professor and your grad students worked on it ... and they became professors and their grad students worked on it - they'd be the ones seeing the results.

Not so much a "this is a way to do things..." but rather a "thinking about research that spans generations and the problems that they solve in the process of doing that great project has useful spinoffs."

A mission out to 550 AU at New Horizons' speed of 2.9 AU/year (it still boggles the mind to talk about such speeds and distances) is nearly 200 years from launch to primary science objective.

It's "only" been 45 years since Mariner 2 to New Horizons.


That'd be a nice use for Starship.


Well, no it wouldn't actually.

It'd be a nice use for Falcon Heavy - to get them the necessary delta-v. But the constraint isn't cargo space on launch. This isn't a Starlink constellation; the orbits are necessarily massively different, meaning each spacecraft needs its own large delta-v, so a single-launch, multiple-spacecraft option is less attractive.

The constraint is budget, fuel, and ambition.

Now what you could do is get a telescope out around a far outer planet and use the orbital parallax like we do from Earth. A Starship might have a bunch of extra cargo space for this. But I just don't see how it is better than a big fairing on Falcon Heavy.

EDIT: You know what? I am completely mistaken here. I was not thinking about the difference between FH and Starship correctly.


You can easily put a third stage bigger than the whole F9 second stage in the payload bay of the Starship and you likely wouldn't need a super complicated unfolding deployment procedure for the payload thanks to the enormous volume.

Once the thing becomes reliable, there are zero missions the FH can do that Starship can't with a payload equipped with a third-stage motor.


OK I'll bite: What does Starship add here? It's a very large reusable fairing that carries a stage + payload into orbit? Why not attach that to the top of FH?

Why: FH + Starship w/( rocket + payload)

And not: FH + (rocket + payload)


Starship doesn’t add anything. Starship removes complexity by sheer brute force, cheaply, due to reusability. You can architect a mission with ridiculous (before Starship, that is) mass and volume per dollar budgets.

Falcon Heavy can lift a lot of mass but unless you’re launching tungsten into orbit you can’t fit interesting things into the fairing.


FH is already very thin and tall, so it's probably not practical to stick another stage on top even if it could work payload-wise. Besides that, SLS and New Glenn are the only ones with comparable lift capacity; SLS is way too expensive for the capability and its massive SRBs make the ride pretty rough. New Glenn is probably a feasible alternative, although likely more expensive than Starship.


I think you are mixing some things up. Starship doesn’t launch on FH, it launches on Super Heavy, which is its only first stage option.

A full Falcon Heavy stack can take 63,800 kg to LEO. A Starship stack can take 150,000 kg to LEO. The available Δv is substantially higher.


Gaia is pretty small, 710 kg. It was launched on a Soyuz.

Falcon 9 has a payload to Mars of about 4 tons, and Falcon Heavy about 3.5 tons to Pluto. I bet both would work.


Starship could give a single telescope a stronger boost than an FH could, and depending on the mass of one telescope (along with all the redundancies, power sources and transmitters such a long term mission would need), the telescope could be launched with an extra boost stage. So, several Starship launches for several telescopes.


More lift capacity means more fuel capacity onboard the telescope(s).


I think the problem with such a mission right now is the high probability that we could launch a faster mission in the very near future - i.e. with NASA looking at spaceborne nuclear propulsion again, we could send much more capable telescopes out faster. That's not just an "I want it now" benefit: time in space is time you risk having components wear out or break, so getting them onto their missions ASAP is a huge de-risking element.


I wonder - and I am sure this has been examined to death - if there's some calculation that can be performed to find the "optimum" wait-or-launch-now point, given a certain rate of technological development vs. distance/years ...

I am sure there are calculations for this ...

PS. Of course, the rate of technological development is the unknown variable here.


I imagine such calculations immediately break down when you make the input (funding, interest) depend on the output. Which is the case in reality.

For example, say your calculations show that the optimal time for the mission is 10 years from now, once a currently-in-development propulsion technology matures. You publish that, and the investors, government and public, all motivated to support you by the dream of your ambitious mission, suddenly lose interest. Your funding dries up as you're repeatedly told to call back in 10 years. The fact that the 10-year estimate, having been dependent on existing funding, is now "literally never" escapes them.

See also: "nuclear fusion is always 30 years away". It is, because the original 30-year timeframe assumed continued funding that never materialized, and the funding isn't happening because "it's always been 30 years away".


Literally a moving target ...

> You publish that, and the investors, government and the public, all motivated to support you

Interesting how PR/culture is indeed a factor - a tangible factor - in this. Optimizing for "PR goodwill" might be a thing ...


I recall a US commander in Afghanistan saying more or less explicitly that they were doing that - his rationale was that whatever military objectives they were given would be unachievable if Congress pulled them out, therefore the highest priority was always implicitly the preservation of goodwill via e.g. avoiding excessive casualties. Feels back-to-front to me, but maybe it worked for him.


It's called the 'wait calculation' btw if you or another reader are not familiar: https://en.wikipedia.org/wiki/Interstellar_travel#Wait_calcu...
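
A minimal version of it, assuming cruise speed improves exponentially at rate g: a probe launched at time t arrives at t + D/(v0 * e^(g*t)), which is minimized at t* = ln(gD/v0)/g. A toy sketch with made-up numbers:

    import math

    def best_launch_time(distance, v0, g):
        """Minimize arrival(t) = t + distance / (v0 * exp(g * t)).
        Setting the derivative to zero gives t* = ln(g * D / v0) / g;
        if that's negative, launching immediately is already optimal."""
        return max(math.log(g * distance / v0) / g, 0.0)

    # Toy numbers: 550 AU target, 3 AU/yr today, speeds improving 3%/yr
    D, v0, g = 550.0, 3.0, 0.03
    t = best_launch_time(D, v0, g)
    arrival = t + D / (v0 * math.exp(g * t))
    print(f"wait {t:.0f} yr, arrive year {arrival:.0f} "
          f"(vs year {D / v0:.0f} if launched today)")
    # -> wait 57 yr, arrive year 90 (vs year 183 if launched today)

Of course, as discussed upthread, g itself depends on whether anyone keeps funding propulsion work in the meantime.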


As Mark Twain once said, “the best time to launch a tree into space was twenty years ago. The second best is now”


Dunno about Mark Twain, but it appears the best time to launch men to the Moon was more than half a century ago. The second best is now...ok, a year from now...I mean a few years from now.


These uncertainties in Cepheid luminosities are accounted for in Type Ia distance measurements. Particularly with Gaia we can now calibrate the luminosities of Cepheids in our galaxy using parallax observations.

(Knowing this field, I'm sure some astronomers argue there are still systematic uncertainties not being fully accounted for, but from what I understand that case is pretty hard to make given the Gaia results at this point.)


"But according to Freedman, the galaxies’ supernovas seemed to be intrinsically brighter than the ones in farther galaxies. This is another puzzle cosmologists have yet to understand, and it also affects the H0 value. "


Some cool background about the Hubble constant here, including a nice explanation involving blueberry muffins https://news.uchicago.edu/explainer/hubble-constant-explaine...


I just finished off a blueberry bagel, the taste is still in my mouth. Maybe the universe is torus-shaped?


The opening sentence of this article is 100% wrong. Hubble was a good scientist and correctly made no assumptions regarding his observation that objects that are farther away are more redshifted.

The assumption that these observations indicated an expanding universe was delivered to us by Lemaître; if you believe in an expanding universe with a finite age, give credit where it is due...


One of the frustrating aspects of cosmology is how difficult it is to actually apply the scientific method to it. You can't make a couple stars in a lab and see how they behave, the same way you can for particle physics. Fundamentally, most of cosmology comes down to observation, not true experimentation, where the experimenter is directly acting and comparing that to a control group. There are some experiments that can be done, but there are just some fundamental limitations. This is also the case in the so-called "soft sciences" like economics and psychology. But it's even true in some corners of the "hard sciences" like evolutionary biology.


Everyone expected the sharp vision of the James Webb Space Telescope to bring the answer into focus.

I think people forget that, due to the longer wavelengths to which it's sensitive, Webb actually has a poorer angular resolution than Hubble.


One thing that’s not mentioned is how much of the discrepancy comes from error in the theoretical calculation versus measurement error. From the theoretical POV, I’d expect the theory to have upper and lower limits based on the values assumed initially. Getting the theory to within 8% of the measured value is a pretty big achievement. It’s pretty difficult to predict much simpler, real systems to within 8%, let alone something as complex as the expansion of the universe.


Mold: a sci-fi short story[1] that connects this with the recent estimate[2] on the signature from a failed (or not) warp drive.

[1] https://www.youtube.com/watch?v=8URdhSigzjs

[2] https://news.ycombinator.com/item?id=41101144


Not an astronomer by any means, but I just can't see it as a mere coincidence that stars at a distance of (age of the universe) light-years recede at about c. I.e.

"c" / "age of Universe" = Hubble constant (i.e. "c"/13.7 billions ly / 3.26 ly per pc = 71.3 km/s per mpc.)


Frustrating that all the comments seem to be jumping in to talk about dark energy and quintessence and multiverse pontification, when the actual contention in the linked article is that all of this may turn out to be a measurement error and that the Hubble tension may not actually exist after all.


Did I read it wrong, or does the universe expand at 10% of the speed of light!? Could that possibly be why the measurements are off? A close object vs. an object very far away might look like they are in different places, relatively.


The observable universe has a radius of about 14 gigaparsecs. If H0 is 67.4 km/s/Mpc, then a naive calculation puts the edge of the observable universe expanding at 943,600 km/s, or about 3 times the speed of light. Of course we still observe this as merely "close to" the speed of light, but the point is that most of the universe is shooting away from us so fast that we will never see it as it is "now", even if we wait billions of years. We have no way of ever interacting with most of the "modern" universe, even theoretically; it might as well be a different universe. All we will ever see is its image from billions of years ago.
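
The naive calculation, for anyone who wants to check it:

    H0 = 67.4            # km/s/Mpc
    RADIUS_MPC = 14_000  # observable-universe radius, ~14 Gpc
    C_KM_S = 299_792.458

    v = H0 * RADIUS_MPC  # naive recession speed at the edge
    print(f"{v:,.0f} km/s = {v / C_KM_S:.1f}c")  # 943,600 km/s = 3.1c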


Rest assured, this has been taken into account. The scientists who spend their life working on this topic have had the same thoughts you had within minutes of learning about the problem. It's extremely basic stuff actually.


If it's extremely basic stuff and they've spent their entire lives working on it, why did the two teams find different results? Thinking of objects moving at 10% of light speed, or even two times the speed of light away from you, is not intuitive to a layman, and I can't imagine the math.


Because the problem lies somewhere else which is not extremely basic. Relativistic effects are not the issue here.


It's likely you read it wrong; there is no sense in which the universe's expansion has a fixed speed. The Hubble parameter is speed/distance [the figure's axis is km/s/Mpc, for example]. That is the natural unit to explain an expansion rate: things that are farther away ALSO move away from you faster (because the space between you and them all grows at a fixed rate).


Does Quanta Magazine manage to reach this level of detail in other fields?


Is there a good place to find redshift data and distance estimates for many galaxies?


What a wonderful read - thank you


I love this controversy. I swear it's the most exciting thing in modern physics. The thought that there's something fundamentally wrong with the cosmic distance ladder and the way that we measure the expansion of the universe.

I'm no mathematician or physicist, but this stuff just fascinates me. I interpret it something like:

The further one looks, the faster objects in the universe are expanding. However, when one looks out at the universe, they are looking backwards in time, to a time when the universe was expanding at a more rapid pace. Right? Close to the big bang? Because there was a period of rapid expansion after the big bang, so the universe had to have moved faster in the earlier universe? So the only part of space that actually appears to be static would be around our local space, the stars we can see?

Often the universe is depicted as a giant bubble, expanding outwards in all directions. It is how the human mind is built to think, a classic blunder dating back to the days of Ptolemy, where Earth was the center of everything.

At the edge of our observable universe is the beginning of it all. We can then fast-forward through time to see the most modern picture of our universe, the reference frame that is our own galaxy. We are not at rest in a static galaxy, so why should the laws of relativity and dilation not apply to massive objects?

Everywhere else we look is in the past, and the cosmic background is visible from every direction. So once expanded in 3D space, and accounting for time, all of space would appear to be accelerating towards the cosmic background and point of the big bang?

“[...]But in 1929 astronomer Edwin Hubble measured the speed of many galaxies and found, to his surprise, that all were moving away from us-- in fact, the further away the galaxy, the faster it was going. His measurements showed that space is expanding everywhere, and no matter where you look, it will seem as if all galaxies are receding because the distance between everything is constantly growing. Faced with this news, Einstein decided to remove the cosmological constant from his equations.” -some scientific American article

It’s not moving away from us, it’s moving towards the beginning of time at a faster and faster rate, but only because we’re looking backwards through time. In reality, due to our reference frame, and other subsequent frames of observed bodies, we are the only point in the observable universe that is in the “present”. To that effect, when everything appears to be moving away from us at a faster and faster rate, it is moving away from the origin (big bang) at a slower and slower rate.

Galaxies are not moving away, they are showing an accelerating speed due to the time difference, and the slightly higher cosmological constant several hundred million years in the past, a constant that scales with time and its relation to distance according to metric expansion and the speed of light. It is the higher constant with relation to distance that gives the illusion of a universe whose expansion is accelerating.

It can be assumed then, that as you move between vast points in space, the universe will update; showing that astronomical objects aren’t accelerating away, but are not in fact moving at all. If not moving towards each other and closer together.

So no matter where you travel, it is likely that the bubble of the observable universe travels with you, you do not accelerate away faster the closer you get to the galaxies that appear from Earth or other reference points from Earth to be expanding faster away.

If you look at the night sky from a planet in one of these far away galaxies, the overall structure of the universe would be very similar if not the same to the structure as it appears from Earth. With all galaxies appearing to be accelerating away from each other at a faster and faster rate.

Sorry, I'm high on shrooms.


I have a simple mind ... and I can't wrap my head around how they compensate for the huge lens flare from JWST.

https://www.youtube.com/watch?v=Y7ieVkK-Cz0


When did it start that the storytelling around every piece of physics news is framed as a controversy? I know it's been a while, but I feel like it wasn't this way 20 years ago...


The framing is not as contrived as you make it seem; the Hubble tension specifically is a genuine mystery and it is unknown why the indirect and direct methods don't corroborate when our theories otherwise indicate they should.


Fuck, now they've got animated nags that autoplay and don't get filtered out by uBlock Origin. One more site that uses dark patterns to chase away visitors.


If you really want to overload your mind thinking about this, imagine this universe is only a bubble crowded into a group of other bubbles, like a kid blowing soap bubbles.

So the pressure around our bubble is not uniform, there are more bubbles on one side than another, other bubbles are much larger and some are very tiny causing tiny "lumps" of pressure in various places on our bubble.

Decades ago I really liked the "Big Crunch" theory that has now been abandoned; it was so "simple" in comparison to a universe that keeps expanding, and not uniformly at that.


Just because we are natives of this universe does not mean its behavior or characteristics will be naturally sensible to us. There is no “real” reason it should be something “simple” or reasonable to us. The universe simply is; us as well.


"This extrapolation predicts that the cosmos should currently be expanding at a rate of 67.4 km/s/Mpc, with an uncertainty that’s less than 1%."

I can't measure my own weight with an uncertainty that's less than 1%. I wonder what these peeps are on...


Depending on which end of the scale you are interested in, the NIST would be an interesting place to work.

How To Measure The Tiniest Forces In The Universe https://youtu.be/pXoZQsZP2PY and World's Heaviest Weight https://youtu.be/_k9egfWvb7Y - both from Veritasium.

From the expanded description on the heaviest weight:

> Before visiting NIST in Washington DC I had no idea machines like this existed. Surely there's an accurate way to measure forces without creating such a huge known force?! Nope. This appears to be the best way, with a stack of 20 x 50,000 lb masses creating a maximum force of 4.45 MN or 1,000,000 pounds of force. I also wouldn't have thought about all the corrections that need applying - for example buoyancy subtracts about 125 pounds from the weight of the stack. Plus the local gravitational field strength must be taken into account. And, the gravitational field varies below grade. All of this must be taken into account in order to limit uncertainty to just five parts per million (.0005%)


Skill issue.


What is "Mpc"., if anybody knows?


It is a distance unit, megaparsecs.

Or approximately 3.26 million light-years.


Appreciated. Big distance

I have difficulty conceptualizing "distance over time ... over [huge] distance"

I guess it means "chunks about so [megaparsecs] large are moving [themselves] at a speed of so many Km. per second" but I could be wrong


The measured expansion is linear with distance, so two spots one Mpc apart move away from each other at 67.4 km/s, while two spots two Mpc apart move apart at 134.8 km/s. Since the distance between any two points keeps growing, so does their recession speed, and some parts of the now-visible universe will eventually move away from us faster than the speed of light, resulting in them disappearing from our view.

The distances, time and speeds are indeed very hard to comprehend from our usual references :)
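
If it helps, here is the linear law in code, including the distance at which the naive recession speed formally reaches c (the Hubble radius, c/H0). This is a toy calculation that ignores the subtleties of how distance is defined in an expanding universe:

    H0 = 67.4            # km/s per Mpc
    C_KM_S = 299_792.458

    def recession_speed(distance_mpc):
        """Hubble's law: v = H0 * d, for the smooth Hubble flow."""
        return H0 * distance_mpc

    for d in (1, 2, 100):
        print(f"{d:>4} Mpc -> {recession_speed(d):,.1f} km/s")

    hubble_radius_mpc = C_KM_S / H0  # where v would equal c
    print(f"v = c at ~{hubble_radius_mpc:,.0f} Mpc (~14.5 billion ly)")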


It's crazy that some parts of the universe will, for all practical purposes, actually vanish.

Thanks for taking time to break it all down


At some point there will only be the galaxies in our local group visible, it's interesting to imagine a future civilization only having such a limited universe to view.


Doesn't this mean that on a long enough time frame, an observer anywhere in the universe won't be able to see anything because everything else in the universe is too far away to be visible?


Simply - yes. Furthermore, civilizations that arise in that era of the universe will likely have a different cosmology than what we are able to understand today. If you could only see the galaxy that you are in, you wouldn't be able to see galaxies that were forming shortly after the Big Bang, or be able to use supernovas in other galaxies to measure the scale of the universe.

Kurzgesagt did a video on that - TRUE Limits Of Humanity – The Final Border We Will Never Cross https://youtu.be/uzkD5SeuwzM


>> civilizations that arise in that era of the universe will likely have a different cosmology than what we are able to understand today.

That is just mind-boggling


It means that a chunk of space with length 1 megaparsec will be 67.4 km longer a second later. If you divide that new length by the old length, you get the factor by which space expands each second. It’s a very small factor (i.e. very close to 1), but there are also many seconds.
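
Putting a number on "very close to 1" (a quick sketch):

    KM_PER_MPC = 3.0857e19  # kilometres in one megaparsec
    H0 = 67.4               # km/s/Mpc

    growth_per_second = H0 / KM_PER_MPC  # fractional growth each second
    print(f"space grows by a factor of 1 + {growth_per_second:.2e} per second")
    # ~1 + 2.18e-18; the corresponding e-folding time 1/H0 is ~14.5 Gyr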


megaparsec (1 million parsecs)


Thanks much


Reading your very confident response it makes me wonder the same about you.


Their wallets are much bigger than yours.


As is what they are trying to measure. I don't believe a 1% measurement error in any universal constant except perhaps the speed of light...


I suggest you take a look at this list of physical constants, paying special attention to the "uncertainty" column, and then get back to us on why you don't accept any of them except the speed of light.

https://en.wikipedia.org/wiki/List_of_physical_constants


That's an absurd statement. For example, Planck's constant is known to better than 1%, as are the masses of various particles. Heck, the Earth, which is sufficiently non-spherical for it to matter, only differs in radius (between polar and equatorial) by 0.3%!


Here's a nice list: https://en.wikipedia.org/wiki/List_of_physical_constants

G (the gravitational constant) is an interesting one: the value is only known to about 5 significant figures, but GM (the gravitational constant multiplied by the mass of the Earth) is known a lot more accurately, unsurprisingly, considering how well GPS works. Some of those constants seem to be known to about 12 significant figures.
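
That contrast is easy to see in code. Using approximate CODATA-style values (the uncertainty figures below are rough, for illustration), Earth's mass M = GM/G inherits the much larger uncertainty of G:

    # G is known to ~2e-5 relative uncertainty; GM_Earth, from satellite
    # tracking, to roughly 2e-9. (Approximate CODATA-style figures.)
    G, sigma_G = 6.67430e-11, 1.5e-15    # m^3 kg^-1 s^-2
    GM, sigma_GM = 3.986004418e14, 8e5   # m^3 s^-2

    M = GM / G  # ~5.97e24 kg
    rel_err = ((sigma_G / G) ** 2 + (sigma_GM / GM) ** 2) ** 0.5
    print(f"M_Earth = {M:.4e} kg, relative error ~ {rel_err:.1e}")
    # ~2.2e-5 -- dominated entirely by the uncertainty in G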


If you can measure the speed of light extremely precisely, you can measure a lot of constants extremely precisely.


The current scientific consensus is actually pretty good - the consensus being that the standard model, quantum theory, Big Bang theory, particle theory, and the expanding-universe model all have as good a likelihood as not of going down in history the same way as miasma theory, phlogiston theory and Newtonian classical mechanics, given the apparent and vast shortcomings of basic science around our universe's constitution, composition and origins. It's a mature and constructive recognition of our limitations and where we can improve.

One of the proximate causes of our failure to progress in this and other areas is the publish-or-perish funding model. Many researchers are trying to carve out a career, but not necessarily to contribute to progress or advancement. An examination of the funding structure and incentives for universities and researchers appears to be in order.

One suggestion would be to limit grants for private universities and colleges. Another would be to cap compensation for university and college staff. Yet another would be to add funding or tax breaks for technology scale up and application development in the private sector. And another would be cutting funding to masters', PhD and post-doc levels, and increasing funding for 1-, 2- and 4- year career oriented and skill development programs. Yet another suggestion would be limiting loan eligibility to 1-, 2- and 4- year degree or lower programs. Another would be tying university and college funding to the success of attached technology scale up and application development programs. Another would be requiring undergraduate and lower grants and tuition revenue to be spent directly on those programs and facilities, and research funds to be kept and spent separately.

I would like to see some examples of how recent, publicly funded PhD, master's and postdoc research has materially advanced (or will advance) the world's knowledge, and has resulted in material benefits to society, rather than just unreproducible studies on paper and unviable technologies and products.



