Interestingly, this is the first issue of Nature Astronomy; I hadn't known until now that they were launching a topical journal (similar to Nature Geoscience, etc.). Looking through the abstracts in the issue, I have to be honest: they don't seem very different from the papers you find every day on arXiv. I don't think another journal is a good thing in this case, since Nature is not open-access friendly.
To be honest, I don't know too much about Nature Publishing Group, but what I do know is that the standard astronomical journal in America, the Astrophysical Journal (ApJ, "ap-jay"), is very well run. It's owned by the American Astronomical Society (AAS) and published by the Institute of Physics (IOP) [1], which is itself a non-profit and does great work in education and other fields. They have a lenient policy where articles become open access after 12 months.
My point is, they're not blood-suckers like Elsevier :) And I would be unhappy to see Nature use its prestige to muscle in with a journal that has more onerous access policies.
Isn't this playing a bit fast and loose with physics?
Isn't this the same as saying: wow, I'm heavy on planet Earth. The sky is putting a force on me by not having enough mass to counter the Earth's gravity.
Our galaxy is being pushed by a lack of mass in one direction?
We often visualize the effect of mass on spacetime by using a trampoline and a bowling ball. The outer edges are flat and the closer you get to the bowling ball the more spacetime is warped downward.
On the outer edges, the effect is almost nonexistent and the trampoline's surface is nearly flat. What this suggests is that spacetime without mass is flat and that mass warps spacetime in a downward direction. So no mass = flat, mass = curved downward. So the effect is in the range of [0, black hole].
Obviously this is a very loose analogy, but bear with me.
What if spacetime acts more like a waterbed? Not only does mass warp spacetime "downward" but something (in our analogy the displaced water) pushes the rest of spacetime upward.
To use a signal processing analogy, a filter will (nearly?) always affect the passband to some degree, with very high-Q filters creating a knee where amplitude is increased significantly before the filter rolls off. That seems counterintuitive (at least to me) but it's nonetheless true.
Using another analogy, curved spacetime creates regions of "downhill-ness". What if the space between masses gets a little bit of uphill curve as well? That could explain, e.g., the Pioneer anomaly and, at much larger scales, possibly the effect in the article.
General relativity, which describes the effect of mass on space-time, is a very elegant theory with beautiful maths. The trampoline and ball have incidental similarity, but they're for illustration only.
There's nothing that makes the waterbed theory necessarily wrong, but you'd have to posit some new maths, different from and more complicated than what we have with GR.
Yeah, I understand that the trampoline analogy is inadequate but I believe that we don't consider empty regions of space to have any type of "opposite" curvature or to exhibit negative gravity. I believe curvature is thought to be [0, black hole], not [-black hole, black hole].
Happy to be corrected on that, IANAP.
The disparity between what GR predicts and what we observe has led to the cosmological constant / dark energy / dark matter. Of course, those aren't really things in themselves, they're just a measure of that disparity, a fudge factor.
I'm suggesting that the disparity could be explained by spacetime "responding" to regions of curvature by mass with an opposite curvature.
Unfortunately I'm not qualified to do anything with that idea. I just wanted to throw it out there. It's something that's been bothering me for a long time.
Didn't plenty of new ideas start with analogies? I can think of Feynman and Einstein, who described imagining or noticing things in real life which gave rise to new ideas in physics.
You would have to explain to some extent what the analogy is, and what it means for the real world, before we could begin to discuss whether it is in any way correct, let alone useful.
Probably just a semantic difference in what they mean by "start." In this instance, the waterbed analogy would suggest some sort of negative mass-energy and a mass-energy conservation law.
Just to be clear: are you implying that people cannot be inspired by natural events and think of an analogy elsewhere in physics? Or are you saying that an analogy cannot be the foundation of a new theory?
I would agree with the latter, but it sounds like you're stating the former, which is categorically incorrect
(0 to black hole) makes more sense because in this analogy it represents something like the length of a vector (whose details are lost in the process of constructing the analogy). So -42 doesn't make much sense, because it's exactly the same as 42 in the opposite direction.
Question: why don't you try and learn the foundations of the current theories (it's incredible how many man-centuries of high quality intellectual work you can find condensed into a book) before launching into inventing new theories out of nothing?
Newton thanked the giants whose shoulders he was standing on. You are pretending you are a giant on your own. Do you sincerely believe it?
Right, but he was also hit on the head by an apple, according to the myth. So there is a place for analogies like that. Also, I thought your comment was a bit too disparaging. GP had an idea they wanted to share. There is probably a nicer way of saying it won't work than "why don't you go to the library and learn the maths before speaking up..."
> There is probably a nicer way of saying it won't work ...
The question is not why it doesn't work. The real issue is why someone who worked for years to become an expert in a certain field should spend some of their time considering a theory that comes out of absolutely nothing. There may be a nicer way to say this; please help me find it ;-)
There isn't one because it doesn't need to be said.
It's also not true: the theory came out of questioning what would happen if you replaced one set of equations in the model with another, and whether that might make a more interesting (or accurate) model. It's simply expressed in informal language, but the formal (and completely sensible) translation is straightforward to anyone with the necessary background.
As for why experts would care: anywhere from dinner conversation (as is likely the case for myself; I'm not a physics expert, but my company is, and I'm curious about the answer) to it striking a chord related to their work by giving them a new analogy, allowing them to bring more expertise to bear on the problem.
For my professional work, some of the biggest influences have been questions by amateurs (and the subsequent trying to address them).
If it were actually such a useless idea, you'd have spent less time just refuting it than with your unnecessarily negative posts. Instead, you were negative for no clear reason (though, several uncharitable reasons might be inferred).
Please refrain from making such negative posts here. They make the community worse.
The base question, as I understood it, was if regions of curvature caused by mass cause regions of opposite curvature between them (rather than no curvature).
Mass effectively creates positive curvature in a region, so the question is whether that "tugs" and creates regions of negative curvature to keep the "total" curvature 0, in some sense. (There are actually several models here, depending on how you want to distribute the negative curvature.)
This isn't totally absurd on the face of it, for any reason I can think of, and brings up an interesting curvature conservation law.
The next question is if this model (curvature conservation) explains things we see, is contradicted by facts, etc. In parallel (though usually after) mechanisms by which the tugging would happen are proposed.
Because it's fun to pinpoint what exactly is wrong with some superficially plausible idea.
I can understand that it gets very old very fast for some people, but there are physicists who like educating laymen by putting themselves in their shoes and pinpointing the exact spot where they believe something that isn't actually true.
Do I believe in myself? Yes. I'm a genius (by I.Q. score) and work hard every day. That's why I own a software company and have a beautiful family, money, a passport full of stamps, etc.
Do I think I just solved one of the biggest mysteries in physics? No. I think you'd have to be a moron to have read it that way, as I went out of my way to say I wasn't qualified, that it was just an idea, etc. If I were a physicist and working on the idea in earnest, I wouldn't have presented it here.
I don't want to make this too much of a personal attack because it's less about you and more about HN (and the internet) in general:
People like you really (actually, as in for real) make me sad. I'm 100% confident that you're not bold enough to call me an idiot in person but you're happy to do it via a textarea on a web page. That's cowardice. That's a lack of character.
Why are you like this? Because you're a loser. You don't have the courage to carve out a real life for yourself so you attack anyone you can -- especially people who seem to have the qualities you lack. That's what losers do.
Let's turn this around:
You're obviously lacking something. I'm 40, successful, and fiercely independent, but I have had some help along the way -- people who were willing to give me advice, insight into the life of a successful person. If I can help you by way of advice or mentoring to turn your life in a positive direction, reach out to me at m@lattejed.com. I'm 100% serious.
Having said that, this will be my last contribution to Hacker News. This used to be a great place for intellectual discussion. Not so anymore. Leaving has nothing to do with someone not liking my waterbed analogy -- I've been thinking about leaving for a long time. I waste too much time here and get nothing in return.
I think you're taking this too personally. The burden of proof is on you, the person suggesting a new idea, to support it somehow. You're asking other people to do the work involved, which is not fair to them.
The analogy to trampolines and bowling balls is leading you astray. There's no "pushing down", or even a downward direction to push in, in general relativity.
Not a physicist, but I've been quietly hoping something like this was the case for years and would usurp Dark Matter as the reason for rotational anomaly at the outer galactic edges.
It makes sense that space-time is either attractive or repulsive based on the presence or absence of matter. Throw in an overlapping inverse square law at the galactic boundary and perhaps this solves the anomaly by way of a steep gravity wall where attractive and repulsive forces meet.
Again, not a physicist either, but I agree; it seems to make sense spatially.
Can someone explain their distance units to me? I'd read "km s−1" as kilometers per second, which can't be right, and it isn't just a typo, as the magnitude is way too small for "km".
"It was suggested a decade ago that an underdensity in the northern hemisphere roughly 15,000 km s−1 away contributes significantly to the observed flow"
To get a distance you'd recognize, you have to divide the quoted number by H0, the Hubble constant, which is still estimated somewhat differently by different observations (though future projects should improve the estimate).
If we take the approximate value of 70 km/s/Mpc (the last unit is a megaparsec), we get that 15,000 km/s corresponds to 214 Mpc, which multiplied by 3.26 gives the number in millions of light years: around 700 million light years. Or, as an even easier rule of thumb using the current approximate constants, a little less than 22,000 km/s is a billion light years.
The Milky Way is estimated to be at least 0.1 million light years across, so the 15,000 km/s you quoted is up to 7,000 Milky Way diameters away from us.
The reason to use speeds rather than distances in the papers is that the speeds (redshifts, to be precise) are what we actually measure, and the distance estimates will improve.
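To make the arithmetic above concrete, here's a minimal sketch of the Hubble-law conversion, assuming the illustrative value H0 = 70 km/s/Mpc used in this thread (the exact value is still debated):

```python
# Rough conversion of a recession velocity to a distance via Hubble's law,
# D = v / H0. H0 = 70 km/s/Mpc is an illustrative value, not a measured one.
H0 = 70.0            # Hubble constant, km/s per megaparsec
MLY_PER_MPC = 3.262  # million light years per megaparsec

def velocity_to_mly(v_kms):
    """Approximate distance in millions of light years for a recession velocity in km/s."""
    return v_kms / H0 * MLY_PER_MPC

print(round(velocity_to_mly(15000)))  # roughly 700 million light years
```

With these constants, 15,000 km/s comes out to about 699 million light years, matching the "around 700 million light years" figure above.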
> The reason to use the speeds and not the distances in the papers is that we actually measure the speeds (redshifts, to be precise)
Comoving coordinates are a more important consideration. The coordinate distance (which is the comoving distance) between galaxies is fixed in the standard cosmological model.
Standard candles like supernova light curves do indeed rely on redshift, and it is straightforward to convert cosmological redshift into a recession velocity.
However, standard rulers include angular-diameter and surface-brightness relations, which do not measure redshift; one calculates the redshift for these objects through, e.g., the Mattig or Sigma-D relation.
Extragalactic distance ladders combine these and other classes of observables.
Using redshift and a scale factor for cosmological distances rather than Gly or Gpc also defocuses one from distracting differences between cosmological proper distance and relativistic proper length, for example, while bringing into focus peculiar velocity. It's also handy when dealing with different systems of units, such as geometrized ones in which G=c=1, as is common practice in cosmology.
Just by way of example, starting with your H_0 value, and being slightly sloppy (especially with respect to significant figures), for ~1 Gly I plug in H_0 = 70, z = 0.076, \Omega_{m} = 0.31, and the standard values for the other densities, and get four useful distances: comoving = 750 h^-1 Mly, proper distance at z=0 = 1043 Mly, proper distance at z=0.078 (a=0.928) = 969.4 Mly, time interval = 1.004 Gy.
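The proper distance today quoted above can be reproduced with a short numerical integration, assuming a flat Lambda-CDM model with the stated H_0 = 70 and \Omega_m = 0.31 (radiation neglected; a sketch, not a substitute for a real cosmology code):

```python
import math

C = 299792.458       # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (illustrative value)
OM, OL = 0.31, 0.69  # matter and dark-energy densities, flat universe

def E(z):
    """Dimensionless Hubble parameter H(z)/H0, radiation neglected."""
    return math.sqrt(OM * (1 + z) ** 3 + OL)

def comoving_distance_mpc(z, steps=10000):
    """D_C = (c/H0) * integral_0^z dz'/E(z'), by the trapezoidal rule."""
    h = z / steps
    total = 0.5 * (1 / E(0) + 1 / E(z))
    for i in range(1, steps):
        total += 1 / E(i * h)
    return (C / H0) * total * h

# Comoving distance equals proper distance at z=0 for a flat universe.
d_mly = comoving_distance_mpc(0.076) * 3.262
print(round(d_mly))  # close to the ~1043 Mly quoted above
```

Running this gives roughly 1043 Mly, in line with the figure in the comment above; the small residual difference comes from rounding the constants.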
Is it really useful (rather than simply nifty) to think of a galaxy at a slightly greater redshift moving away at ~ 2x Earth's diameter every second?
Thanks. You obviously have much deeper knowledge of the subject; I was hoping somebody more knowledgeable would eventually comment and add some more details.
For those who want to know what you're talking about (at the start), which I avoided for simplicity: lacking a more exact link, I hope Wikipedia is good enough:
If I may ask (I don't do astrophysics professionally, but I'm interested): I don't understand why you started with z = 0.076 and then calculated the proper distance at z = 0.078. I also don't know what (a=0.928) stands for there.
The two proper distances can be thought of as follows. The larger is what a notional instantaneous (>> FTL) analogue of a radar measurement would give as the distance to the galaxy we are observing. The smaller is what we would read if we had optical telescopes that could read the display of such a measurement device in that distant galaxy, showing its instantaneous distance to us.
(Or, instead of an instant FTL analogue of radar, you could use a pair of extremely long measuring tapes with the 0 end anchored on the observed party; if we looked through a telescope at their tape's indicator, it would read 969.4 Mly, but looking at our own tape we would read 1043 Mly.)
The numbers are close because ~1 Gly is near enough that the metric expansion of space hasn't carried our galaxy cluster and theirs very far apart in the past billion years.
They would see similar numbers looking at a galaxy at z=0.076 in a different part of their sky than where the Milky Way is; we would see a different and almost certainly higher redshift when looking at that galaxy from here.
Since the universe is expanding and the recession velocity is greater for more distant objects, you can express the distance from Earth as a velocity.
Seems like cheating. The expansion of the universe is a correlation between the distance to things and their redshift. You still need a proper measure of the distance to something, rather than using its velocity as a stand-in for distance. Particularly if you're looking at things like flow fields.
It's not. It's honesty, and it keeps the measurements in the form that will remain usable. What we measure directly is the redshift. The estimation of the distance (that is, of the exact value of H0, the Hubble constant) is done using various methods, and the results aren't bad, just not yet precise enough to combine with the redshift. That's why we prefer the redshift for now, as the speed can be calculated from it exactly: https://en.wikipedia.org/wiki/Redshift
Knowing that the Universe is "flat" ( https://en.wikipedia.org/wiki/Shape_of_the_universe ) also allows us to do good calculations, even though we still don't have the exact value of H0. In fact, the situation is not so bad: the estimates based on the Planck satellite (measuring the state of the Universe 13.7 billion years ago, before galaxies and stars existed) and the ones based on Hubble Space Telescope data (measuring the state of the Universe much more recently) differ by only some 9%. That is amazing enough for me. This difference, after more research, may produce some new physics discoveries or make us improve some engineering (and we surely should invest in more research), but it's quite good for everybody else: even if the "emptiness" from this article affecting the Milky Way is not 700 million but 730 million or 670 million light years away, it won't affect our plans. For example, the Andromeda galaxy is just some 2.5 million light years from us, and we estimate it will collide with the Milky Way in some 4 billion years, give or take. Should we care much whether the "emptiness" is 268 or 292 times farther away than Andromeda is at this moment?
See my other response here for how you can calculate the distance from the speed, based on your preferred estimate of H0; it's simple, just less precise at the moment than simply stating the speed (or the redshift).
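As a sketch of the speed-from-redshift step itself: for small z the familiar v = c·z works, and the special-relativistic Doppler formula is a common textbook refinement (neither is strictly correct for cosmological expansion, but both illustrate the conversion):

```python
# Compare the naive v = c*z with the special-relativistic radial Doppler
# velocity. At small z the two nearly coincide; they diverge as z grows.
C = 299792.458  # speed of light, km/s

def v_naive(z):
    """Low-redshift approximation: v = c * z."""
    return C * z

def v_relativistic(z):
    """Special-relativistic radial Doppler: v = c * ((1+z)^2 - 1) / ((1+z)^2 + 1)."""
    s = (1 + z) ** 2
    return C * (s - 1) / (s + 1)

for z in (0.01, 0.05, 0.076):
    print(z, round(v_naive(z)), round(v_relativistic(z)))
```

At z = 0.01 the two formulas agree to within a fraction of a percent, which is why quoting "15,000 km s−1" and quoting a redshift are interchangeable in practice at these scales.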
Yes, clickbait. The "invisible force" here is gravity.
This is the key bit of context:
> When describing the gravitational dynamics in co-moving coordinates, by which the expansion of the Universe is factored out, underdensities repel and overdensities attract.
So it's basically a coordinates transformation thing.
Upon reflection, Hoffman et al. (2017) itself may arguably be clickbait. To the extent that I understand it, they're looking at inhomogeneity in cosmic expansion. It seems equivalent to turbulence in fluid dynamics. That is, non-uniform flow.
And in considering airflow, for example, one doesn't say that underdensities repel air, or that overdensities attract air. Rather, it's the non-uniform flows that create the underdensities and overdensities.
Anyway, it seems to me that Hoffman and coauthors should have stuck to describing correlations, because that's all they have. Using the language of causality is confusing. They don't mention it, but maybe they have an agenda to promote varieties of dark matter with positive and negative gravity.
> Oh, I see. The title of the HN submission has been creatively edited and is not the title of the original Nature article.
Well yeah, they seem to have put it into layman's terms, but as a layman it appears to be an accurate representation of the article. It's certainly more descriptive than "The Dipole Repeller", so it's not clickbait.
Yeah, it's about the motion of our Local Group, but yes. Here's the summary provided in the XML:
>The presence of a large underdensity, the dipole repeller, is predicted based on a study of the velocity field of our Local Group of galaxies. The combined effects of this super-void and the Shapley concentration control the local cosmic flow.
Yup, saying "invisible" there was redundant. And that's not all: saying "force" is redundant too, since anything pushed around is pushed around by a force.
"Milky Way pushed around" would have been sufficient to convey the interesting part. It's interesting because, at galactic scales, isn't it usual to be pulled around rather than pushed around?
[1] https://en.wikipedia.org/wiki/IOP_Publishing