I'm not sure whether this is the researchers' fault or the scientific press's, but it would help if the "discoveries" theorists make with hitherto unestablished and untested theories, like the various string- and QG-based theories, were not touted as discoveries about the real world.
"Discoveries" about the mathematical structure of these theories are actually incredibly important even if no current day experiment can conceivably test them. Special and general relativity were discoveries made in this way (at least to an extent), and now general relativity is important for the proper functioning of technology as "trivial and boring" as GPS. Even earlier in our history, much of thermodynamics was developed in a similar fashion.
See the sibling comment for another version of this argument, but finding the weird mathematical coincidences between the competing mathematical theories of nature is very useful. In other words, there are mathematical and logical tests that can be as important as experimental tests for the progress of our understanding of nature.
Came here to (wrongly) correct your statement, as I remembered that the only relevant effect was from special relativity.
It turns out it is the exact opposite: general relativity effects prevail [0]. Thanks for forcing me to check!
> Special Relativity predicts that the on-board atomic clocks [...] should fall behind clocks on the ground by about 7 microseconds per day [...] due to the time dilatation effect of their relative motion.
> Further, the satellites are in orbits [...] where the curvature of spacetime due to the Earth's mass is less than it is at the Earth's surface. A prediction of General Relativity is that clocks closer to a massive object will seem to tick more slowly [...]. A calculation using General Relativity predicts that the clocks in each GPS satellite should get ahead of ground-based clocks by 45 microseconds per day.
> The combination of these two relativistic effects means that the clocks on-board each satellite should tick faster than identical clocks on the ground by about 38 microseconds per day (45-7=38)!
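For anyone who wants to sanity-check those figures, the weak-field back-of-the-envelope version fits in a few lines (a sketch; the orbit radius and Earth parameters are round textbook values):

    import math

    c = 299_792_458.0      # speed of light, m/s
    GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
    r_earth = 6.371e6      # mean Earth radius, m
    r_orbit = 2.6571e7     # GPS orbital radius, m (~20,200 km altitude)
    day = 86_400.0         # seconds

    v = math.sqrt(GM / r_orbit)                  # orbital speed, ~3.87 km/s
    sr = -(v**2 / (2 * c**2)) * day              # kinematic time dilation (clock lags)
    gr = (GM / c**2) * (1/r_earth - 1/r_orbit) * day  # gravitational effect (clock gains)

    print(f"SR:  {sr * 1e6:+.1f} us/day")          # about -7.2
    print(f"GR:  {gr * 1e6:+.1f} us/day")          # about +45.7
    print(f"Net: {(sr + gr) * 1e6:+.1f} us/day")   # about +38.5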
I think the key here is the word "fact". Scientific models are models. They are good models if they correspond with measurements that we make in the real world. They are even better models if they make a prediction that we can test in the real world. However, a model is never a "fact". "The FSM did it" is always going to be a possibility, even if it is an incredibly remote one.
We use scientific models because they are useful not because they are fact. Even if our model does not always match our observations it can still be a useful model in some circumstances. It's very important when talking about science that we understand that it isn't fact. It is a model. It might be a good model. It might be a bad model. It might be a useful model for our application. It might be a poor model for our application. That's it.
When you see a headline that says "X is Y" and it's about a scientific model, what that means is "If our model is consistent with the real world then the model predicts that we should find that X appears to be Y in the real world". But that's a bit too long to put in a headline and you would have to do it for every scientific conversation.
It is extremely unfortunate that on the first day of junior high school, when you start doing "proper" science classes, they don't sit people down and explain what a scientific model is. Mostly I think it's because the teachers don't know, because their teachers never taught them.
Mathematical theorems (especially no-go theorems, one of the most powerful tools in physics) are at least as much of a fact as an experimental measurement. I can concede that a pop-sci outlet should be clear about what is an experiment and what is a mathematical theorem, but was it really not clear in this case?
This is a sincere question, and I care to hear your opinion: do you believe that a reader of phys.org would need more than a glance at the article to know this is a discovery about the mathematics of general relativity, not an engineering breakthrough in the creation of a scifi-like hyperdrive?
> Mathematical theorems (especially no-go theorems, one of the most powerful tools in physics) are at least as much of a fact as an experimental measurement
That's an overstatement. Measurements are more important than theory. We cannot form theories without measurements, and yes theories inform our measurements, but science always begins and moves forward with measurements.
General relativity, quantum computing, some parts of thermodynamics/information theory/statistical physics, some parts of quantum field theory, some parts of condensed matter physics are all things that started as purely mathematical statements. I am not trying to diminish the monumental importance of empirical measurements, but please do not diminish the importance of logical consistency and mathematical constraints. We can say a lot about our universe with great certainty even though in some cases making direct measurements is orders of magnitude beyond our technology.
GR was amazing, but it wasn’t well known and accepted until the first measurements confirmed it. Quantum computing is still very much up in the air, but it’s also weird to bring it up in this context, because QM is one of the most tested theories of all time. QED in particular is tested to the most decimal places of any prediction, ever. It’s also weird to act as though GR and QM started out in a vacuum, instead of what they really were: the products of centuries of math, theory, and experiment.
Edit: Unrelated, but I see you’re a fellow Greg Egan fan, good to meet you! Planck Dive has to be one of my all time favorites.
We are talking past each other: it is a bit contradictory to say "quantum computing is up in the air" and "QM is one of the most tested theories" as an argument against what I am saying. Yes, both statements are true, but they are in no way counterpoints to what I am saying; if anything, they support it. QM is a well established and empirically tested theory. Quantum computing is a purely mathematical construct that emerged from that well established theory without any experimental evidence for it, and only now, 30 years later, are we starting to have a chance to employ this purely mathematical construct in practice.
Same with GR: it was a purely mathematical construct for a while until we could test it, but there were too many theoretical clues that it must be right.
And this is not selection/survival bias: it is extremely rare for humanity to find a robust mathematical construct for which we can have a high degree of certainty that it describes the universe well. In the few cases where we have had that certainty, due to mathematical proofs in seemingly disjoint fields, we ended up being right.
To push it to stuff like super strings and quantum gravity: few respected scientists would claim that their pet mathematical construct is correct, but many of them will say "the vague commonalities between all these diverse and seemingly unrelated mathematical constructs definitely point to an underlying fundamental construct".
Everything you list was formulated in response to measurements that didn't fit with existing theory, with the possible exception of quantum computing. I'm not diminishing the value of theory, merely correcting what I think was an overstatement of its importance on your part.
There are some results around QG, such as holography, which seem to be universal. That is, they make sense with strings, loop quantum gravity, or any of about ten other lines of work that people can do quantum gravity calculations with today.
If you can prove that all of those have to say the same thing, that is a powerful result and probably means something about our world.
It seems pretty reasonable that wormholes could exist so long as they don't form closed-timelike curves, so I think these people are getting at the quantum roots of the "cosmic censorship principle".
> It seems pretty reasonable that wormholes could exist so long as they don't form closed-timelike curves
"Wormholes" are two entangled black holes that have been subsequently moved light years apart, preserving the entanglement during the move. They are a great mathematical toy for exploring the nature of entanglement.
But that's it. We have trouble preserving the entanglement of just a few ultra cold particles for a few milliseconds. I am far^^^far more likely to fall onto the floor by quantum tunnelling through my chair than a useful wormhole ever appearing. Postulating they actually exist is not in the slightest bit reasonable.
I think it goes beyond the scientific press. There seems to be a general tendency to squash the intricacies of anything and react impulsively to some red herring or whatever; such comments dominate all online forums I've ever seen.
Here is a fantastically interesting article that may offer some new insight into the nature of spacetime; the top comment on the HN thread about it is some person loudly and pretentiously mistaking their own misunderstanding of the intricacies of 'truth' in science (and how there are different levels of it) for a flaw in the work itself.
You know, the underlying sentiment here is a good one. How do we know what we know, what do our theories really tell us, etc. All excellent questions. But you didn't express that at all, instead you say these things shouldn't be 'touted as discoveries about the real world.'
That's all you man! No theorist ever has wanted people to think this of their work, it slaughters the beautiful intricacies of it and leads to public relations disasters like this.
Don't go misunderstanding shit and then confidently blaming it on other people, man. Rhetoric matters.
The whole "it's just a theory" thing has done so much harm to public understanding and appreciation. It's an impossible starting point for a conversation, if you think "it's just a theory", you're not wrong, as much as entirely missing the point. How do you start with that and get to an understanding that things aren't always absolutely right or wrong, that contexts and assumptions matter, that things can be true in one setting (the math) and uncertain in another (the real world). That for most things what we are more than anything is uncertain. That there are many different kinds of uncertainty.
How are the subtle intricacies of things (that the answer to almost everything starts with "well, that depends...") supposed to survive forums where conversations are heavily selected for ability to grab attention? Not a rhetorical question, if you got ideas I want to hear them.
PS noobermin, I don't mean to be overly-critical of you although it sure sounds like it. I'm using 'you the pronoun'. Just something I think about a lot that you catalyzed me into trying to express...
IMO, your premise is a strawman. I don't detect any "it's just a theory" sentiment in the top level comment. What the comment says is that mathematical statements about "hitherto unestablished and untested theories" should not be taken as statements about the world.
This, to me, shows a rather sophisticated understanding of scientific truth, namely that we should demand that a scientific theory (as opposed to a mathematical construction) be falsifiable. AFAIK, there is no question that the mathematics of string theory et al. is sound; the question is whether these theories are physical, and whether they are falsifiable. These are the exact objections the top-level comment makes when referring to "unestablished and untested theories".
There is an important point you are not addressing: it is one thing to make a statement about one single mathematical construct (it is cute, but not a big deal). It is another thing to make a statement that constrains a whole family of seemingly disconnected mathematical constructs (this is now a serious constraint on what the laws of the universe can plausibly be).
From a sibling comment:
> There are some results around QG, such as holography, which seem to be universal. That is, they make sense with strings, loop quantum gravity or any of about ten other lines of work that people can do quantum gravity calculations with today.
> If you can prove that all of those have to say the same thing, that is a powerful result and probably means something about our world.
I'm not exactly sure which version of "If you can prove that all of those have to say the same thing...." you're arguing for. It doesn't scientifically matter what statements you can make about any number of mathematical constructs if none of those constructs make testable predictions. Remember, falsifiability is a necessary condition for a construct to be a valid scientific theory.
OTOH, it does matter if you are able to make a statement of the form "Any mathematical construct that accurately describes X must have property Y." It is not clear that quantum gravity (or the linked article) meets this latter standard.
Ah, I think we have a deep "philosophy of science" disagreement. Basically, it is not just "falsifiability", but also "probability" of being correct. Probability that can be informed by things like formal versions of Occam's razor (e.g. Kolmogorov complexity, Akaike information criterion, etc).
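To make "formal Occam's razor" concrete, here is a toy sketch (all numbers invented for illustration): fit polynomials of increasing degree to noisy linear data, and let AIC = 2k - 2 ln L penalize parameters that improve the fit only marginally.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # ground truth: a line plus noise

    n = x.size
    for k in (1, 2, 5):   # polynomial degree = model complexity
        rss = np.sum((np.polyval(np.polyfit(x, y, k), x) - y) ** 2)
        # for Gaussian errors, -2 ln L = n ln(rss/n) up to a constant
        aic = 2 * (k + 1) + n * np.log(rss / n)
        print(f"degree {k}: AIC = {aic:.1f}")

The higher-degree fits always have lower residuals, but AIC typically still prefers the true low-degree model.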
Here are a few premises on which I based my statement (I will pick one particular avenue of scientific research, so I will not speak in full generality; you can of course disagree and say that it does not generalize at all, but that would be a philosophical discussion, i.e. it does not matter at all that we have different opinions, as both are equally valid ways of pursuing scientific truth):
1. Computational complexity is a science on its own. Computational complexity also makes useful statements about the limits of the physical laws in our universe (a la the "extended Church-Turing thesis" or Aaronson's "NP-complete Problems and Physical Reality").
2. NP=P vs NP!=P is a question that has a definite answer within the realm of science, but we do not know the answer yet. The answer will constrain what the permitted laws of physics are, due to point 1 above.
3. There are a lot of clues that NP!=P, coming from very separate, disjoint fields of math. Mainly, we already know there are a lot of "phase transition"-like phenomena when we apply approximate algorithms to problems that can be parameterized to be NP-complete only if some ε is larger than some critical value.
4. These "phase transitions" are purely mathematical constructs, but due to the previous points they constrain what is permitted in the universe as strongly as any 5-sigma measurement in a particle accelerator.
To bring this back to the discussion of speculative theories of quantum gravity: if all of the competing theories of quantum gravity (all of them very speculative and unproven) share a handful of universally agreed-on predictions (although derived in completely separate ways), that is as strong a constraint as any 5-sigma measurement in a particle accelerator or cosmological observation.
Or if you permit me to phrase it in yet another way: there are a ton of theory assumptions behind any 5-sigma (or 10-sigma) experimental measurement. These assumptions are the same ones that inform the purely mathematical constraints. If both the math constraints and the interpretation of an experiment are based on the same assumptions, why take the interpretation of the experiment any more seriously than the pure math?
Sorry for the length of this text - I am finishing my dissertation this week (doctorate in physics), so this is very much on my mind at the moment.
Yes, I think we do have a deep, philosophical disagreement here. You seem to be taking a more "coherentist" approach (very roughly, "if theories X, Y, and Z all explain observed phenomena P_1, P_2, ..., P_n, then there's probably something to all of them"). OTOH, I would counter that if theories Y and Z offer no testable predictions beyond those which established theory X makes, then they are of limited value. In this way, I would follow Lakatos and Kuhn most closely.
The irony here is that, although I am a professional software engineer, the entirety of my post-secondary education is in mathematics. I did some research in graph theory when I was in grad school, so I am actually quite familiar with this notion of "phase transition" that you're referring to. But, I also don't consider mathematics to be a science, because I define "science" in the same way as Wikipedia: "...a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe." There are many things mathematics can describe that are inherently unphysical, which, IMO, makes mathematics itself not a science, however excellent a tool for science it is. And, it's on this basis that I'd come at all of your points 1-4.
Regarding QG theories and their explanatory power, I would say that the fact that they have a handful of universally agreed upon predictions does not make any one of them correct in the least. The value in any one theory of QG is not in the areas where it agrees with other theories of QG; it's in where that theory disagrees with established theory in a way that we can see in a lab, a telescope, or an accelerator. What we're trusting here is the physical behavior of the universe, and we're hoping that it's correctly predicted by the mathematical construct that is the theory.
I really appreciate your thorough description. If you are a fan of "mathy" scifi, check out "Permutation City" - it takes what I have described as my philosophy of science (physical reality being dependent on math) to a very entertaining extreme. I think you might find it entertaining even if you have the opposite views (math "just" being an indispensable tool for physics).
> If both the math constraints and the interpretation of an experiment are based on the same assumptions, why take the interpretation of the experiment any more seriously than the pure math?
Because math models the experiment; it does not define its results.
Even math as fundamental as the laws of thermodynamics does not constrain what is permitted; they are axioms which describe the universe as best we are able to view it. If all competing theories of quantum gravity agree on a point, it would mean almost[1] nothing until that point is observed.
From the outside it feels like there is a fallacy common in at least some branches of physics (string theory...) where over-fitting math is equated with insight.
[1]I suppose you could break out Bayes' theorem and try to work out a prior for comparable theories being wrong, degree of independence between theories, etc...
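For what it's worth, the naive version of that calculation is short (every number below is a made-up assumption, included only to show the shape of the argument):

    # How much should agreement among k theories raise our credence in a
    # shared prediction? Treat the theories as fully independent -- the
    # most generous case -- and apply Bayes' theorem.
    prior = 0.1          # assumed prior that the shared prediction is true
    p_if_true = 0.9      # assumed prob. a theory predicts it, if it is true
    p_if_false = 0.3     # assumed prob. a theory predicts it anyway, if false

    def posterior(k):
        lt, lf = p_if_true ** k, p_if_false ** k
        return prior * lt / (prior * lt + (1 - prior) * lf)

    for k in (1, 3, 5, 10):
        print(k, round(posterior(k), 3))

The real fight is over the priors and over how independent the theories actually are; shared assumptions push p_if_false toward p_if_true and kill the effect.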
Maybe 100 years ago. With today's fundamental physics experiments (accelerators, non-EM telescopes, and especially quantum foundations tests), the mathematical constructs are an inseparable part of the interpretation of the experiment.
And it is hardly over-fitting when the same results come from vastly different mathematical constructs.
I really appreciate the clarity of this comment. While hacker news is certainly one of the better forums, the amount of armchair analysis and pseudo-intellectualism, especially on scientific non-IT posts, is pretty frustrating. It reminds me of the "middlebrow dismissal" style https://news.ycombinator.com/item?id=5072224
I think for anyone for whom this theory is at all relevant, it's already clear that this is a theory, not a proven fact.
The fact that it's theoretical is mentioned in the article for anyone who bothered to read past the first few paragraphs. For the rest of us, it just makes for interesting party conversation whether the theory pans out or not (which couldn't be proven by actually transiting a wormhole within any of our lifetimes).
So no harm, no foul -- anything that makes theoretical science interesting in the popular press is a win in my book.
Can someone point out where the article says why it's slower? I'm having difficulty unpacking:
"The new theory was inspired when Jafferis began thinking about two black holes that were entangled on a quantum level, as formulated in the ER=EPR correspondence by Juan Maldacena from the Institute for Advanced Study and Lenny Susskind from Stanford. Although this means the direct connection between the black holes is shorter than the wormhole connection—and therefore the wormhole travel is not a shortcut—the theory gives new insights into quantum mechanics."
Based on the colloquia I watched by Lenny Susskind, the problem is that you can put something into a wormhole, but to get it out the other end, you have to carry the encoding you get to the other end of the wormhole to decode the output. Because the distance to the singularity grows at the speed of light, the length of the wormhole is essentially infinite. It is only the operator you get from putting the object into the wormhole that lets it tunnel through that infinity, so no shortcut.
> the problem is that you can put something into a wormhole, but to get it out the other end, you have to carry the encoding you get to the other end of the wormhole to decode the output.
That sounds a lot like how entangled particles can't be used to communicate faster than light.
While you can make observations immediately, you can't turn them into useful information about the other end, at least not until you get missing data which has to reach you by normal means.
Hence the focus on using entangled particles as a kind of tamper-detector or "reusable random one-time pad", rather than for transmitting information.
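Textbook quantum teleportation makes this concrete, and it is small enough to verify numerically. A plain-numpy sketch (the state and labels here are mine): the unknown qubit is recovered on the far side only after the two classical measurement bits arrive and the matching correction is applied.

    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    P = [np.diag([1, 0]).astype(complex),    # projector |0><0|
         np.diag([0, 1]).astype(complex)]    # projector |1><1|

    def kron(*ops):
        out = np.array([[1.0]], dtype=complex)
        for op in ops:
            out = np.kron(out, op)
        return out

    # qubit 0: Alice's unknown state a|0> + b|1>
    # qubits 1, 2: a Bell pair shared by Alice and Bob
    a, b = 0.6, 0.8j
    bell = np.array([[1], [0], [0], [1]], dtype=complex) / np.sqrt(2)
    psi = kron(np.array([[a], [b]]), bell).ravel()

    # Alice's local operations: CNOT(0 -> 1), then H on qubit 0
    cnot01 = kron(P[0], I, I) + kron(P[1], X, I)
    psi = kron(H, I, I) @ cnot01 @ psi

    for m0 in (0, 1):            # enumerate Alice's measurement outcomes
        for m1 in (0, 1):
            branch = kron(P[m0], P[m1], I) @ psi
            prob = np.vdot(branch, branch).real
            branch /= np.sqrt(prob)
            bob = branch[m0 * 4 + m1 * 2 : m0 * 4 + m1 * 2 + 2]
            # Bob cannot apply this fix-up until the classical bits
            # (m0, m1) reach him over a light-speed-limited channel
            fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
            print(m0, m1, f"p={prob:.2f}", np.allclose(fix @ bob, [a, b]))

Each outcome occurs with probability 1/4, and in every branch Bob holds exactly Alice's original state once (and only once) the correction is in.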
That is precisely the lesson of the ER=EPR principle, which states that every entangled pair really has a 1-qubit wormhole connecting it, and the space inside a macro-scale wormhole is the consequence of combining all those tiny wormholes together. Wormholes are entanglement.
Two entangled wormhole apertures function as Star Trek style transporter pads. Feed mass into one aperture, scanning it as it goes in. Transmit the pattern to the other aperture, using electromagnetic energy at the speed of light, and when that end ejects a mass, apply the pattern to it, so it resolves into an exact copy of what you put in.
The distance the mass travels may be shorter than the distance the energy travels, but it can't resolve exiting mass as that object until the information--that had to travel at the speed of light--arrives.
So you could potentially use a wormhole as a suicide machine that transports a copy of you, that thinks it actually is you, to a distant location no faster than the speed of light, in zero subjective time for the thing that will then think it's you.
I don’t think the suicide booth model of a teleporter fits a quantum teleporter model, only the 3D printer teleporter model. From a quantum point of view, all (e.g.) electrons are indistinguishable excitations in the same field, and it’s the pattern that makes you “you”. If you’ll forgive a bit of poetic licence, quantum teleportation seems more like Discworld magic, where you have to exchange two identical sized lumps of matter.
As the sibling poster implies, not only are there black hole horizons, but we are ourselves inside of a cosmological horizon. Objects outside the horizon are impossible to see, and objects approaching it from the inside will redshift and disappear from view the same way they do when entering a black hole from the outside.
At least theoretically we should be able to receive signals from arbitrarily far away no matter the expansion of the universe. They'll be redshifted like hell though, as you say.
Special Relativity states that no two objects may pass each other faster than the speed of light. Space itself isn't bound by this restriction. To wit: no thing can move faster than the speed of light, but space is not a thing.
Space is dynamical. It can stretch and expand and even break, but it doesn't move. The result of this is a cosmological horizon that prevents these objects from communicating, because the distance between them is growing faster than light can cross it. This is the same reason you can't escape a black hole.
It really does feel like the two concepts are related, but there are enough differences that you can't just say we're inside a black hole. For one thing, the inside of a cosmological horizon corresponds to the outside of a black hole. I do think the similarity is part of what drives so many physicists to study black holes and quantum gravity, though. Horizons are a consequence of dynamical spacetime, but a deeper theory is needed to see the universe and black holes as manifestations of some more primitive concept.
This is actually very similar to what I've heard an observer falling into a black hole would experience... as you fall in, space curves to bend all outward worldlines back to the singularity. You never experience the horizon itself even while passing it, but would see the horizon wrapping itself around you in all directions.
Astronomers in different galaxy clusters looking at the M87 black hole would agree within their observational abilities that there is a horizon localized deep within the visible matter of the galaxy. Astronomers will keep agreeing that into the far future.
Astronomers in different galaxy clusters will agree that each sees a set of cosmological horizons, but do not agree on where it is and what's inside it.
Schematically:
th<--g1<--<--yh--Them<--->You-->th-->g2-->yh
The arrows represent the metric expansion in each direction; Them and You are the two respective observers' galaxy clusters. g1 and g2 are two distant galaxy clusters; yh and th are your and their horizons, respectively. g1 is inside their horizon but has crossed yours. g2 is inside your horizon but not inside theirs.
Every point in an expanding universe has its own set of cosmological horizons non-identical to its neighbouring points, and dramatically different from points at great distances in spacetime (that goes for great gaps in time at the same spacelike location, and great gaps in space at the same lookback time or scale factor).
For astrophysical black holes, a black hole horizon localizes around a particular clump of matter, and not around other nearby clumps of matter. Far from a super-dense clump of matter you will not find an astrophysical black hole horizon (barring primordial black holes, for which there is no evidence anyway).
Cosmological horizons focus on each infinitesimally small clump of matter, everywhere in the universe, even in deep extragalactic space where matter is extremely sparse. Indeed, even matter-free points have their own cosmological horizon, and we do have substantial evidence supporting that.
A theoretical black hole horizon arises in a family of solutions of the Einstein Field Equations, from exact ones like Schwarzschild or Kerr, to solutions that become those asymptotically (in the limit as a black hole formed by gravitational collapse ages, for example -- Schwarzschild and Kerr are eternal black holes that are never in an uncollapsed state).
A cosmological horizon arises in a different family of solutions of the Einstein Field Equations, from exact ones like the de Sitter vacuum or the expanding Robertson-Walker vacuum, to solutions that become those asymptotically (these are vacuum matter-free solutions, and one can get there with a solution with matter that dilutes away over time).
The two families of solutions are very different, although there is an overlap of solutions for black holes in expanding spacetimes, where there can be both a black hole horizon (or horizons) and a cosmological one (or more than one).
We can compare these different families of solutions most strikingly using the behaviour of test particles scattered through the spacetimes: near the BH event horizon particles may be entrained into stable circular orbits around the horizon of a theoretical black hole like Schwarzschild or Kerr, whereas this never happens around a cosmological horizon in a solution like de Sitter or expanding Robertson-Walker: the test particles all plunge right through the observer's horizon radially.
Once through the BH horizon a test particle will inevitably and very quickly by its own "wristwatch" collide with the gravitational singularity after passing through extremely curved spacetime. Our galaxy cluster has already passed through many cosmological horizons centred on distant galaxies who are now outside our horizon too. Neither our galaxy cluster nor the many distant ones have changed their essentially exclusive time-like trajectories, and there is no evidence we are soon going to end up colliding with a gravitational singularity. (Current evidence doesn't suggest we are going to end up facing a big rip either).
Now, in the cosmological model under time reversal galaxy clusters do tend to converge and naively collide and collapse into a singularity eventually. However, galaxies all plunge into it radially and matter does not get entrained into any sort of accretion disc or similar structure. (They disintegrate into clouds of gas and dust that heat up into the "un-recombination" surface of ionizing atoms (mostly hydrogen) which gets denser and hotter until neutrons and protons disintegrate, electroweak symmetries emerge, and so on, all at once. Black holes usually get to swallow clumps of matter from time to time, whereas the time-reversed big bang singularity gets everything landing on it all at once, rather than some clumps early, and some clumps earlier still).
Physically realistic models built on these theoretical systems are grossly different; the models are good for predicting future things in our sky, so we expect the astrophysical reality is different too.
The rough similarities tend not to survive real inspection. Different horizons are different in a bunch of ways that aren't overcome by the few ways in which they are somewhat similar (or similar under time-reversal).
How about another lecture by Leonard Susskind? I'm just repeating his ideas! The gist of it is that the connectivity of space is related to the patterns of entanglement between particles in the quantum foam. If you were to break that entanglement, you could disconnect two regions of space.
Nothing's moving. It's like if you ran a 100m but while you were running, they decided to make it a 200m.
If you speed up, you can finish the race in the same amount of time. Light always finishes the race in the same amount of time from each observer's perspective. It can always speed up because it's massless, but you have mass and there is a limit to how much you can speed up. According to general relativity, an observer outside of the racetrack won't see light change speed, nor will you. But both of you will perceive distortions of space and time to accommodate light's "change in speed" to make it look as if nothing's changed.
Back to the cosmological horizon... We don't see a big jump in the racetrack because the distance between 100m and 200m isn't too massive; likewise, gravity and other forces hold objects together even though space is expanding and pulling them apart.
But there is a certain distance where gravity and other forces can't keep things together. Where that 1000k becomes a 10000k. The pull from one galactic cluster to another is just too weak. Anything past this point is moving away from us. That's why only galaxies close to us aren't redshifting away. But any observer anywhere in the universe would observe that they are at the epicenter of an expansion where everything is moving away from them.
Light can always "speed up" to make up for the difference in distorted space time so that it is always the same speed relative to your inertial frame. Having zero mass (zero interaction with the Higgs Field, QCD binding energy, etc) is a requirement to be able to do this.
:D Is the universe expanding, or revealing more of itself? If the universe is expanding at the speed of light, doesn't that just imply that light is reaching us from further-away places? Sorry to be off topic, but that comment gave me this brainfart.
The universe is revealing more of itself as light reaches us from farther away, but because the universe is both expanding and that expansion is accelerating, there is a maximum distance we will ever be able to see: the Hubble Distance. I don't think the universe is old enough for light to reach us from that far yet, which is why we can still see the cosmic microwave background radiation, but give it a few trillion years, and we won't even be able to see other galaxies, because they will recede from us, redshift, and disappear.
No, there's actually more space constantly being added to the fabric of spacetime. So if you take 1 cubic meter of space, eventually it will become 1.01 cubic meters, then 1.02, etc. This isn't noticeable at distances within the galaxy, but it is very noticeable when observing distant galaxies.
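Rough numbers for this, taking H0 ~ 70 km/s/Mpc as an assumed round value: the recession speed grows linearly with distance (v = H0 d), and crosses c at the Hubble distance.

    c = 299_792.458    # km/s
    H0 = 70.0          # km/s per Mpc (assumed round value)

    d_hubble = c / H0  # Mpc at which recession reaches c
    print(f"Hubble distance ~ {d_hubble:.0f} Mpc ~ "
          f"{d_hubble * 3.262e6 / 1e9:.1f} billion light years")

    for d in (10, 100, 1000, 4000):    # Mpc
        print(f"d = {d:5d} Mpc -> v = {H0 * d / c:.3f} c")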
The Average Null Energy Condition (ANEC) means there cannot be a traversable wormhole joining two otherwise disconnected regions of spacetime. So by construction, any infinite null geodesic (a light ray) which makes it through the wormhole must be chronal (causally connected). This means there can be no traveling faster than light over long distances by going through one.
So that's it, then? Some thing, say, a million light years away from another thing is in practice entirely inaccessible to civilizations existing on that other thing, requiring a minimum of a million years to reach?
This depends on what you mean by 'inaccessible'.
Going there and back again, and telling your pals how things are 1000LY away is likely impossible.
Expanding your civilization to more and more habitable celestial bodies, slowly but relatively surely, is entirely possible. This is how e.g. plants colonize vast spaces, being limited by the speed of their growth often by inches per lifetime. (You should also consider the amount of resources the plants dedicate to it, and their success rate.)
Extending human lifespan (perhaps using methods of hibernation, perhaps not) is easier technologically than basically any viable form of crewed interstellar travel.
I get what you are saying, but it seems like the plant approach is mostly useless to humans. If the people on opposite sides of a galactic civilization are so far away that they cannot meaningfully interact, then it seems like that's no different than the other person not existing from their perspective.
Depends on the definition of useless or meaningful. If meaningfulness only extends to immediate political, social, economic or technological interactions then maybe an organic colonisation seems useless, I guess.
Unlike plants, through the power of electromagnetic communications and a culture based on shared knowledge, humans will know the others exist, even though that interaction will be slow with likely little-to-no-influence on their local evolution.
There would be huge cultural meaning even despite the lack of immediacy. Imagine living in a world where you know that there are other humans like you in space, living on other worlds light-years away. You might not be able to communicate immediately, but you know that they are there, and the universe is not so empty to you. You could look to the night sky and reminisce on your great(*n) grandpronoun whose progeny have built a thriving colony among the stars. Entire institutes would exist dedicated to collating and outlining the historical expansion of mankind among the stars, even if that knowledge takes centuries or millennia to accrue.
Also, we cannot say those other humans may as well not exist, because there is no telling what each new colony will go on to achieve. Their cultural, scientific and philosophical development independent of the influence of Earth's history and challenged by new environments could yield new perspectives, ways of thinking and practical inventions whose value far outweighs the large delay in communications.
If we're talking galactic scale, then the creatures who ultimately wind up on the other side of the Milky Way two hundred thousand years from now would likely bear only a passing resemblance to the humans who expanded from Earth. However barring a catastrophic event that erases their knowledge of their history, they would owe their existence in their history books to this tiny planet.
Finally, the fear that we are the only planet with human life would be vanquished, so even if we all died from pollution or meteorite, we'd know that somewhere the legacy of our species (and selected companion lifeforms) would continue, which I don't think is useless or meaningless either.
Probably not. On the timescales of millions of years, the only thing that really matters is whether it's physically possible, and it looks physically possible to colonize other galaxies.
You don't get much time dilation until you start hitting speeds mostly exclusive to accelerated subatomic or nuclear particles. It's nearly zero until you get past 0.5 c.
If by the bright side you mean you would only make it there as pure energy: mass is converted to energy as it approaches c (light speed). Though getting mass to actual light speed would require infinite energy, so that point is moot.
Let's say you get to 96% c. You would experience 28% of the journey. Taking into account the time to speed up and slow down from c, you're looking at over 290k years from the perspective of the passengers.
At 99% c, you would experience just over 14% of the journey (about 142k years).
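Those percentages are just the reciprocal Lorentz factor. A quick check for the million-light-year trip discussed upthread (constant cruise speed, acceleration phases ignored, c = 1):

    import math

    distance_ly = 1.0e6
    for v in (0.96, 0.99):
        gamma = 1.0 / math.sqrt(1.0 - v * v)
        coord_years = distance_ly / v          # time in Earth's frame
        ship_years = coord_years / gamma       # time on board
        print(f"v = {v}c: gamma = {gamma:.2f}, "
              f"ship time ~ {ship_years / 1e3:.0f}k years "
              f"({1 / gamma:.0%} of {coord_years / 1e3:.0f}k coordinate years)")

which reproduces the ~292k and ~142k year figures above.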
Try this thought experiment. Let's say you had a magic Bussard Ramjet rocket which scoops up energy and reaction mass from space and can accelerate forever. There are no special reference frames. So what happens if the rocket accelerates then shuts off its engine at the point where the "relativistic mass," as measured by an un-accelerated observer, should make the rocket disappear behind an event horizon?
From the POV of people on the rocket, they accelerated, then stopped accelerating. From the observer's POV, the rocket turned into a black hole? One reference frame now seems "privileged" or different somehow. How do we square this with relativity? Also, what happens if the rocket turns around, then decelerates? Wouldn't that constitute them returning from inside of an event horizon?
The answer, is that "relativistic mass" is actually just a pedagogical fiction.
(EDIT: Also, a lot of the redonkulousness in the thought experiment sneakily comes from rockets that can magically accelerate without worrying about where the fuel and energy come from. If you worked out how much fuel and reaction mass would be needed by a real rocket to perform such a feat, you'd get "unphysical" amounts of matter.)
Don Lincoln's point boils down to momentum being a frame-dependent quantity even in plain old Newtonian mechanics, and the special-relativistic correction for momentum being simple. The point he is making about mass is, in essence, that one has to be careful in understanding the equivalences E = m, E = p, E = hf, E = h/\lambda (all with c = 1 here) as an overloading of "energy". In the last three of these there is a frame-dependent quantity: momentum, frequency, wavelength. In the first there is a non-frame-dependent quantity, the invariant mass m_{0}. It is not the same species as the frame-dependent quantity m_{relativistic}, and at 6m57s into the video he shows some of the dangers of "punning" them.
Your related comment says that m_{relativistic} is a pedagogical fiction. No, it is just a quantity that is non-identical to m_{0}. That students frequently forget (or do not know) that is a teaching failure.
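The distinction is easy to check numerically (a sketch with c = 1 and a one-dimensional boost): E and p are frame-dependent, while sqrt(E^2 - p^2) is not.

    import math

    def boost(E, p, v):
        # 1-D Lorentz boost of the four-momentum (E, p) into a frame moving at v
        g = 1.0 / math.sqrt(1.0 - v * v)
        return g * (E - v * p), g * (p - v * E)

    m0 = 1.0
    E, p = m0, 0.0    # particle at rest in the original frame
    for v in (0.5, 0.9, 0.99):
        Eb, pb = boost(E, p, v)
        print(f"v = {v}: E = {Eb:.3f}, p = {pb:+.3f}, "
              f"m0 = {math.sqrt(Eb * Eb - pb * pb):.3f}")

The "relativistic mass" is the growing E in each row; the invariant m_0 stays pinned at 1.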
However, you've introduced a gravitational event horizon, which is not something you can find in Special Relativity (whose spacetime is everywhere non-curved, that is, it is free of any gravitational effects at all).
If we pick out the globally flat background of Special Relativity and drop a test particle into it -- electrically neutral, low mass, and classically pointlike -- the latter generates the stress-energy tensor.
Dropping indices and factors, G=T=0 -> G=T!=0, where G is the Einstein curvature tensor and T is the stress-energy tensor.
The stress-energy tensor is generally covariant: observers anywhere in spacetime, no matter how they are moving, agree on the total value of T at the point occupied by our test object. However, they are free to disagree on the quantities in the components of the stress-energy tensor.
The components of T can be written as a 4x4 matrix representing the flux of row-momentum into the column-direction. If (row,column) specifies a component of this matrix, and we count rows and columns 0..3, and we specify that our spacetime dimensions run 0..3 with 0 being the timelike dimension t, then a positive value of T (remember, at a specific point in spacetime) in (0,0) and zeroes in all the other components means that momentum from the past of that point is flowing into the future of that point.
If at point p = (0,0,0,0), T^{00} = 1 and all the other components of T are zero, then at p' = (1,0,0,0), T^{00} = 1.
From Noether's theorem, invariance under time translation yields a conserved quantity, like the Newtonian conservation of energy.
An observer moving with our test particle generating T^{00} = 1 would relate the T^{00} quantity to the particle's rest mass (or invariant mass, if you like).
Let's call our index-1 rows and columns "x", after the Cartesian direction.
On a spacetime diagram, our test particle develops a worldline vertically along the t axis and not at all along the x axis.
Now let's complicate this a bit by having the test particle chuck a bit of itself out its back, engaging a classical notion of conservation of momentum. We'll call the 1 dimension backwards-and-forwards, or x, compared to our timelike 0 axis t. T^{01}, if positive, encodes the momentum coming from the past and leaving the point in the forwards direction.
Assuming coordinates (t,x,y,z) that absorb the emitted units:
At p = (0,0,0,0) we have T = 1, and T^{00} = 1.
Let's keep things normalized so that our particle is always at the origin of x.
At p' = (1,0,0,0) we have T = 1, and T^{00} = 0.9 and T^{01} = 0.1.
At P = (2,-1,0,0) we have the boosted exhaust: T^{01} = 0.1; at p'' = (2,0,0,0) we have the particle, T^{00} = 0.9. Thus at p''' = (3,0,0,0) we have the particle still with T^{00} = 0.9, and the exhaust at P'' = (3,-2,0,0), T^{01} = 0.1.
But notice that this picture depends on coordinate conditions: the "rocket" particle at x=0 always, tracing out a vertical worldline on a spacetime diagram. The exhaust does not remain at x=0, and so traces out a worldline that has an angle from vertical.
If we flip this so the exhaust is at x=0, then we would say that the exhaust is at P = (2,0,0,0), P'' = (3,0,0,0) etc and its |T^{00}| remains 0.1 and it traces out a vertical worldline on the spacetime diagram. The "rocket" on the other hand is at p'' = (2,1,0,0), p''' = (3,1,0,0) etc and its |T^{01}| is 0.9.
Which component(s) the stress-energy appears in depends on the choice of coordinate basis.
Likewise, if we draw a spacetime diagram with the exhaust always at x=0, it traces a vertical worldline up the t axis, while the "rocket" traces out a worldline at some angle, because it is not at a constant x coordinate.
A completely different observer holding itself at x=0,y=0,z=0 would calculate different coordinate-values for particle/rocketing-particle/exhaust (changing the coordinates of p, p', p'', ... and P, P', ...). However, it would agree that a normalized value of the whole T at p and p' is 1, but that the value of T at p'' is 0.9 while the value of T at P is 0.1. That said, the total of 1, 0.9, or 0.1 respectively could be allocated to different components of T in 4x4 matrix form.
Technically, since T is nonzero at several points in spacetime (all those p and P points) spacetime is not flat. However, in any timelike hypervolume of this nearly empty spacetime, the total stress-energy will be 1.
Even if we move the "rocket" and its exhaust apart ultra-relativistically, the sum of their stress-energies at any time will remain 1. In Newtonian terms, what's being described is a total conservation of energy, and an unchanging centre-of-mass in a deforming system. Your rocket/bussard is similar: there is a system of ship + fuel + exhaust (+ waste heat + ...) whose observer-independent (or generally covariant) total is constant at any time. Thinking in a more field-like way, there are rocket-bits, fuel-bits, exhaust-bits (etc) which generate nonzero stress-energy at various points in the spacetime. The stress energy at each point in spacetime can be agreed by all observers, however they are free to disagree about how the stress-energy at a point is distributed among the various tensor components.
In order to form an event horizon we would need a much larger quantity of stress-energy in a small region, and that is not what is described in your posting, which is more about extremizing components of the tensor (at various points) rather than the whole tensor itself.
Well, theoretically they could load their civilization up on a giant ship and move their civilization there over the course of a million years. Likelihood of success? Probably not great, but possible.
Everyone is missing the obvious solution. If we're talking about tech this fantastical, the best idea is to ditch our ephemeral biological bodies, jump onto a synthetic medium, and call it a day.
Hit pause so the journey takes a fraction of a second, or spend 1000Y in a VR - do whatever you want. I don't see a future where we're this advanced yet still content with our frighteningly fragile bodies.
Perhaps it's because building a giant ship that can carry countless generations of inhabitants many light years is really more of an engineering problem. The challenges (of which there are plenty) all have reasonably well-known solutions. There's not really all that much novel technology required. Just better tech than we have, and a lot of it, to be sure.
Uploading a consciousness into a computer and having it remain "you"? That's completely foreign to what we know today. I'm not saying it's categorically impossible; to your point, if a civilization has advanced to the point where the above ship is feasible, surely they must have picked this up along the way? Maybe, but maybe not. We don't even know if it is possible. But keep a shitload of people alive in space for a long time? Sounds plausible, even if it is enormously difficult.
If you are okay with that, you no longer need to upload anything, and your problem becomes one of imitation rather than continuity. You can dispense with the philosophical question of identity entirely and obviate the problem of interfacing with a mind.
Find a way to decompose an individual's personality into progressively more granular dimensions of behavior. Model stimulus -> response reactions as transformations. Sample enough responses from each type of behavior, make linear approximations of the transformations until you achieve a 1:1 simulacrum, and derive a basis for the response space. Your human mind will be a matrix representation of the map between the stimulus and response spaces.
As unrealistic as all of that sounds, it still sounds significantly easier to me than uploading (or even interfacing with) a mind.
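For the flavor of it, here is the linear-algebra core of that proposal in miniature (everything here is a made-up toy, and a real mind is surely not a linear map): sample stimulus/response pairs from an unknown linear transformation and recover its matrix by least squares.

    import numpy as np

    rng = np.random.default_rng(0)
    A_true = rng.normal(size=(4, 6))   # the unknown "mind": response = A @ stimulus

    X = rng.normal(size=(100, 6))      # 100 sampled stimuli
    Y = X @ A_true.T                   # observed responses (noise-free toy)

    # least-squares recovery of the map from the samples alone
    A_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print(np.allclose(A_fit.T, A_true))   # True: the samples pin down the map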
Modeling a mind as a giant stimulus->response map is a similar problem to "breaking" a block cipher by building an exhaustive input->output table, which for 256-bit blocks almost certainly requires more resources than are contained in the whole known universe.
On the other hand, while uploading minds probably has some issues, it involves replicating a thing that we can assume resides in physical space; the questions that remain are whether and how we can measure the internal structure with sufficient accuracy, and whether such measurement is nondestructive.
And in the end, the problem of somehow interfacing with a mind can mean many things, ranging from a solved issue (if typing this comment counts as "interfacing") to something that seems to me like a purely engineering issue (building new neurally attached "peripherals" for the brain, or emulating existing ones).
I can see why you're drawing a comparison between enumerating stimulus -> response correspondences and "breaking" encryption using a lookup table. But I don't agree that the two would be comparable in difficulty a priori. First, encryption schemes typically try to complicate their linear structure by design, such that inverting the sequence of linear transformations can't be done without (ostensibly) secret information. Second, encryption schemes also approximate maximal randomness, which is entirely foreign to human activity.
For a basic example off the top of my head, consider a hash function. A hash function h : {0, 1}^* --> {0, 1}^n is not actually injective (it can't be, since the domain is infinite and the codomain is finite). However, cryptographic security mandates that it should be infeasible to find a preimage message for any digest in the codomain. Moreover, messages with very small differences in the domain should be mapped to digests with very large differences in the codomain. This artificial noise and complexity doesn't resemble human reactions whatsoever; it's fairly easy to say two things which will each elicit your region-specific greeting. In general many human responses have common triggers, and I would further conjecture that you could categorically simplify this further by reducing human responses to broader equivalence classes based on language, geographical region, mood, etc.
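That avalanche property is easy to see directly (a tiny demo with Python's standard hashlib; the messages are arbitrary):

    import hashlib

    d1 = int.from_bytes(hashlib.sha256(b"hello world").digest(), "big")
    d2 = int.from_bytes(hashlib.sha256(b"hello worle").digest(), "big")  # 1 char off
    print(bin(d1 ^ d2).count("1"), "of 256 digest bits differ")   # roughly half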
This is not to say it isn't challenging. I would expect building a linear approximation of a human mind using input -> output mappings to be extraordinarily difficult. But it's not artificially difficult, like well-designed cryptography. Breaking well-designed cryptography is intended to be, in a mathematical sense, a maximally difficult endeavor - much more difficult than basically anything else you can possibly do in nature.
More to my original point, you and I can at least entertain a coherent conversation about building a matrix representation of a human mind using finite stimulus and response spaces. We're in familiar territory, even if it's not ultimately possible or feasible. It's mathematically sound to approach this, given a few well-defined assumptions.
In contrast, I don't know (nor am I confident anyone else knows) how to 1) replicate a human mind via as of yet nonexistent direct brain-interface technology, or 2) upload a human mind, using even more nonexistent technology, so as to preserve continuity of consciousness. Not only are there rampant unknown unknowns involved in the engineering efforts entailed here, there are unresolved questions and rampant philosophical disagreements in the fundamental assumptions. We're not in familiar territory here.
> how to 1) replicate a human mind via as of yet nonexistent direct brain-interface technology
I’d say this is an engineering problem, rather than #2, which is philosophical. For #1, all you really need is to measure all brain cells accurately enough, then recreate the whole thing in a simulation. It could probably be achieved with nanobots or very advanced scanners within a couple of centuries. It might be acceptable to destroy the original in the process, if that makes it more feasible.
This assumes that the entire state space of a human's mind is observable.
Some (most?) people have a rich internal world which can never be caught by looking at I/O relations.
Also, do you believe that by asking e.g. a person like Albert Einstein a reasonable amount of questions (an amount that a person can answer without getting seriously annoyed or fatigued), you would be able to reconstruct their problem solving skills? Sounds unlikely to me.
I'd push back against the idea of rich interiority personally, but ultimately I think that's more of a philosophical question. Practically speaking, since we're engaging with this idea without the requirement of continuity of consciousness, I'd argue it's potentially an unnecessary concern. If you will not continue after death, do you personally care whether your replacement has interiority? On the other hand, if you want the things you care about doing to continue getting done, you do care that the replacement is capable of everything you are and responds exactly as you would to every situation.
That being said I think your second challenge is more interesting. Albert Einstein's achievements are the product of both extreme knowledge/specialization, his experience and his personality. I think you'd likely have to have a "knowledge space" which can be mapped to the human "mind matrices." That does complicate things a fair bit, but in the abstract I think a "vanilla human" could plausibly be seeded with Einstein's personality and as much knowledge as you want.
Or we can relax the requirement that “it remains you”, and be content with the idea of cloning the mind.
How do we know that the incomprehensibly weird and advanced minds running our ships and habitats in those unimaginably distant points of space and time won't just decide that virtual smiley faces with our names written on them are close enough to "it remains you"? How do we know they won't one day do a "slightly lossy compression" of the human race?
I left it up to the imagination of the reader. It could be one of our minds after an imperfect cloning, modified by someone for "efficiency." It could be super-optimizing AIs. Tens of thousands of years from now, thousands of light-years distant, who knows what it will be like?
It seems the consensus in this subthread is that gradual replacement involves a potentially unsolvable philosophical problem, unlike cloning. How can you be sure you remain you during gradual replacement, and don't turn into some kind of a philosophical zombie?
The future you don’t see describes the present. We are (at very great, politically untenable cost, to be sure) capable of sending a ship full of humans and supplies elsewhere at a reasonable fraction of c. Our ability to do this continues to grow; although due to resource scarcity it’s not clear if efficiency gains will ever make it cheaper to do in the future than now.
We don’t even know where to start replacing our bodies. AI? Genetics? Cybernetics?
> capable of sending a ship full of humans and supplies elsewhere at a reasonable fraction of c
I am not sure of this. Even if we pooled the Earth's resources into constructing a generation ship, we still are not sure we are capable of making a ship that is sustainable both socially and technologically.
We do not know if our tech can last hundreds of years. Even if that wasn't the problem, we do not know if we can maintain a self-sustaining environment that lasts hundreds of years. Even if that wasn't the problem, we don't know what failure modes need to be accounted for. Even if that wasn't the problem, we don't know how to keep humans sane on such a journey. It is, after all, unethical to have new humans born on a journey they never signed up for. We do not have the right to determine the fate of our progeny.
An upload to a synthetic medium is perhaps the only sane and ethical way to accomplish this goal.
I mean at that point you don't have to choose - you do all those things, plus have some more of you spend a bunch of time figuring out the philosophical ramifications of bringing it all back together.
Add some check processes which periodically restore an old backup and have a conversation to see if the backup agrees to a merge as the moral monitoring system.
Or better yet, do the "Contact" thing, and just transmit information. Start with something to generate interest. Then maybe some (hopefully appreciated) knowledge and/or technology. Then instructions for building VR or robots, and data to load them with.
>So that's it, then? Some thing, say, a million light years away from another thing is in practice entirely inaccessible to civilizations existing on that other thing, requiring a minimum of a million years to reach?
"We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far." -- H.P. Lovecraft, 'Call of Cthulhu'
We're just a random species of ape too smart for its own good, only here to ponder the universe at all because an asteroid happened to wipe out the dinosaurs 65 million years ago. The universe doesn't owe us anything.
At least we have telescopes and science fiction, though. The dinosaurs didn't have either.
And suddenly people think the earth is flat again ... we need to keep science more relatable to the general population and that starts with being realistic.
I'm not arguing with your idea ... just that it should start with a caveat ... "there are no current theories that support FTL but we're hoping ..."
It's perfectly correct and does not contradict physics. Think about it in the opposite direction: say you have a perfect laser, with perfect pointing ability. You pick two planets that are far away from each other but at roughly the same distance from Earth, subtending an angle of 10 degrees as seen from here. It will take you a few seconds to "move" the laser 10 degrees and thus move the spot from one planet to the other, in the process making the spot "travel" millions of light years in a few seconds. However, the light itself has not traveled at faster-than-light speeds (it still takes many years for the light from your laser pointer to reach either of those two planets).
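For a sense of scale, here's a quick back-of-the-envelope check in Python; the ten-million-light-year distance and two-second sweep below are purely illustrative assumptions:

    import math

    c = 299_792_458.0         # speed of light, m/s
    LY = 9.4607e15            # one light year, m

    D = 1e7 * LY              # assumed distance to the two planets
    theta = math.radians(10)  # sweep angle
    t = 2.0                   # assumed time to sweep the laser, s

    v_spot = D * theta / t    # apparent speed of the spot across the sky
    print(f"spot speed ~ {v_spot / c:.1e} c")             # vastly superluminal
    print(f"light travel time ~ {D / c / 3.156e7:,.0f} years")

The spot "moves" at over 10^13 c, while every photon involved still takes ten million years to arrive.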
There are two possible interpretations here. The implication of "shadows travel faster than light" is that information about the lack of light travels faster. This interpretation is false. In your thought experiment, the information about the laser's movement still only travels at the speed of light, even if the actual movement took only a fraction of that time. So the "shadow" on Earth reacts at the speed of light to changes in the source.
However, if the idea is that a laser that is continuously lit can "sweep" a path, such that the laser dot appears to move faster than light could travel that same path... Then of course that's trivially true. The light is traveling the radius of the arc, not the length.
> It's perfectly correct and does not contradict physics.
Sure, this is correct, but I don't see how this describes anything traveling faster than light. The light spot itself is an effect of the light that's traveled to that point (at the speed of light). The spot "moving" is (physically) more of an illusion as the light now travels to an adjacent location.
Say you're standing in front of the sun and you wave your arm. As the light travels away from you, your shadow will get larger and move faster. Eventually the movement of the shadow will exceed the speed of light. In theory it will approach infinity.
The light itself will continue to travel at normal speed but if you waited for the light to bounce back from distant planets you could observe the shadow moving FTL.
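To put a number on "eventually": the shadow's edge moves at v = ωr, so it passes c at r = c/ω. A rough sketch, assuming a lazy wave of about 2 radians per second:

    c = 299_792_458.0  # speed of light, m/s
    omega = 2.0        # assumed angular speed of the waving arm, rad/s

    r_critical = c / omega  # where the sweep speed v = omega * r reaches c
    print(f"{r_critical / 1e3:,.0f} km")  # ~150,000 km, well inside the Moon's orbit

Beyond that distance the shadow's edge is superluminal, and, as above, nothing physical is actually moving that fast.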
The last wormhole conversation I had with someone, I imagined a weapon using a short-distance wormhole with the ends opposed 180 degrees, and using a star's own gravity to tear chunks out of it. Everybody dies from massive solar flares.
Of course time-travel books, wormhole researchers, and I all make the same mistake over and over: if you made a wormhole or traveled in time, why do we assume that the frame of reference of the system is our star? Sol is whirling around our galaxy at an alarming rate, and the galaxy itself is moving through the universe at a huge velocity. Why would the hole you're trying to make in space move along with our solar system?
If I traveled back to five minutes ago I'd die in hard vacuum. I'd have just enough time to realize how stupid I am. Similarly, every time I try to use the same wormhole it would be farther from where I am and the other end farther from where I want to be.
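The numbers bear this out. Using commonly quoted approximate speeds (none of these are exotic figures), a quick sketch of how far "here" moves in five minutes:

    t = 5 * 60  # five minutes, in seconds

    speeds_km_s = {
        "Earth around the Sun": 30,
        "Sun around the galaxy": 230,
        "Solar System relative to the CMB": 370,
    }

    for frame, v in speeds_km_s.items():
        print(f"{frame}: {v * t:,} km in five minutes")

Each of those distances dwarfs Earth's ~6,400 km radius, so whichever frame your wormhole or time machine fails to match, you emerge in vacuum.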
AIUI wormhole mouths don't have a direction. They're spherical.
The picture of a wormhole as a bell-shaped indentation in a sheet, like the graphic on this article, is an artifact of trying to explain 4-d concepts in 3-d shapes (in 2-d images). For a being in the 2-d sheet, the wormhole is a circularly symmetric spot of weird-shaped space. In real 3-d space, a wormhole is a spherically symmetric spot of weird-shaped space.
True. You could still use this to make two Lagrange points near the surface, mucking up everything. The fact that the hole has to be longer than the distance traveled would greatly reduce the gravitational field coming out of the hole.
You could still drop one end of the hole into the sun... (and I’ve just realized this was a Farscape plot and therefore how the idea got into my head in the first place)
Maybe this sounds a bit silly, but this is honestly one of the most mind-blowing thoughts I've read on Hacker News. Thanks, I can't believe I never thought of this myself in the past. :)
You don’t really explain how the assumption leads to the conclusion — and I’m left with the impression you used many words to say “they just assumed you can’t”.
Could you elaborate?
Edit:
Looking into energy conditions further, they are literally assumed restrictions on the equations because physicists felt some predictions were unphysical.
I’d really like if someone could explain if there’s any justification to what I was responding to beyond “well, because we assumed it should work that way”.
I think it behooves the physics community to be honest which claims are conclusions and which are their assumptions, and the specific reasoning that leads from assumption to conclusion.
I definitely get your point. One difference between theoretical physics and pure math is that, since we use math as just a tool to describe the world, we still have to input physical assumptions to make any sense of what we see. There are many instances of things being mathematically "OK" that we don't think physically exist. See "White Hole" for instance.
I will try to give a better explanation later today! Funnily enough, I am off right now to the QFT2 section that Daniel teaches, hah. I can also ask him questions in person later in the week.
Black holes are hard to notice, except by side effects, but we know a great many of them, and have even taken a picture of one.
White holes would be relatively hard to miss, because they must be shining very brightly. One of the current theories suggests that the big bang (or "a big bang") was a white hole: every black hole is a white hole producing a big bang in a parallel universe. We've already had ours, and are lucky enough to still register its echo as the CMB.
Without negative matter, it's impossible to build a wormhole that's shorter than the distance through normal space.
This piece of math shows that it may still be possible to build a wormhole which isn't shorter than that distance. Obviously that wouldn't be much of a shortcut, but the potential research and (maybe) real-world applications are still impressive.
I was asking about the particular usage of the average null energy condition as justification to rule out wormholes: why isn’t that just begging the question by assuming your conclusion? and how does that particular assumption actually lead to the conclusion there can’t be wormholes?
It’s interesting that one gets downvotes for asking someone to support a scientific claim and to be clear about where they’re making assumptions versus reaching empirical conclusions.
That's right. When you have a mathematical theory, there are often extremal cases that predict strange things that have never been observed. At this point you have a choice to make:
1) Assume the mathematical theory is too permissive, and rule out the things you have no reason to exist, and hope to find a more elegant theory (on the controversial metaphysical assumption that simpler/elegant theories are more likely to be correct)
2) Assume that the mathematical theory is pointing you in a direction to search for a new phenomenon, and build things like superconducting supercolliders to search for empirical evidence.
With wormholes, we're a bit stuck in that we are decades to centuries away from empirically testing the theories, so physically the Average Null Energy Condition is moot -- it's fine math to do, as groundwork/scaffolding for future physics, but it doesn't say anything physically until we get empirical evidence for or against it.
Okay — and just to be clear, there’s nothing problematic to me about simplifying assumptions or effective theories.
Both are important tools for making predictions tractable.
But when we lose sight of what are conclusions, what are strongly justified assumptions, and what are simplifying assumptions we don’t have justification for (or even know to be untrue), we begin to create fundamentally inaccurate models or wrongly shut down others’ avenues of inquiry.
This happens in economics and business quite often, but simplifying assumptions become orthodox truth with surprising frequency in hard sciences like physics, as well.
>To date, a major stumbling block in formulating traversable wormholes has been the need for negative energy, which seemed to be inconsistent with quantum gravity. However, Jafferis has overcome this using quantum field theory tools, calculating quantum effects similar to the Casimir effect.
Prof. Stephen Hawking explained the Casimir effect in his last book, 'Brief Answers to the Big Questions', in the chapter on time travel with wormholes.
"Imaging that you have two parallel metal plates a short distance apart. The plates act like mirrors for the virtual particles and anti-particles. This means that the region between the plates is a bit like an organ pipe and will only admit light waves of certain resonant frequencies. The result is that there are a slightly different number of vacuum fluctuations or virtual particles between the plates than there are outside them, where vacuum fluctuations can have any wavelength. The difference in number of virtual particles between the plates compared with outside the plates means that they don't exert much pressure on one side of the plates when compared with the other. There is thus a slight force pushing the pates together. This force has been measured experimentally. So, virtual particles actually exist and produce real effects.
Because there are fewer virtual particles or vacuum fluctuations between plates, they have a lower energy density than in the region outside. But the energy density of empty space far away from the plates must be zero. Otherwise it would warp space-time and the universe wouldn't be nearly flat. So the energy density in the region between the plates must be negative"
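The ideal-parallel-plate case Hawking describes has a standard closed form, P = π²ħc / (240 a⁴) for plate separation a. A quick sketch of the magnitudes (the plate gaps below are arbitrary illustrative choices):

    import math

    hbar = 1.0545718e-34  # reduced Planck constant, J*s
    c = 299_792_458.0     # speed of light, m/s

    def casimir_pressure(a):
        """Attractive pressure (Pa) between ideal parallel plates a metres apart."""
        return math.pi**2 * hbar * c / (240 * a**4)

    for a_nm in (100, 500, 1000):
        print(f"gap {a_nm:4d} nm: {casimir_pressure(a_nm * 1e-9):.3g} Pa")

At a 100 nm gap that's about 13 Pa: tiny, but measurable, which is why the effect is taken as evidence that vacuum fluctuations produce real forces.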
Whenever I think of traveling through a wormhole/black hole, I imagine you would be squished and crushed and come out the other end as pure vaporized energy.
This makes me wonder if there is the potential for our beings (or things) to be vaporized and then recreated in their original form.
If we are just mass and particles, this should be possible?
If you could 3D print things at the atomic level, then you could instantly advance the human race to a post-scarcity society. It's the holy grail of 3D printing and of course everyone wants to do it.
Your question has philosophical implications beyond just the mechanics, however.
>If you could 3D print things at the atomic level, then you could instantly advance the human race to a post-scarcity society.
You still need energy, and you still need something to print with (not just the printer, but whatever substrate provides the atoms), and the process is going to be somewhat inefficient due to thermodynamics. 3D printing a vegetable would likely still consume more resources than simply growing one in a garden. Plenty of room there for scarcity.
Uncertainty principle says...probably not. It's impossible to make a complete copy of the pattern.
Even aside from that -- it's unlikely to be practical to scan something down to the smallest level of matter, because the tools we use to manipulate things can't manipulate things smaller than their own finest details.
So there's a "glass wall" in that you can only replicate things less finely detailed than the replicator. That works fine for macroscopic things (that's why you can buy a 3D printer today), but is implausible for things that we believe to be as finely detailed as our machines.
> Uncertainty principle says...probably not. It's impossible to make a complete copy of the pattern.
That depends on what "the pattern" is. You can make an "exact" copy of a digital file, but that's because the semantics of digital data don't depend on the fine details of the physical representation (that's the whole point of going digital). We don't yet know what level of detail of our physical state actually matters. If you take a single atom in your brain and move it somewhere else, does that matter? Probably not. What about ten atoms? Again, probably not. But at some point as the number goes up the answer must change. We don't know where the boundary is, and we don't know how close it is to the theoretical and technological limits of copying. But I don't think we can rule out the possibility of cloning a human brain in principle based on current knowledge.
That's what the "Heisinberg compensators" in star trek fix. I heard this phrase in an episode once, and decided to figure out how that could even work...
What I came up with was the idea that the scanning stage doesn't need to gather exact data because it really doesn't matter. You just need to get averages, and then build up a model that matches the statistics... (you don't need to discover the exact energy and momentum of every particle, just measure the temperature)
Basically you get teleporter JPEG compression... good enough to fool you, just don't use it too many times without keeping the raw data :)
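A toy sketch of that "keep the statistics, discard the particles" idea, using an ideal gas where the only retained measurement is temperature (the gas, particle count, and seed are purely illustrative):

    import numpy as np

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    m = 6.63e-26        # kg, roughly one argon atom
    rng = np.random.default_rng(0)

    # "Original": velocities of a 300 K gas (Maxwell-Boltzmann components).
    v = rng.normal(0.0, np.sqrt(k_B * 300 / m), size=(100_000, 3))

    # "Scan": keep only one aggregate statistic, the temperature.
    T = m * np.mean(np.sum(v**2, axis=1)) / (3 * k_B)

    # "Rematerialise": a brand-new ensemble matching that statistic.
    v_copy = rng.normal(0.0, np.sqrt(k_B * T / m), size=v.shape)

    # Identical in bulk, though no individual particle matches the original.
    T_copy = m * np.mean(np.sum(v_copy**2, axis=1)) / (3 * k_B)
    print(round(T, 1), round(T_copy, 1))

Good enough to fool a thermometer, in the same way JPEG is good enough to fool an eye.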
The philosophical implications of this are the subject of one of my favorite web-comics: The Machine, from Existential Comics
http://existentialcomics.com/comic/1
What these two stories describe would surely be hell for a normal human and likely also equivalent AI consciousness.
On the other hand, if you could run even a simple computer program in such a context, it opens interesting possibilities. In "The Jaunt" they describe some of the periods lasting up to billions of years - imagine you could keep a simple algorithm (say prime factorization) running for that long, without the associated energy costs and get the results instantaneously from your point of view.
BTW, of course the people from Orion's Arm have put together something similar but actually remotely physically plausible, using a concept called the "Tipler Oracle":
https://orionsarm.com/eg-article/48507a11adbd7
I don't think you can equate the teleportation station with sleep, since sleep isn't a break in consciousness but more like a transition into a lower level of consciousness.
It's a neat demonstration of the holographic principle. From the point of view of an outside observer, there's no difference between the surface of the black hole and its volume; they're two different ways to describe the same thing, of varying usefulness. Reaching the 'surface' of the event horizon is equivalent to reaching the infinite future of the hole.
For a large enough black hole you could theoretically pass the event horizon pre-spaghettification since the radius between the singularity and the horizon would be large enough that the tidal force wouldn't be unbearable. You may have enough time to realize that you've now passed the ultimate barrier before you're eventually torn to shreds by the singularity.
Topically, Messier 87* has a Schwarzschild radius of ~17.784 light hours. That said, I have no idea how much subjective time it takes to go that far, because I don't know how much gravitational or velocity-related time dilation is going on. Plus there's the highly non-Euclidean space where C != 2πr, though I can't remember whether it's > or <, and I've never asked whether that's more like a correct circumference with a different radius, or a correct radius with a different circumference, relative to what a distant observer might expect, and by extension which of those the Schwarzschild radius corresponds to: angular appearance, or inward distance.
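Both the radius figure and the survivable-crossing point upthread are easy to sanity-check with textbook values (6.5 billion solar masses is the commonly quoted estimate for M87*):

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 299_792_458.0  # speed of light, m/s
    M_sun = 1.989e30   # kg

    M = 6.5e9 * M_sun            # M87*
    r_s = 2 * G * M / c**2       # Schwarzschild radius
    print(f"r_s ~ {r_s / (c * 3600):.1f} light hours")      # ~17.8

    # Newtonian-order tidal acceleration across a ~2 m tall body at r_s:
    print(f"tidal ~ {2 * G * M * 2.0 / r_s**3:.1e} m/s^2")  # ~5e-10, negligible

The same tidal formula at the horizon of a 10-solar-mass black hole gives around 2e8 m/s², which is why small black holes shred you long before you reach the horizon while supermassive ones let you cross intact.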
> From the point of view of something further away watching the thing falling in, it takes forever. I think. Time dilation is easy to lose track of.
This has always made me wonder that if this is true: "From the point of view of something further away watching the thing falling in, it takes forever." Then how does a black hole increase in mass from the pov of "something further away," as things fall into it? Is it just the light coming from the captured object that gets stopped in time, and the mass does get absorbed increasing the black hole's mass in a more normal time frame?
It feels almost meme-like that, from an observer's point of view, a black hole will have a bunch of stuff "stuck" on its horizon. I haven't ever seen elaborations of how it would actually look, though.
I don't think there's a definitive answer there, nor any reason to believe Hawking radiation is produced inside an event horizon, rather than just outside it. The event horizon is pretty much defined by the boundary that radiation will not escape across, so I wouldn't assume Hawking radiation behaves any differently with respect to it.
I believe you're right that the jury is still out about information loss.
However, I believe that the energy from Hawking radiation comes from the black hole itself. In that case it doesn't matter if it's inside or outside the event horizon; it's still 'leaking' energy/mass.
That reminds me a bit of the Star Trek Voyager episode where a group sends their dead from one dimension to another. Their bodies come through intact, but some form of "neural energy emissions" is added to a local energy field.
"To date, a major stumbling block in formulating traversable wormholes has been the need for negative energy, which seemed to be inconsistent with quantum gravity. "
It appears, thus, that most humans are inconsistent with quantum gravity :D:D
I would urge people interested in this topic to read the hard sci-fi work of Greg Egan, in particular Diaspora. It is fascinating how much overlap there is between this mathematical work and his work of "mathematical fiction".
It seems to me that the speed of light, like conservation of energy, is just one of those fundamental “no free lunch” rules of physics that we just plain can’t cheat.
We would know. Either the secret space program requires enough exotic materials and science that you wouldn't be able to hide it OR it doesn't require that, so it's simple enough that any one of us could stumble upon it.
Trillions of dollars vanished because humans are probably more greedy than they are curious.
Every space launch and every large object in orbit are trivially observable. Even when the NRO launches top secret spy satellites that they barely acknowledge the existence of, we know exactly when the launch is. Thanks to the cold-war risk of nuclear ICBMs, every launch is publicly announced to reduce the risk of false positives in an early warning system.
In light of this, your proposition is impossible without also supposing that the organization backing this hypothetical operation has the ability to perfectly suppress all information related to the project. Literal backyard astronomers would notice every single launch, and every single "spy satellite" that disappeared off into space if the launches were announced but the payload kept secret.
You're basically speculating that there exists an organization capable of perfectly controlling all information worldwide. If that were true, there's hardly any point in worrying about it. In such a situation, we could be living in the Matrix for all we know and there really wouldn't be anything we could do about it.
unnecessary. visual cloaking. are you watching objects in 5th gen FLIR? zero radar cross section. even USAF has acknowledged craft you can't trivially observe. your point is rubbish.
This way of talking (using very short nominal sentences, citing many more or less related things next to one another) seems very common among conspiracy theorists. I genuinely wonder where that comes from, and whether it's a conscious choice or some memetic imitation.
associating me with conspiracy theorists by my way of talking. points for a very well formed insult. I'm impressed by how much you got into a small amount of space: criticize my way of talking, associate me with conspiracy theorists, suggest that I'm behaving unconsciously or mimetically imitating others in a robotic fashion. and all without making it look like an overt insult, giving you pre-emptive cover in case someone accuses you of insulting them. really quite brilliant.
now what could have justified this expenditure of effort on your part towards me? and what sort of experiences in your life would have driven you to develop the skills to become an expert at crafting these sort of subtle insults towards others?
those to me are the really interesting things to think about. are you following me better now? is my phrasing now more in sync with your brain?
on the face of it, if you have a problem with my way of talking, that's your problem not mine. but please I don't think you're being very genuine, I think the real reason for your comment is yet to be discovered. let me think some more about it.
okay I think you were just having a bad day and were really angry and wanted someone to project onto and take it out on. sorry I don't accept your offer. you keep your fury it's yours.
okay the next interesting point is conspiracy theory. I mean are you scared of that term? do you think it's a term of abuse? see because we're making theories about conspiracies. what's wrong with that? I guess you're just deluded by the social programming into thinking that term conspiracy theory is a shaming word.
well that's just blinkers for your mind to stop you thinking dangerous thoughts. maybe you need that kind of thing. but me I'm okay with my mind. it's not my problem if you're not okay with yours and you need someone else to decide what you're allowed to think. who's mimetically imitating now, robot? hope your today is better than the day you commented.
You seem to be assigning the capabilities of stealth aircraft to space launches. The energies involved are radically different. There is absolutely no evidence whatsoever that anyone has anything even approaching the technology needed to conduct an undetectable orbital launch. Again, I'd refer to the well-publicized launches of top secret NRO payloads. If the US government could launch those payloads without telling anyone, wouldn't they do it?
You've officially gone from leaning over the ledge to taking a flying leap into crazy land in my book.
orbital launch takes so much energy because you have to overcome gravity. recorded maneuvers of these craft turning at thousands of miles per hour doing right angles indicate they don't have to care about gravity. so getting out of Earth's orbit shouldn't take so much energy. besides, there's never any heat signature or obvious propulsion system shown, so no, it wouldn't show some trace if leaving. actually it's pretty crazy to convince yourself that something's crazy just because you don't want to think about it. you could just admit that the thought scares you and then start dealing with your feelings; that would be the sane thing to do.
Your points are good, but I think there's another possibility, besides "too big to hide" and "too easy not to invent".
What if there is a very big SSP, but secrecy is extremely important (natsec, earth security, competitive advantage, greed, power), and all kinds of measures at every stage were taken to preserve it? National security oaths, unacknowledged compartmentalized programs, with black budget funding, research done in underground facilities, memory erasure, permanent off-planet postings? To me that is conceivable. You would have two systems, a public system of earth government, and a parallel system devoted to preserving the SSP and off-world colonies. I think it could be done. Couple that with a strong PR / propaganda / psyop / disinformation game to discredit leakers, mimic real disclosures in pop culture media, and spread fake narratives, and I think it's like "magic" as in the "art of deception". I think mass psychology could be effectively used to "plug any holes" that the system might have, and over time, with secrecy as an "equal first" priority, I believe they could succeed. Is that not conceivable to you in some way, where do you get stuck?
So that's "big and hidden", but not too big to hide.
Secondly, I propose "inventable but not publishable": what if it's somewhat easy to invent the basic technologies (electrogravitics, zero point energy), but owing to secrecy and economic concerns, these technologies have a high "activation energy hump" to get over before they can actually proliferate in the world? Combine patent suppression, raids on technology to seize and destroy it, and intimidation and ensnarement of inventors, and I think you'd be able to keep a lid on it for probably about 100 years, which is what we've seen.
Another way to think about the second point is this: clandestine nuclear devices (dirty bombs or actual fusion/fission) should be possible for non-state actors to deploy, but it's never happened. Surely there's a coordinated effort to suppress this tech and prevent its creation. Now what if SSP tech was an even higher priority, and approximately equally as difficult to invent/create? If world governments and corporations could prevent a nuclear terrorist, could they not prevent rogue proliferation of zero gravity or free energy tech, especially if they consider that more important?
Again, which part of these arguments is sticking for you? I'm fine if you don't want to engage; I know that even in 2019 there's still a lot of shame and fear attached to these topics, but I see that changing: a lot of whistleblowers, a lot of intelligent, well-researched youtubers (edge of wonder, secureteam10, bright insight), and officially sanctioned evidence (the USS Nimitz tic-tac video).
I mean, maybe the whole narrative is just a massive psyop or mass delusion, but there's likely something here that's plausible to you.
Though, to be fair: there have been "leaks" and sightings of unexplained aircraft. We just chalk them up to unreliable narrators and conspiracy theories. Leaks have to happen, but equally, we have to believe them for them to be recognized as leaks.
if it leaks so easily, why don't we have a clear picture of the trillions? but if it doesn't leak at all, why all this info? the Pentagon fastwalker and tic-tac videos. and whistleblowers: Corey Goode, Emery Smith. I mean, some of these people are probably limited hangouts, or shady, but there's edge of wonder, which is the most obviously intelligence-agency-backed disclosure project I've seen. unacknowledged, beyond majestic, chart-topping docos.