I really like that it looks like the initial evidence was a discovery made by accident:
“The key to this discovery has been the talented, multi-disciplinary team that NASA Glenn assembled to investigate temperature anomalies and material transmutations that had been observed with highly deuterated metals,”
meaning that they had noticed weird shit happening in deuterated metals from some other experiment and had the opportunity to investigate said weird shit in detail.
I'm glad that they were able to convince management to look into it rather than setting it aside and letting it be forgotten. I don't like how strictly top-down directed research can force us to focus on what was in the initial proposal; we let go of potential new discoveries because they want us to follow the script.
In typical HN fashion, people are ignoring the actual article and instead debating conspiracy theories about cold fusion and COVID.
This is not cold fusion, folks. They're bombarding with 2.5 MeV photons. If you had a material made up of particles with that average energy, it'd be at roughly 29 billion kelvin.
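For the curious, that temperature figure is just the energy-to-temperature conversion T = E / k_B, a rough sketch that treats 2.5 MeV as the per-particle thermal energy (using <E> = (3/2) k_B T instead would lower it by a factor of 1.5):

```python
# Convert a per-particle energy of 2.5 MeV into an equivalent temperature.
E_eV = 2.5e6                 # photon energy in electronvolts
k_B = 8.617333262e-5         # Boltzmann constant in eV/K
T = E_eV / k_B               # T = E / k_B
print(f"T ~ {T:.2e} K")      # ~2.9e10 K, i.e. ~29 billion kelvin
```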
I think they saw "deuterium atoms in a metal lattice" and associated that with one of the original hypotheses about cold fusion. But this fusion is anything but cold:
A metal such as erbium is “deuterated” or loaded with deuterium atoms, “deuterons,” packing the fuel a billion times denser than in magnetic confinement (tokamak) fusion reactors. In the new method, a neutron source “heats” or accelerates deuterons sufficiently such that when colliding with a neighboring deuteron it causes D-D fusion reactions.
For anyone with experience in nuclear physics/fusion: how big a deal is this? I know that hard-tech findings tend to get, well, "selectively" presented to the public (I see it all the time in biotech), so I'm curious whether this is actually a big deal. As in, does this phenomenon (perhaps not in this form, but using it as a starting point) actually have the potential that people imagine when they think "cold fusion" or "easy fusion power"? Or is there something missing in the reporting?
I got my undergrad in Engineering Physics from CU-Boulder and talked with many professors about fusion research (plus keeping up with developments since then).
Putting aside all the controversy of LENR (low-energy nuclear reactions, the official name for cold fusion) and assuming that the theory actually results in usable tech (for once), the first line of the NASA article hints at where a device's power density would be competitive:
> "A team of NASA researchers seeking a new energy source for deep-space exploration missions"
which tells me that a theoretical device would be a replacement for current RTGs [1]. Low but consistent power for niche applications.
But in general I wouldn't get your hopes up. The higher-energy types of fusion power are far more promising for world-wide civilization-powering clean energy.
The part that I'm not grasping about this is how you harness the power that is created. I think it's great that they can start a fusion reaction this way. But how does this become practical, in theory? Does the lattice radiate heat once fusion starts or something?
It appears to be the same mechanism as neutronic high-energy fusion. An energetic neutron gets kicked out, which collides with some material in the cell (probably the erbium lattice), and generates heat. Which then needs to be hooked up to a water boiler to create steam, which powers a turbine, etc.
I'm much more hopeful for someone creating a Dense Plasma Focus device with aneutronic hydrogen-boron (pB11) fuel because the reaction energy can be directly captured as electricity, instead of having to capture hot neutrons to boil water.
Fusion products have particle energies on the order of MeV to tens of MeV. Thermal machines tend to melt or evaporate when their average particle energy gets to the order of hundreds of meV.
The increase in entropy on that temperature conversion alone is absurdly wasteful.
My understanding is that they are talking about bulk heating of a metal carrier, probably by tens to low hundreds of degrees for power production. Not every particle in the medium is at MeV energies; otherwise it would be at tens of billions of kelvin.
In direct conversion, all of the particles used in the conversion have MeV energies. It is exactly that bulk heating that is the problem.
When you get a very low-entropy source and your first step in using it consists of increasing the entropy by a factor of a billion, you lose a lot of flexibility and efficiency.
Nuclear fusion is pretty much the exact opposite of a low entropy source. In any case, thermal power plants (gas, coal or nuclear) reach up to 50% efficiency with modern gas or steam turbines.
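For context on those efficiency numbers, the ceiling for any thermal plant is the Carnot limit, eta = 1 - T_cold / T_hot. A minimal sketch with illustrative operating temperatures (my assumptions, not sourced plant data):

```python
# Carnot efficiency bound for a heat engine: eta = 1 - T_cold / T_hot.
# The temperatures below are rough illustrative assumptions.
def carnot_limit(t_hot_K: float, t_cold_K: float) -> float:
    return 1.0 - t_cold_K / t_hot_K

print(carnot_limit(850.0, 300.0))    # steam-cycle-like temperatures: ~0.65 ideal bound
print(carnot_limit(1700.0, 300.0))   # gas-turbine-like temperatures: ~0.82 ideal bound
```

Real plants land well below these ideal bounds, which is roughly where the ~50% figure comes from.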
Basically, this isn't really what people have historically thought of as cold fusion, because you are hitting the deuterons with photons that have high enough energy that you would expect to see some neutrons anyway (and making those high-energy photons costs a whole lot of energy). The big thing in the paper is that they are seeing higher-energy neutrons than you would expect. The 2.5 MeV neutrons are pretty normal for whacking deuterons around; the question is why they are getting 5 MeV neutrons from this reaction.
I don't have time today to dig into the theory they have that predicts the high-energy neutrons. If there is a benefit from this experiment, it would be in validating that theory. Perhaps the theory could point a way toward lattices helping fusion along, in a similar way to what Fleischmann and Pons thought. However, that is a long step from this experiment, and it is possible that the high-energy neutrons have some cause other than the lattice screening effect.
There's a rule in physics that "the impossible doesn't happen very often". What's more likely, a stunning unexpected discovery, or a subtle experimental error?
> There's a rule in physics that "the impossible doesn't happen very often". What's more likely, a stunning unexpected discovery, or a subtle experimental error?
You're implying that a subtle error would be sufficient to explain the observations, which doesn't seem to be the case here.
Gross experimental or interpretation errors are a possibility, of course, but are correspondingly less likely.
That's the thing about subtle errors. They never seem to be the case, even when they are the case. Remember the superluminal neutrino claim a few years back? There are endless ways an experiment can go awry and present misleading results.
Here, there's going to be a very large background (of neutrons and photons) from the process they are using. I wonder if they didn't handle background subtraction quite right.
I would love for this to work as an energy source. The usual limitation seems to be that only a tiny minority of sites in the lattice have the right geometry to enable fusion, perhaps because they're under strain from a nearby defect. When fusion events happen, they're likely to rearrange several nearby atoms, so you run out of the special lattice locations after a while.
You can, of course, melt and re-cast the metal and hope to have a new crop of active lattice points, and it becomes a question of how much energy in vs. energy out.
If increasing the number of places in the lattice where fusion can occur is the problem, it would really just be a matter of experimenting with different techniques for further saturating the lattice. The article seems to indicate that these results give them a clear way forward.
I took the parent comment to mean that the lattice would essentially be one-time use, which to me doesn't seem a problem as long as you can get sufficient energy out of it to make up for 1) using an x-ray beam to ignite the reaction and 2) the energy input necessary to create the usable lattice.
That sounds more like an engineering problem. As long as you can reliably produce grains or pellets of the metal with enough lattice locations to produce a net positive amount of energy then the problem becomes simply how do you move those grains through the x-ray beam at the required rate while capturing the energy. Perhaps suspending them in a carrier fluid that could also act as a medium for heat transfer.
No worries. You just let the reaction run hot enough that it melts down the lattice. When it cools, you'll have a new crop of active lattice points. (I'm kidding... I think.)
I dropped out of my Masters because my prof basically laughed me out of the room for researching cold fusion/LENR; he said it was a waste of time. Good to see unexplained anomalies in science like this make their way back into the mainstream. We need this kind of science more than ever, no matter how 'crazy' it sounds. Crazy is whatever the hell we were doing the last few decades.
I'm sorry to hear about your experience (rather insensitive prof.), but if you had tackled cold fusion for a Masters, it's unlikely that you would have finished your degree in 2, 5, or even 10 years. This topic is not suited for a Masters-level thesis, where the primary goal is to build research skills while giving you a taste of tackling a real problem in a defined, manageable form.
Does cold fusion not have sub-problems that could be tackled at a master’s level? I don’t think OP was necessarily saying that they were proposing their master’s would be about solving cold fusion completely.
It could be theoretical. The high baseline effort and cost to run even a simple experiment would probably preclude empirical work. And the theoretical area is probably saturated by now, so it will be difficult to carve out a niche.
I feel like you could say that about any discipline any Masters student would embark on, just because the state of the art is extremely advanced, no? For example, if you're a chemical engineer focusing on batteries, you could say simple experiments are out of reach too, because "simple" has advanced to a point that's extremely complex and expensive. I haven't done a Masters, so I don't actually know.
Oh, no, I don't think so. High-energy physics has a high cost barrier to entry. Even a simple experiment requires a complex apparatus, e.g. an accelerator, an ultra-high vacuum chamber, laser cooling, Bose-Einstein condensates, etc. It's much cheaper to run experiments in chemistry; batteries on the whole are not terribly complicated. We're talking about going from hundreds of thousands of dollars down to mere thousands of dollars.
You could go a step further and talk about computer algorithms or mathematics in general. The only costs there are a pen, a piece of paper, and your own time. Some experiments and analyses cost more than others; it's just the nature of things.
My point is that when you get sufficiently advanced, you sometimes require access to resources that aren't just pen and paper. Sure, some fields of math may still be doable that way, but it wouldn't surprise me if some require supercomputer-scale access, or distributed algorithms written by Masters and PhD students at massive cloud/internet providers that rely on those large networks to run an experiment. So even in the CS domain I expect there are Masters theses that aren't cheap to reproduce.
Similarly, if you're trying to get a novel battery chemistry to outperform a Tesla car battery or generate a completely novel solar cell, you're not going to be able to accomplish that as an IC researcher. As you point out, there could easily be interim projects along the way to identify various interesting properties that might be useful to your long term goal which aligns with my original statement. When that's out of scope you partner with institutions with access to those resources whether those are big corporations, research institutes, or particle accelerators.
Batteries are one I happen to know something about, a startup I worked for hosted $major-university's monthly battery lecture series, we were doing a SaaS business targeted at the vertical. I met a dozen or so grad students doing exactly this.
So, nope, bad example. Taking an existing process and giving it a few tweaks will push it around in parameter space, improving something we care about (number of cycles before 80%, let's say), often at the expense of something else, like you can't apply as many coulombs. Congratulations, you've got a thesis. And there are dozens of basic battery processes.
To clarify it wasn't for my thesis but for a nuclear engineering course where we could research whatever. I know not all profs are the same, but generally I do think there is a bit of a herd mentality in academia. My point is that there have been roadblocks for LENR researchers in academia for quite some time, and I hope we can finally remove those.
Most definitely concur with the herd mentality. It comes from the funding model, which is admittedly rather broken. Unique or off-the-wall ideas often don't receive funding, and once you've been funded for a mainstream idea, it's hard to advocate for the unique idea without somehow discrediting your proposal or the time you've put in to date. It's a vicious Catch-22. Most researchers start out with unique ideas, then it gets drummed out of them.

My area is computer vision, and I see machine learning eating everything, so I can empathize. There seems to be no effort any more in translating a physical phenomenon into a quantitative, deterministic model, and that's a shame. Neural networks are not an accurate representation of what our eyes and brains are doing. But they've produced more promising results for the time being, and so we've entered a cycle of funding that promotes machine learning over other approaches. There is usually recognition of this fact, and small pools of money continue to exist for off-the-wall ideas, so the rest of us can make do until the phase passes. Science moves in fits and starts.
Research in high-energy physics (cold fusion is a sub-category) is difficult to cobble together on small budgets, however. That's why you see profs and labs banding together to raise sufficient resources for just a couple of experiments. And unfortunately for LENR, a great deal of money was spent from the 50s to the 70s with no appreciable outcomes, so the funding bodies have become jaded and cynical about continuing to fund further research. The area will likely see a resurgence once those board members retire and bright-eyed folks revisit the field.
Once a topic is associated with something fringe or woo-woo, it becomes extremely difficult to discuss it rationally.
Try bringing up the possibility that at least some small fraction of UFOs actually are extraterrestrial. We have a decent number of compelling radar/visual encounters that suggest something artificial with maneuvering and propulsion capabilities beyond anything we know how to achieve. They could be illusions, unknown natural phenomena, hoaxes, classified tech, or equipment failures, but they are anomalies nonetheless.
The Fermi paradox really is a paradox, especially now that we've found planets in habitable zones orbiting stars within "reasonable" (less than a thousand years) distance if one were traveling at a meaningful fraction of the speed of light. One possible solution to the paradox that is always left off the table is "they are in fact here, they're just not making overt contact." There are numerous rational reasons that an intelligence would choose not to "land on the White House lawn" from planetary protection protocols akin to ours to self-interested concerns about triggering a violent response from the demonstrably violent and rapidly technologically advancing inhabitants of Earth. We'd be no threat to them now, but give us another few hundred years and we could be capable of e.g. sending a relativistic impactor their way.
But nope... the fact that the UFO topic is linked to fringe, new age, and wacky stuff means that it can't be discussed and must be absolutely dismissed. Also means that if someone wishes to discredit a topic, all they have to do is get new agey woo woo types to start talking about it.
This is just how humans think. We use ideas as signifiers of group membership and apply primate in-group / out-group behavior to them. Scientists are human, so science is not immune. IMHO this tendency is one of our species' greatest weaknesses.
I agree with much else said here about the failure -> crank -> taboo problem, but it bears noting that the Fermi paradox is not really a paradox. There is this https://arxiv.org/abs/1806.02404 . The 2 minute layman's version is that multiplying several to many very uncertain numbers "drives your error bars to the fringes". So, there is not some credible, high probability estimate that we are not (EDIT: locally) alone. Without that estimate there is no paradox.
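The paper's "error bars driven to the fringes" point can be sketched numerically: multiply a handful of factors that are each log-uniformly uncertain over a few orders of magnitude and look at the spread of the product. The ranges below are invented purely for illustration, not taken from the paper:

```python
import math
import random

random.seed(0)

def log_uniform(lo: float, hi: float) -> float:
    """Sample uniformly in log10-space between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

# Four Drake-style factors, each uncertain over 2-6 orders of magnitude.
# The ranges are made up for illustration.
factor_ranges = [(1e-3, 1.0), (1e-6, 1.0), (1e-4, 1.0), (1e-2, 1.0)]

samples = sorted(
    math.prod(log_uniform(lo, hi) for lo, hi in factor_ranges)
    for _ in range(100_000)
)
p05, p95 = samples[5_000], samples[95_000]
print(f"90% of the mass spans {math.log10(p95 / p05):.1f} orders of magnitude")
```

With the uncertainty that wide, a point estimate of the product tells you almost nothing, which is the paper's argument against treating the Fermi question as a paradox.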
> But nope... the fact that the UFO topic is linked to fringe, new age, and wacky stuff means that it can't be discussed and must be absolutely dismissed.
I think it's more that the recent claims about the UFOs (moving at hypersonic speeds but leaving no emissions nor heating the atmosphere) are so implausible that they don't fit our current understanding of physics.
I thought several of these did show infrared heating? Or maybe I'm thinking of a different case than you.
In any case a lack of heat is problematic. Even if they're using some unknown means of propulsion, the second law of thermodynamics is absolutely settled bedrock science. They must dump waste heat somewhere.
> We have a decent number of compelling radar/visual encounters that suggest something artificial with maneuvering and propulsion capabilities
No we don't. We have radar readings that are anomalous, and that could be aliens ... or dozens of other mundane explanations that are each more likely.
The other problem with something like this at a Masters level is that you run a big risk of making no progress, even if you tackle some kind of sub-problem. When that happens it becomes hard to get your degree.
Better to get the degree first, then start poking around the fringes.
Science is functioning as a religion. It aids the mainstream state system in maintaining social stability, and it informs people's beliefs about what is feasible and possible to do and what isn't.
It even answers the question "how to be happy?" (there are many "science-based" modern approaches sold in self-help books and the like)
Modern science is basically a religion. There are even some "atheists" going around trying to get "religious people" to think critically and give up their superstitious beliefs.
Granted, it's a more sophisticated social construction than old-world religion, but I base this claim on their role and function.
"... is known to be pseudoscience" could be correct, until it is not. Dismissing someone's research interests is not just rude; it is also the attitude that led to this individual dropping out of academia altogether.
This is the sort of response the doctor who suggested washing hands before surgery received, and it is not particularly useful.
No, you don't get it. "Cold fusion" is akin to dismissing the recommendation to wash hands before surgery, because the experiments promoted by the proponents were repeatedly shown not to give the results they claim.
To pick one of many, many examples. There are errors in both directions (dismissing correct ideas, accepting pseudoscience), and overly simplistically suggesting all iconoclasts should be entertained is probably unreasonable.
I think there is a big difference in what you steer your students to do, and what you do as an established scientist.
A Masters level education in Physics is not the time to be going off the beaten track; you still have 5 years or more before you even understand the territory enough to offer corrections on the map.
Replace “cold fusion” with “homeopathy”, do you still think your statement applies?
I think cold fusion is closer to homeopathy than stellar fusion.
That may be true, and maybe the professor was right to discourage that track. A key difference I'd point out, however, is that one is more universally accepted among researchers as pseudoscience than the other.
For instance, the first sentence of the wikipedia entry for homeopathy: "Homeopathy or homoeopathy is a pseudoscientific system of alternative medicine."
First sentence for cold fusion: "Cold fusion is a hypothesized type of nuclear reaction that would occur at, or near, room temperature."
The failures of Stanley Pons and Martin Fleischmann, or the fallout of that case, do not in my mind constitute pseudoscience. Skepticism notwithstanding, it is an actual area of research still funded by universities around the world - from wikipedia, as recently as 2015 "the Indian multidisciplinary journal Current Science published a special section devoted entirely to cold fusion related papers. https://web.archive.org/web/20170805185756/http://www.curren..."
I personally think it's more interesting than homeopathy or linking vaccines to autism. I think the research has yielded more tangible real world benefits, such as improvements to the sophistication of calorimeters. Furthermore, I think lack of reproducibility is not the same as proof of its impossibility or that it is pseudoscience.
Maybe it's my personal longing for a future with cold fusion speaking - and I don't think it's a good career move to focus on it - but I don't like seeing it dismissed in the same pile of detritus as homeopathy.
Fair, perhaps the comparison with homeopathy was overly harsh. I'd certainly not say they were at exactly the same point on the spectrum of "hard science <> pseudoscience". My intention with that point was to highlight that there _is_ a point where professors do need to discourage students, and the rate/vigor of discouragement should be proportional to the current priors on "likelihood of being junk".
I'd happily retract that point and stick to the point around the professor's duty to keep their students from falling into intellectual quicksand or other impediments; I think that's the more important one anyway.
And I certainly agree that cold fusion would be revolutionary if it turns out to be physically possible. However based on my understanding, there's a solid body of nuclear physics -- both theory and experiment -- that show that this process is many orders of magnitude away from being activatable at room temperature. So I'd personally rather fund modern fission, hot fusion, and renewables as significant research targets.
This is absolutely not correct. It's correct to say that Fleischmann and Pons shat the bed for everybody when their university's PR office put out a press release before a paper had been accepted.
It's not correct to say that all research into cold fusion (LENR, as it's called now) is pseudoscience.
However, were I a professor, anybody who proposed LENR research to me would get a lecture: hey, there may be some interesting science in there, but you're gonna have to work harder than anybody else in the room to get people to trust your results. Some scientists don't mind signing up for spending 30 years proving their case, others want faster results, in which case you should find areas of research that are more likely to lead to accepted publications.
It's like you didn't even read the article. There's also an actual form of cold fusion using muons. The only problem is that muons take more energy to create than they help produce through fusion. If you had a free muon source (like the sun), you could potentially create a space-based fusion reactor operating at room temperature.
The sun is absolutely a muon source. Almost every muon that hits Earth's surface is produced when cosmic rays strike the upper atmosphere, creating pions that decay into muons. (1) The average flux at sea level is about 1 muon per cm^2 per minute. (2) Muons only live a few microseconds, but this is plenty of time to catalyze fusion reactions. The two problems with muon-catalyzed fusion are the energy cost of producing muons with current technology and the tendency of muons to stick to alpha particles. Both limit the efficiency of the technique. (3)
Sure, the particle showers that cosmic rays create in the atmosphere contain some muons. Still the sun itself does not emit muons, and even if it did they would decay long before reaching earth (or a satellite).
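The decay half of that argument is easy to sanity-check with textbook constants. Even granting an absurdly generous Lorentz factor (an assumption for illustration, corresponding to a ~100 TeV muon), a muon has essentially no chance of surviving the Sun-Earth trip:

```python
import math

c = 3.0e8          # speed of light, m/s
tau = 2.2e-6       # muon proper lifetime, s
gamma = 1.0e6      # assumed Lorentz factor (very generous)

decay_length = gamma * c * tau     # mean lab-frame decay length, ~6.6e8 m
sun_earth = 1.5e11                 # Sun-Earth distance, m
n_lifetimes = sun_earth / decay_length
print(f"~{n_lifetimes:.0f} decay lengths; survival fraction ~ exp(-{n_lifetimes:.0f})")
```

exp(-227) is zero for all practical purposes, so any muons doing catalysis near Earth must be made locally, in the atmosphere or in an accelerator.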
I wish the education system produced more people like you, rather than trying to force them to conform. Conformism hurts science specifically because, to make a new discovery, you must believe something to be true that everyone else disagrees with you about. A lot of currently accepted scientific ideas were heretical when first proposed, and laughed out of the conversation.
If you want to see an example of this happening right now, look at how it's impossible to publish anything exploring a lab-origin hypothesis for Covid-19 in a peer-reviewed journal. Somehow that became heresy among virologists. Now maybe it has a natural origin, but you don't determine that by shutting down the conversation.
If you stop and think for a second, one must admit it's a rather large coincidence that this disease began in the same city as the world's pre-eminent lab not just studying SARS coronaviruses, but actually conducting gain-of-function experiments with them. Perhaps that's just a coincidence, but knowing nothing else, a priori your assumption must be to favor the lab-origin hypothesis. Now that the seafood-market origin hypothesis has been found unlikely, the case grows stronger for a lab origin. But you still can't publish a paper making that case.
Comparing that to a twice peer-reviewed paper from NASA is insulting, and your captatio benevolentiae will not trick the reader into considering them to be on the same level or tied to similar fundamental issues.
I don't intend to implicitly compare that paper to the work done by NASA. Two completely separate subjects. And I specifically pointed out that it is not peer reviewed, how can you peer review something if no journal would consider it in the first place?
The paper itself may turn out to be seriously flawed as your link suggests. But I think it's important to have the conversation.
Then I am sorry; I assumed you were trying to diverge from the main topic by comparing the struggle to advance in not-well-established scientific fields with a baseless paper made probably for political reasons. My bad.
> look at how it's impossible to publish anything exploring a lab-origin hypothesis for Covid-19 in a peer-reviewed journal...
How do we "look at" that?
An absence of peer-reviewed articles in reputable journals could be an indicator that this is impossible due to a big conspiracy to hide the truth, but it's also exactly what you'd hope for if there is no good evidence for that hypothesis.
Yes, that could be the reason. The authors of the paper I linked specifically call out censorship by journals, but maybe it's just them left with that impression. They did give citations[1],[2] to backup that claim.
I don't think there is some grand conspiracy going on, just that the experts decided early on that it was a natural origin, completely unrelated to the lab (without any evidence, I might point out), and now there's a very real and all-too-human tendency to dismiss anything else as quackery. Whether or not this is actually the case here is debatable, but we know it's all too common in science and elsewhere.
[1] Segreto, R. & Deigin, Y. Is considering a genetic manipulation origin for SARS CoV 2 a conspiracy theory that must be censored?
Preprint (Researchgate)
You misunderstand me. Lab origin does not imply engineered. That's a possibility, of course, but far more likely is it is a natural virus, possibly altered in experiments, where the mechanism can be a form of selection, not necessarily engineering of any kind.
Lab origin, to define the term clearly here, means a virus, natural or otherwise, that escaped a lab, likely by accident.
It seems, just from the coincidental location of the emergence of SARS-CoV-2 in Wuhan of all places, to be the most likely origin scenario from a probability standpoint.
Demonstrating that - that it's a natural virus released (accidentally or intentionally) from a lab - would be the realm of an intelligence agency or police forensics, not a genetic analysis of the virus like the debunked article linked upthread.
I won't grant that. If you were to find a close enough genetic match to a strain known to be in a lab, that would indeed constitute strong evidence.
Now the "debunked article" cites 89% similarity to a published strain, but that's hardly a smoking gun. That's not to say genetic analysis cannot constitute proof, however.
I think the problem is, if there ever was such a strain in the Wuhan lab or elsewhere, it's very likely that the evidence had been intentionally destroyed by now, making it unlikely we'll ever know the truth.
It's not like you can prove it didn't leak from a lab either, at least not unless you're lucky enough to find a very close match in nature somewhere.
> If you were to find a close enough genetic match to a strain known to be in a lab, that would indeed constitute strong evidence.
In two possible directions. You'd still need additional evidence demonstrating it was lab --> nature, not nature --> lab.
> Now the "debunked article" cites 89% similarity to a published strain, but that's hardly a smoking gun.
Right, and virologists say citing 89% similarity is bogus, as that's actually substantial difference in genomes. It's evidence in the opposite direction asserted by the paper.
> In two possible directions. You'd still need additional evidence demonstrating it was lab --> nature, not nature --> lab.
Yes.
> Right, and virologists say citing 89% similarity is bogus, as that's actually substantial difference in genomes.
The same virologists that were quick to say 96% similarity to the alleged previously discovered bat coronavirus named RaTG13 is significant? Is 89 to 96% such a big leap, or are they trying to have it both ways?
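For scale, here's the raw-nucleotide arithmetic behind those two percentages, assuming a genome of roughly 30,000 nucleotides (a round illustrative figure for SARS-CoV-2-sized genomes):

```python
# Mismatch counts implied by whole-genome similarity percentages,
# assuming a ~30,000-nucleotide genome (illustrative round number).
genome_length = 30_000

for similarity in (0.96, 0.89):
    mismatches = round(genome_length * (1 - similarity))
    print(f"{similarity:.0%} similar -> ~{mismatches} differing nucleotides")
```

So 89% vs 96% is the difference between roughly 3,300 and 1,200 differing positions; whether either count is "close" is exactly the judgment call being argued over.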
Considering they have to pump in x-rays, does this even stand a theoretical chance of being net energy positive? Remember that fusion can be achieved in a benchtop experiment[0] without being useful for energy production. Could the neutrons be provided by fission instead of x-rays?
Link to paper is at the bottom of the article, the main text links to a paywalled one for some reason.
> Considering they have to pump in x-rays, does this even stand a theoretical chance of being net energy positive?
No. The cross section for photodisintegration of a deuteron is much smaller than the cross section for loss to electrons. So even if 100% of the neutrons produced lead to fusion (which I extremely highly doubt) it could not be anywhere close to breakeven.
The only way this might work would be if one could accelerate the deuterons directly and beam them at a deuterated target. But that's been done for 80+ years, so don't suddenly expect those results (that it cannot be close to breakeven either) to change.
Only about 3% of the energy of a fission reactor comes out as neutrons, so even if one stipulates these would cause fusion efficiently the energy multiplication would be small. One might as well just stick with fission.
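That ~3% figure checks out on the back of an envelope, using approximate textbook values for U-235 thermal fission (the specific numbers below are round approximations, not from the article):

```python
# Fraction of fission energy carried away by prompt neutrons.
# Values are approximate textbook numbers for U-235 thermal fission.
energy_per_fission_MeV = 200.0    # total recoverable energy per fission
neutrons_per_fission = 2.4        # average prompt neutrons per fission
avg_neutron_energy_MeV = 2.0      # mean prompt-neutron kinetic energy

fraction = neutrons_per_fission * avg_neutron_energy_MeV / energy_per_fission_MeV
print(f"{fraction:.1%}")          # a few percent, consistent with "about 3%"
```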
While this is all well and good: why is everyone chasing extremely complex magnetic-confinement fusion instead of just drilling a deep hole, dropping a stripped-down H-bomb in, and harvesting the (enormous) heat energy released via geothermal? The napkin math works out to insane energy gains on each iteration (even with low-efficiency assumptions) if you're electrolyzing hydrogen fuel from H2O for the next blast with the energy, and the process requires no new exotic science; all the basics were invented in the 60s.
Unlike back then, today we have high-voltage DC power lines capable of ~85% efficiency over a ~12k km range (planet is 40k km). A few of these stationed in the middle of nowhere could supply the world's power, even if they're not too safe to the surrounding area (which is unlikely, since we've blown up thousands of bombs underground "safely" for decades).
Why the hell don't we have nearly-unlimited fusion energy farming by now?
They mention possible future uses both for power production on space missions, and terrestrial power production. Is there likely to be a gap where it makes sense for the first, but not the second? E.g. is producing a deuterium-loaded metal an energy intensive undertaking, which could cause a process overall to be net negative, but still useful to put on a satellite?
Sure. Radioisotope thermoelectric generators are standard issue on deep space probes but nearly unheard of in terrestrial applications, because there are so many easier and cheaper sources of energy on Earth. If the best they can manage is a large, heavy reactor that produces a slow trickle of electricity that lasts for decades, that's viable for many autonomous space applications but nothing special on Earth, except maybe in a few remote applications like sensor platforms on the sea bed or inside volcanoes.
If it's substantially more expensive per watt than solar, wind, and so on but can still be done compactly it could make sense for deep space probes that are too far from the sun for solar power.
Currently these are powered by capturing heat from decaying radioactive elements which is far from efficient or cheap.
Mostly I think that's just NASA justifying NASA doing this research, though.
Could you theoretically make some form of "fusion-boosted solar reactor" with something like this? Assuming not (which seems like a good assumption), how low-energy do the photons need to be before that becomes plausible?
I have a habit of getting over-excited by announcements like this, but... this is very exciting! Hopefully lattice-confinement fusion would be much more compact than other methods. I wonder how much more compact.
A theoretical LENR device would be comparable to an RTG [1], because at high heat the device would melt itself. It's more about a trickle of stable power than being used as a cell of a power plant. So, ignoring support infrastructure and focusing on the size of the power generator, we have:
Using the power source for the voyager probe as an example:
RTG size: 0.5m x 0.5m x 1m == 0.25 cubic meters.
RTG power: 2400 watts (thermal)
RTG power density: 9600 watts per cubic meter
Using ITER as an example for the scale of fusion power plant:
Reactor size: 800 cubic meters
Reactor power: 500,000,000 watts (thermal)
Reactor power density: 625,000 watts per cubic meter
So yeah, theoretically it would be compact, but low power density. Enough to power a space probe, but not our civilization.
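The arithmetic above can be checked in a few lines, using the same figures quoted for the Voyager-class RTG and ITER:

```python
# Power-density comparison from the figures above (thermal output).

rtg_volume_m3 = 0.5 * 0.5 * 1.0   # 0.25 cubic meters
rtg_power_w = 2400.0              # watts, thermal
rtg_density = rtg_power_w / rtg_volume_m3

iter_volume_m3 = 800.0
iter_power_w = 500e6              # watts, thermal
iter_density = iter_power_w / iter_volume_m3

print(f"RTG:   {rtg_density:,.0f} W/m^3")            # prints: RTG:   9,600 W/m^3
print(f"ITER:  {iter_density:,.0f} W/m^3")           # prints: ITER:  625,000 W/m^3
print(f"ratio: {iter_density / rtg_density:.0f}x")   # prints: ratio: 65x
```

So ITER's design power density is roughly 65x that of the RTG, which is the "compact but low power density" trade-off in a nutshell.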
If it can power a space probe, it can power a car. If it is even a fraction of the power of an RTG but without the same issues with fuel, it could be civilization changing.
If it is not that compact but it can still be placed inside homes, ditto.
In support of your statement, physicists on the team explicitly state that this experiment is not cold fusion:
> “What we did was not cold fusion,” says Lawrence Forsley, a senior lead experimental physicist for the project.... Forsley stresses this is hot fusion, but “We’ve come up with a new way of driving it.”
> “Lattice confinement fusion initially has lower temperatures and pressures” than something like a tokamak, says Benyo. But “where the actual deuteron-deuteron fusion takes place is in these very hot, energetic locations.”
Well, cold fusion wasn't cold either, or at least that wasn't the claim. Cold didn't refer to the temperature at the point of fusion, but rather the ambient temperature. You can get extreme temperatures within otherwise room temperature materials. See https://en.wikipedia.org/wiki/Sonoluminescence
Fleischmann and Pons were reporting their results without any hypothesis as to the mechanism. I think the consequence of the whole kerfuffle is that today nobody will bother trying to reproduce your results without a concrete, plausible hypothesis.
Saying that something isn't cold fusion is another way of claiming "it's not a scam", and normally implies "we can describe the phenomenon in terms of known physics".
> Cold didn't refer to the temperature at the point of fusion, but rather the ambient temperature.
Perhaps that was true then, but nowadays it seems like a rather ill-defined and not particularly useful distinction. Sure, the ambient temperature might be relatively cold, but why should that matter if the fusion itself is still taking place at high temperatures/energies? Should fusion in a tokamak be considered "cold fusion" because the apparatus itself is sitting in a room with a relatively cold ambient temperature? What about if someone develops a significantly miniaturized tokamak; say, a hand-sized one? Should fusion in that case be considered "cold"? This can be continued to smaller and smaller scales; in other words, at what point/scale does "high-temperature/energy fusion surrounded by low-temperature matter" move from "hot" fusion to "cold"?
I think it'd be fair to say that "cold fusion" (heavily) implies new physics these days; otherwise, it's "just" a miniaturized fusion reactor.
> Sure, the ambient temperature might be relatively cold, but why should that matter if the fusion itself is still taking place at high temperatures/energies? Should fusion in a tokamak be considered "cold fusion" because the apparatus itself is sitting in a room with a relatively cold ambient temperature?
A tokamak isn't "cold" as there's a clear and deliberate containment mechanism and a physical boundary delineating a macro environment of extreme energies within which fusion occurs. By contrast, circa 1989 the NASA setup might have been characterized as cold. But I don't think there's any answer that will satisfy people who disagree with me because the question rather quickly devolves to something like a sorites paradox (i.e. paradox of the heap of sand), and there's no chance mainstream science would ever attempt to qualify and quantify what counts as "cold" because the phrase "cold fusion" is already taboo, as is anything that even alludes to the phrase, making the exercise pointless.
But previous comments seemed to make the assumption that "cold" meant a complete absence of extreme temperatures and thus no plausible mechanism for achieving kinetic energies sufficient for consistent fusion, presumably because they assumed that Fleischmann and Pons's claim was similar to a stereotypical free-energy device.
> I think it'd be fair to say that "cold fusion" (heavily) implies new physics these days; otherwise, it's "just" a miniaturized fusion reactor.
I agree, today cold fusion implies new physics. And the way we got here wasn't because Fleischmann and Pons claimed to invent a free-energy device, or a way to trigger spontaneous fusion without any appreciable input of energy. They were inputting energy, and doing so to a device that, if we squint really hard and make allowance for then contemporary knowledge, wasn't on its face qualitatively different from the present NASA claim. On its face nothing in their claim violated known physics, though that's because their claim was so thin.
The whole incident came to a head the way it did because people got super excited, many teams tried to replicate, and at least one did replicate--at least, they replicated false signals. The claims triggered a kind of geek hysteria--people started seeing things that weren't there and then rationalizing them, because that's what happens when a ton of people in a frenzy jump into the fray simultaneously. See, e.g., COVID-19 research.
How does a community prevent that from happening again? Notably, Fleischmann and Pons's claim didn't have a theory for what, precisely, was happening. They were just exploring paths which at that time were perfectly legitimate (and might still be; IANAP), found a signal, and published. You couldn't critically judge the credibility of the result without first investing yourself in it by attempting replication. There was nothing of substance to filter beyond their credentials. The fix was to require claims to come with a theoretical model that could be judged on its own merits.[1] The absence of a model doesn't necessarily mean a claimed result can't possibly be legitimate; it's just a time and risk management strategy used to front-end the community process. Before the debacle an attached model was a nice-to-have; afterward it was basically a categorical requirement, reflecting the reality that, a priori, absence of evidence is evidence of absence. (And absence of evidence is even stronger evidence of absence after you institute a rule requiring new claims to include a model.)
So that's why I say that today "it's not cold fusion" implies that a claimed result comes packaged with a plausible model, where in turn plausibility is partly a function of the need for new, fundamental physics. If a claim doesn't come packaged with a plausible model (or models, as in the NASA claim) it's analogized to "cold fusion" because that debacle also lacked a plausible model. (In actuality, it lacked any kind of model, but that nuance is lost because this isn't the kind of context where normal people apply predicate logic. But the nuance matters when people start to argue that cold fusion == bad science because Fleischmann and Pons's claim contradicted known physics. It didn't and it couldn't, because they didn't make any claim about mechanism beyond the necessary implication that it somehow involved, in at least one manifestation, palladium, deuterium, and an applied electrical charge.)
[1] Scientists still use credentials to filter claims. The amount of time someone is willing to invest critically analyzing a paper is still a function of the author's perceived credibility.
> A tokamak isn't "cold" as there's a clear and deliberate containment mechanism and a physical boundary delineating a macro environment of extreme energies within which fusion occurs.
Sure, but as you noted this definition becomes fuzzier at smaller and smaller scales, which IMHO reduces its usefulness quite a bit as it doesn't do a very good job at differentiating between hot and cold fusion beyond "how small of an apparatus can we build with current technology".
> and there's no chance mainstream science would ever attempt to qualify and quantify what counts as "cold" because the phrase "cold fusion" is already taboo
At least to me, it seems that there are at least two potential definitions of "cold fusion" already floating about (though to be fair, I don't know whether these are "mainstream" definitions):
1. Fusion which doesn't require high-energy/temperature nuclei
2. Fusion (maybe "regular" fusion, maybe not) which takes place at small-enough scales that large containment apparatuses are not necessary
The first definition might work for a (relatively) precise definition of cold fusion, while coming up with a similarly precise one for the second definition looks like it'd be harder.
The physicists in the article appear to be using the first definition, although I don't know whether that is the most widespread definition used among "mainstream" physicists. Fleischmann and Pons's setup seems to fall under the second category.
Granted, it looks like you're much more familiar with the subject matter than I am, so there's a good chance I messed up something or another.
> So that's why I say that today "it's not cold fusion" implies that a claimed result comes packaged with a plausible model, where in turn plausibility is partly a function of the need for new, fundamental physics.
That was an interesting dive into the historical aspect of things. I really appreciate you taking the time to type out all that explanation!
Would it make more sense to change this to https://www1.grc.nasa.gov/space/science/lattice-confinement-... ? It's one of the first links from the article. The original is spectacularly badly formatted and interspersed with so many ads as to be practically unreadable on a desktop machine.
This experiment has some similarities with cold fusion [1] experiments, for example those of Fleischmann and Pons, as both involve metals and hydrogen. But I am not sure that hitting stuff with photons in the megaelectronvolt range would or should still count as low energy in the context of cold fusion, so one could probably consider the similarities rather superficial.
The Wikipedia article actually mentions that one proposed mechanism for cold fusion is the confinement of hydrogen inside of matter but also that this alone is not good enough to produce the claimed fusion rates. So in some sense some of the cold fusion people were on the right track, they just missed hitting the experiment with energetic photons.
Now it seems pretty obvious: get the hydrogen close together inside a metal, then hit it hard to overcome the remaining gap. But I guess the cold fusion community really wanted very low energies, and so this was not considered. Or maybe it was, and they just missed another detail or could not build or run the experiment; I really have no good idea what the cold fusion community (or low-energy nuclear reaction community, after that name was burned) considered and tried.
I don't think anyone suppressed cold fusion. The original results couldn't be replicated. There were some anomalous results, but no results that lined up with the original claims that were made.
If a scientist makes claim X, no one can replicate the experiment, and most scientists conclude that claim X resulted from experimental error, that is not suppression.
If a scientist claims that X sometimes happens... and then researcher Y repeats the experiment, and it happens a few times, but not always... or not at the same frequency... far too many people say the replication has failed, instead of digging further.
There were many such incidents where the stated effect happened only a few percent of the time... but not zero. Because it wasn't 100%, it got ignored or rejected.
The original results were replicated... they just couldn't be replicated reliably or easily. Turns out cramming deuterium into a palladium lattice isn't an easy-peasy way to get fusion without a lot of experimentation.
There was way too much money invested in hot fusion to let even the possibility of an alternative progress.
Tens of millions of dollars were invested in trying to replicate the cold fusion experiments with zero success. That, combined with Fleischmann and Pons' general incompetence, makes it pretty obvious that the initial report was a fluke.
There was plenty of replication, it just wasn't published in the "peer reviewed" journals that are supposed to help advance science, but work against it these days.
People have been putting moderate energy deuterons into deuterated materials since the 1930s. They typically do it by shooting ions at a target, but it's standard technology for making neutrons in small neutron generators, for example those used in oil well logging.
And it's well known this doesn't promise to produce fusion anywhere even close to breakeven.
If I had to bet, I'd say these researchers missed some confounding effect and are measuring an experimental artifact.