Compact nuclear fusion reactor is 'very likely to work,' studies suggest (nytimes.com)
622 points by Todd on Sept 29, 2020 | 355 comments



https://www.youtube.com/watch?v=KkpqA8yG9T4

Here's a video lecture from the MIT Professor (Dennis Whyte) who was leading the research group that provided some of the key designs for the SPARC reactor. As the NYT article explains, that research has been spun out into a startup that raised $200M.

The key breakthrough is the advancement of REBCO tape superconductors, which allow you to (1) generate record-breaking magnetic field strengths and (2) easily disassemble the superconducting loop for fast repairs / refuels / a more modular design.

It's a long talk, but it's extremely fascinating. Basically everything becomes much easier once you can increase the magnetic field strength. This talk is fairly accessible to even relative laypeople who have a vague understanding of E&M physics.


Timeline (in case you want to skip over some parts):

00:01:00 - introducing Dennis Whyte, MIT department head for nuclear science

00:04:24 - presentation starts

00:06:00 - identifies breakthrough with REBCO magnets

00:07:25 - explains deuterium-tritium fusion

00:12:30 - basic metrics for reactor performance

00:17:15 - energy output of other previous fusion experiments

00:19:00 - examines ITER and the problems of its approach

00:22:00 - problems solved by high energy magnetic fields

00:28:15 - full scale reactor concept, teardown of REBCO magnets

00:37:00 - design limits and margins

00:39:00 - fixes plasma instabilities found in weaker magnetic chambers

00:40:00 - maintainability, lifespan, component replacement

00:45:00 - solution to neutron damage and energy capture

00:50:30 - cost and profitability

00:54:00 - full graph of field strength vs reactor scale (and thus funding requirements)

01:01:50 - Q&A

01:30:00 - question about the biggest risks

Also a more recent video, with more numbers and even more confidence than the first: https://www.youtube.com/watch?v=rY6U4wB-oYM


The hero we don't deserve.


I think they're using Yttrium (i.e. YBCO), right? It's hard to find that info.


Buried in the wiki page for YBCO is a note that REBCO is a synonym of sorts. The superconducting tapes that are discussed in the talk thus would seem to be YBCO.


Yes. Re stands for "rare earth," and yttrium is probably the most common one used for ReBCO, but lanthanum is also used.


https://nationalmaglab.org/magnet-development/magnet-science...

Looks like they 'only' need to be at 4.2 kelvin to operate as well, which is a definite improvement


They don't need to be that cold to start superconducting; 4.2 K is just the temperature you could expect with liquid helium cooling. They would actually be superconducting with liquid nitrogen, but the reason you would still want to go colder is that superconductors have a maximum magnetic flux that they can sustain while still being superconducting. The current travelling through a superconductor itself also contributes to the magnetic flux, so even though it's superconducting there's still a limit on how much current you can pass through a conductor and how high a magnetic field you can create. This limit depends on temperature, though, so you definitely want to use liquid helium so you can create a much stronger magnetic field.


Right. To expand: Maybe someday we can use liquid hydrogen or nitrogen (or even water, depending on progress with hydride superconductors) for these reactors, but a big reason they’re able to make this more compact reactor work is the much higher critical currents/fields that these high temperature superconductors can handle when cooled far below their critical temperature. See: https://fs.magnet.fsu.edu/~lee/plot/plot.htm
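
To make that point concrete, here is a minimal, runnable Python sketch of why you cool an HTS magnet far below its critical temperature. The (1 - T/Tc)^1.5 scaling and the numbers are purely illustrative assumptions, not measured REBCO data; the real critical surface depends on field, field angle and tape construction (see the plot linked above).

  # Toy model: fraction of the zero-temperature critical current available at T.
  # The exponent and Tc are illustrative assumptions, not REBCO measurements.
  def critical_current_fraction(T, Tc=92.0, exponent=1.5):
      if T >= Tc:
          return 0.0
      return (1.0 - T / Tc) ** exponent

  for T in (77.0, 20.0, 4.2):  # liquid nitrogen, cryocooler, liquid helium
      print(f"{T:5.1f} K -> {critical_current_fraction(T):.2f} of the T=0 critical current")

Even in this toy model, dropping from 77 K to 4.2 K recovers most of the zero-temperature current-carrying capacity, which is the qualitative reason the SPARC-style magnets run so cold despite using a "high temperature" superconductor.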


Another excellent talk for those interested is MIT's Pathway to Fusion Energy (IAP 2017) - Zach Hartwig.

https://www.youtube.com/watch?v=L0KuAx1COEk

He goes into detail about SPARC as well, and why a higher magnetic field using HTS superconductors enables performance that could otherwise only be obtained through greater size, as ITER is attempting.


That's a great talk, and really gives me a lot of hope for the SPARC concept that this article is about.


Phenomenal talk! I'm still watching - so captivating.


I wish these people the best, and I really hope they get a working fusion plant soon. That said, I can't resist sharing Admiral Rickover on academic reactors vs practical reactors:

Important decisions about the future development of atomic power must frequently be made by people who do not necessarily have an intimate knowledge of the technical aspects of reactors. These people are, nonetheless, interested in what a reactor plant will do, how much it will cost, how long it will take to build and how long and how well it will operate. When they attempt to learn these things, they become aware of confusion existing in the reactor business. There appears to be unresolved conflict on almost every issue that arises.

I believe that this confusion stems from a failure to distinguish between the academic and the practical. These apparent conflicts can usually be explained only when the various aspects of the issue are resolved into their academic and practical components. To aid in this resolution, it is possible to define in a general way those characteristics which distinguish the one from the other.

An academic reactor or reactor plant almost always has the following basic characteristics: (1) It is simple. (2) It is small. (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose ("omnibus reactor"). (7) Very little development is required. It will use mostly “off-the-shelf” components. (8) The reactor is in the study phase. It is not being built now.

On the other hand, a practical reactor plant can be distinguished by the following characteristics: (1) It is being built now. (2) It is behind schedule. (3) It is requiring an immense amount of development on apparently trivial items. Corrosion, in particular, is a problem. (4) It is very expensive. (5) It takes a long time to build because of the engineering development problems. (6) It is large. (7) It is heavy. (8) It is complicated.

The tools of the academic-reactor designer are a piece of paper and a pencil with an eraser. If a mistake is made, it can always be erased and changed. If the practical-reactor designer errs, he wears the mistake around his neck; it cannot be erased. Everyone can see it.


It's a good contribution to the conversation, keeping in mind that Adm. Rickover wrote these words in 1953.

Anyone could have said similar things about computers in 1953, and been just as correct.

Are nuclear reactors computers? Of course not, and neither have practical reactors kept pace in development with practical computers.

But neither is it inevitable that steady progress cannot grind down the latter set of characteristics into the former. Indeed, that's what I would bet on, and I think the Admiral would be disappointed in the progress we have(n't) made.


> Anyone could have said similar things about computers in 1953, and been just as correct.

Many people could say similar things about software in 2020 and be just as correct. :) The good Admiral may be describing some timeless aspects of engineering, possibly related to the recently discussed observation that reality has a surprising amount of detail: https://news.ycombinator.com/item?id=16184255 .

Though, as you say, over time engineering can grind out some of the practical into practice.


I, for one, never have corrosion problems on my software projects.


You jest, but... compatibility is the corrosion of software. The environment and dependencies underlying your software are slowly oxidizing/transforming all the time and unless you pick only the stainless steel of interfaces/system calls, your software will cease to function more quickly than lots of iron objects rust through.


I take it you're not programming in Rust? /s


In fusion, they can actually pinpoint exactly why it did not pan out as expected since 1953.

For fusion, you need to achieve a high temperature, so "thermal insulation" of the centre of the plasma from the edge of the plasma is really important. The nice thing is that, since charged particles move along magnetic field lines, and magnetic field lines never cross each other, the thermal insulation of a plasma made from charged particles is enormous. That would imply it is really easy to achieve really high temperatures.

However, in 1953 they overestimated the enormous "thermal insulation" at higher temperatures by 10 orders of magnitude. That is a ratio of 1e10 between the expected and the actual value. It is still enormous, mind you, but not that enormous.

If that estimate had been correct, we would have had fusion by 1973. However, in that period it gradually became clear that fusion that way was not going to happen, as people found lower and lower thermal insulation in their plasmas.

The same thing might have happened with semiconductors. What if the thermal noise at smaller scales had been off by 10 orders of magnitude compared to what was expected in 1953? Computing would not be where it is today, although it would definitely still be a good possibility if only we pushed harder at circumventing the noise.


Ah, back in 1953, when they were only 20 years away from practical nuclear fusion.


As opposed to 2020 where we're now only about 20 years away from practical nuclear fusion


Or 2050 when we will hopefully only be about 20 years away from practical nuclear fusion... ;)


Similar things can be said about computers: the Mill CPU is simple, small, cheap, fast etc and of course can't actually be bought right now.

Vaporware is a real problem in every discipline.


I guess we can ask Huawei about how easy it is to build computers in 2020...


with stolen blueprints and unlimited govt resources, easy as pie...


Not really. After the collapse of the Soviet Union, China bought complete blueprints, working models, and the time of engineers who had designed Soviet combat aircraft and, I believe, other military hardware. They still couldn't replicate them. A toolchain and manufacturing ecosystem is a unit. Industrial espionage is useful, but it's not a way to leapfrog 20 steps; maybe four, ten if you're both lucky and good.


Admiral Rickover was a smart dude, but none of these points are applicable here. This development is about fusion, not fission; those pursuing it are not amateurs at all, they are MIT people, and they are not shy to pick up a wrench and use it in a lab. One by one, the points made by Rickover against his adversaries are not relevant here:

1. It is simple: no, nobody says here that fusion is simple.

2. It is small: well, this one is small compared to ITER, but nobody's saying it's small in an absolute sense.

3. It is cheap: sure, compared to the billions and billions poured into ITER, this is dirt, dirt cheap, but still, they have secured $200MM so far, and they will need more money until they build an actual reactor.

4. It is light: really not applicable here, since nobody's thinking of putting these reactors in submarines (which is what Rickover was really talking about).

5. It can be built very quickly: sure, the time horizon is less than the perpetual 30 years of the old fusion reactor proposals, but it's not going to be "very quick".

6. It is very flexible in purpose: no, for the time being, something that just works is fine; nobody cares about flexibility.

7. Very little development is required, it will use mostly off-the-shelf components: nope, there will be plenty of very custom-made components developed for just this purpose, and lots and lots of development is required.

8. The reactor is in the study phase, it is not being built now: it is both in the study phase and being built.

In the end, what was Rickover's agenda? Some people were competing with him for government funding for nuclear reactor R&D in general, and for naval nuclear reactors in particular. All those people were a nuisance to him; he had to go and plead for funds again and again, so he decided to write this letter to put the matter to rest once and for all. He managed.

But here it's fusion. Commonwealth Fusion is privately funded, they are not looking for government funding, and they are not making inflated representations to the public. They are actually keeping a low profile in general. The message of this article is simply "this time there is a glint of hope", nothing more.


Yes, that's very important but it is also important to remember that this was in a particular context and that the "pick a design and just build it" approach only works when you have a design (however imperfect) that is going to work.

First, the context. Rickover needed to get a reactor designed, series built, and operated right away. In that context, he was absolutely right.

Second, the PWR design works and it works very well for submarines. I don't know that there has been a better design concept for submarines (the USSR built a few liquid metal cooled reactors which caused them lots of problems). When you have a basic design that does work, then going ahead and building it is the right thing to do.

We can see the consequences of what that approach does to earlier-stage research programmes as well, because some of Rickover's people were put in charge of Argonne's reactor research programme, which they did not understand and where they caused an immense amount of trouble by trying to duplicate this approach in a context where the basic approach had not been definitively decided. This led, for instance, to spending an enormous amount of money and time pursuing an oxide fuel fast reactor concept despite being warned that there were fundamental problems with that design and that spending more time up front on metal fuel design work would be a better idea. They didn't want to hear it - oxide fuel works (which it does - in thermal PWR and BWR reactors) - so go with that and just build it.

Rickover was right, of course, about the principle of understanding technological predictions in the context of what we would now call Technology Readiness Levels and optimism bias, but in understanding this we also have to keep in mind the other side of the coin, which is not to commit entirely to premature optimisation.


Rickover was focused on safety, and specifically solutions that could be safely maintained and operated in a steel pipe full of men, 100' beneath the sea.

https://en.wikipedia.org/wiki/Hyman_G._Rickover#Safety_recor...


A solid, reality check. Here is the source link: http://ecolo.org/documents/documents_in_english/Rickover.pdf


This isn't an academic reactor. MIT is involved because they did the theory work, but it's venture funded with a plan to build a working reactor in a few years.


How isn't this an academic reactor as defined by Admiral Rickover? It meets a lot of his criteria: small, cheap, built quickly and, probably most importantly, it's not being built now.


This point is worth noting:

> (3) It is requiring an immense amount of development on apparently trivial items. Corrosion, in particular, is a problem.

The design has molten fluoride salt circulating in a strong magnetic field. Molten salts, unlike molten metals, have low (but not zero) conductivity.

But the motion will still induce a voltage drop across the salt. If the walls are metal, there is the possibility of galvanic corrosion due to this potential. And there's the possibility that radiation could enhance this corrosion.
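
As a rough illustration of the scale involved, the motional EMF across a conducting liquid moving through a magnetic field is V = B * v * L. The numbers below are assumptions picked only for illustration, not SPARC/ARC design values:

  # Back-of-envelope motional EMF across a flowing molten salt: V = B * v * L.
  # All three inputs are assumed values for illustration, not design numbers.
  B = 10.0   # magnetic field through the blanket region, tesla (assumed)
  v = 1.0    # salt flow speed, m/s (assumed)
  L = 0.5    # channel width perpendicular to B and v, metres (assumed)

  V = B * v * L
  print(f"Induced potential across the salt: ~{V:.1f} V")  # ~5 V for these inputs

Even a few volts across an electrolyte in contact with metal walls is ample to drive electrochemistry, which is why the galvanic-corrosion concern above is not a trivial one.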


I hear Adm. Rickover fixed a whole toothpaste factory once with nothing but grit and an $8 electric fan. Hoo boy he made those expensive college boy consultants look like they just fell off the turnip truck.


Here's the actual summary:

"Although many significant challenges remain, the company said construction would be followed by testing and, if successful, building of a power plant that could use fusion energy to generate electricity, beginning in the next decade."

In other words, "very likely" in this case means "if several roadblocks are overcome, it might be a net-positive power generator in a decade". Even so, this is still exciting given how anemic advancement in the fusion space has been for 50+ years.


The Fusion Triple Product, which is the key determinant of whether there is a self-sustaining reaction, increased faster than Moore's law up until about 2005. Then all the money went into ITER.

https://physicsworld.com/wp-content/uploads/2004/01/pwhoa4_0...

From the article at:

https://physicsworld.com/a/controlled-fusion-the-next-step/

more information about the triple product and the Lawson Criterion.

https://en.wikipedia.org/wiki/Lawson_criterion
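
For readers who want to play with the numbers, here is a minimal sketch of the triple product check, using the roughly 3e21 keV·s/m^3 D-T threshold quoted on the Lawson criterion page; the example plasma parameters are made up for illustration and are not SPARC's design point:

  # Fusion triple product n * T * tau_E versus the approximate D-T ignition
  # threshold of ~3e21 keV·s/m^3 (minimum near T ~ 14 keV).
  DT_IGNITION_THRESHOLD = 3e21  # keV * s / m^3

  def triple_product(density_m3, temperature_keV, confinement_s):
      return density_m3 * temperature_keV * confinement_s

  # Illustrative plasma, not a real machine's parameters:
  example = triple_product(density_m3=3e20, temperature_keV=15.0, confinement_s=0.5)
  print(f"n*T*tau = {example:.2e} keV*s/m^3, "
        f"{example / DT_IGNITION_THRESHOLD:.2f}x the ignition threshold")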


Here's a great article showing progress improving the triple product until now. https://www.fusionenergybase.com/article/measuring-progress-...


Well now I really want to see the projected SPARC reactor on that plot. Does anyone have the necessary familiarity to add it?


Author of that article and plot here. SPARC is projected to have energy gain Q >=2 and potentially up to 11[1]. ITER is projected to achieve Q of >=10[2] so I would guess that SPARC's expected triple product would be in the same ballpark as the projected ITER datapoint, perhaps slightly lower, though potentially the same. We'll see!

[1] http://doi.org/10.1017/S0022377820001075

[2] https://www.iter.org/newsline/-/2845


> this is still exciting given how anemic advancement in the fusion space has been for 50+ years.

"Nuclear fusion. 30 years away since 1950"

Joking aside, I too am glad to see some progress of any kind, and new (seemingly credible) initiatives being funded and pursued.


> "Nuclear fusion. 30 years away since 1950"

Those projections usually include the caveat "if properly funded" - https://i.imgur.com/3vYLQmm.png


I’m not sure why this is being downvoted. Reactor studies come out every decade and validate this study every time.


I'm not sure lack of money has been the main issue. ITER is projected to cost $22-$65 bn and still won't be able to produce practical power because the tech isn't up to it. It's only the discovery of REBCO magnets that has made the SPARC design perhaps possible.

Switching ITER to an updated design might be an idea.


Where are you getting that ITER budget range? From what I can tell, it is much closer to $20 billion. That's for a project that began in the 80s and will finish in 2045. $20 billion over 60 years is a pittance compared to the 15+ TW of power the world consumes.


It was an estimate from the US Department of Energy https://www.nextbigfuture.com/2018/10/nextbigfuture-said-ite... which ITER disputes.


`"Nuclear fusion. 30 years away since 1950"`

I feel like there should be bonus points for getting this quote to the top of every fusion article posted until there's an existing fusion reactor.



Have you seen this? I'd love to find an updated version, but they've been making exponential progress on it since the 70's.

https://i.imgur.com/BN0pz.png


There's a version of this graph about 18 minutes into the Dennis Whyte talk linked in the top comment, though if I understand it correctly, it's measuring "power out per pulse" where your graph has "power out per unit power in".

He says progress stalled in 1995 and is not projected to continue until ITER is built, 10 years from now. Then the rest of the talk is actually about an alternative to ITER.


“The year of the nuclear desktop”!


Sounds about the same as generalized AI in terms of timetables


To be fair, we have known how to build a working nuclear fusion power generator (probably with less than 30 years of construction) since the 1950s (see Project Orion); it's just a ridiculously huge construction/engineering project (basically an internal combustion engine powered by thermonuclear bombs).


Orion was essentially an "external combustion engine". I.e. you have a huge piston that is driven by repeated explosions of nuclear bombs, with a spring mechanism to even out the wallops, and no "engine block" around it.

It did not generate electricity, and given the discrete/pulsed nature of the mechanism, I have a hard time seeing how you would "even out" the output, should you decide (somehow) to attach it to a generator. Let alone imagining how you would transport said electricity back to Earth.

Because, don't forget, it was only possible to have an external combustion engine because it was in space. Even by 50's standards, the project was judged a non-starter for atmospheric flight, from an environmental point of view.

The project itself was in theory viable for its original objective (space travel), but highly impractical. It requires massive, robust structures (read 'heavy') that can resist very high impacts. Not a problem once you're in space, but you have to get it into space first. And no one will let you blow up a couple of scores of nukes in the atmosphere on a recurring basis. The thought of launching hundreds of nukes into space up to a ship built in orbit is not exactly an easy proposition either.

Still, should an extinction-level meteor ever show its face, it would be useful.

For fun : some original footage of the prototypes in flight (using high explosives) https://www.youtube.com/watch?v=Q8Sv5y6iHUM


Strange. I've read about Orion-type vehicles off-and-on for several decades, and the impression I always had was that the shield would be a hemisphere, or parabolic, rather than shaped like a nosecone.

Am I missing something? How do the lower parallel sides help capture the energy of an explosion below the shield?


I've always understood it to be a plate. That way the vector of the forces is in the direction of travel. A parabolic shape would have to resist sideways forces.

A bit like this:

  https://www.centauri-dreams.org/2016/09/16/project-orion-a-nuclear-bomb-and-rocket-all-in-one/


Blocked by the slowness of the space industry.


> this is still exciting given how anemic advancement in the fusion space has been for 50+ years

At the top of my list for when I'm king for a day: massive, national-scale investment (think: reaching towards one percent of GDP) in fusion research.

This is a major roadblock, but things look awesome on the other side. We just need to get our stuff together to hop over this fence somehow.


I think it’s generally very hard to force technological progress along by just throwing money at it.

Let's suppose this proposal works out: ITER will become a very expensive boondoggle, a technological dead end. But if you'd taken the 'Manhattan Project' approach and thrown 10x as much money at fusion research 20 years ago, we'd most likely have spent most of it on a super-ITER. It might be operational by now and might even have reached break-even on power generation, but in the longer term it would now be just as redundant, superseded by this new approach.


That's a version of "The Wait Calculation". For instance, it would be a waste of resources to build a starship right now to visit the closest other solar systems, because given our current maximum velocity and our rate of technological increase, it's a virtual certainty that in the future we would be able to build a second starship that would catch up and overtake it.

Eventually there is a break-even point (for the closest solar systems, last I checked, it's around 400 years), but in order to know what it is you need to know the amount of time to reach the goal and your rate of increase. I don't know if there's a way to estimate that with fusion research.
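
The break-even logic is easy to sketch. Assuming (purely for illustration) a probe that can do 0.1% of c today, speeds that improve 1% per year, and a target 4.25 light-years away, the launch year that minimizes arrival time falls out of a one-line search:

  # "Wait calculation" sketch: a probe launched in year t arrives at
  # t + distance / (v0 * (1 + growth)**t).  All inputs are assumptions.
  def arrival_year(launch_year, distance_ly=4.25, v0=0.001, growth=0.01):
      speed = v0 * (1 + growth) ** launch_year  # fraction of c
      return launch_year + distance_ly / speed

  best = min(range(2000), key=arrival_year)
  print(f"Best launch: year {best}, arrival around year {arrival_year(best):.0f}")

With these made-up inputs the optimum launch is a few centuries out; change the growth rate and the answer moves dramatically, which is exactly the estimation problem for fusion funding.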


This is a great mental model. I'm curious about applying this in areas of one's own life. Certain things that you'd actually accomplish faster, by starting later and leveraging the compounded time / resources that would have been spent pursuing it earlier.


The 80,000 hours folks generally apply this kind of reasoning to explain why it’s better for most people to just focus on a corporate job, stockpile a ton of cash, and donate it all when you die, instead of quitting to join a cause / non-profit - if your only goal is to optimize your impact.

Obviously people are free to choose differently and there are exceptions, unique circumstances etc.


I’m reasonably confident your summary of their advice is out of date. They now believe most organizations are talent constrained and the difference between the best and second best candidate is often very large so most people should be trying to work on hard problems fast if they can.


OT, but if solar panels last for over 20 years and get at least 5% cheaper every year, you save more money by waiting to install them than you do by installing them.


That does not factor in the money you make with the solar panel. I.e., if having a solar panel nets a profit of 10% of the cost of the solar panel, then installing one now makes more money than waiting a year.
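
A quick sanity check of the two comments, under assumed numbers (a $10,000 panel today, prices falling 5%/yr, and the panel offsetting $1,000/yr of electricity, i.e. 10% of today's price), over a 20-year horizon:

  # Install-now vs wait: price falls 5%/yr (assumption from the grandparent),
  # savings are $1,000/yr regardless of what you paid (assumption from the parent).
  price_today = 10_000.0
  yearly_saving = 1_000.0
  price_decline = 0.05
  horizon = 20  # years over which we compare

  def net_after_horizon(wait_years):
      purchase_price = price_today * (1 - price_decline) ** wait_years
      years_producing = horizon - wait_years
      return yearly_saving * years_producing - purchase_price

  for wait in (0, 1, 5, 10):
      print(f"wait {wait:2d} years -> net ${net_after_horizon(wait):,.0f}")

Under these made-up numbers, every year of waiting saves at most $500 on the purchase but forgoes $1,000 of electricity, so installing now wins, as the parent says; the conclusion flips only if the panel's annual payback is slower than the price decline.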


ITER is not cheap. But it is expected to get to first plasma by 2025. There will be things learned at ITER that will be applied to SPARC and other smaller reactors. ITER is expected to cost a bit over twice what a fission plant costs. Considering the stakes, spending that money to learn things about fusion power generation seems well worth it.


I didn't see the details, but it looks like ITER and SPARC are both tokamaks performing D-T fusion.

And in fact, it looks like Sparc is more of an exercise in making fusion reactors cheaper, most notably by using different magnets. ITER is more about validating the concepts, with no regard for the price. Besides the fusion itself, ITER will study how to inject fuel and evacuate the waste products as it is running. They will also tackle the problem of producing the large quantities of tritium a fusion plant requires.

So, my understanding is that Sparc is not intended to make ITER obsolete, but instead tackle a different problem. If commercial fusion reactors happen one day, I expect they will have a bit of both.


ITER is a research project and will likely be very valuable, even more so when the compact reactors turn out to be viable. As amazing as the compact reactors are they still face many of the same challenges as ITER.


ITER is a make-work project, with the goal of demonstrating a brave new world of international coöperation. Any research which emerges from the maelstrom is a nice bonus.


> But if you’d taken the ‘Manhattan Project’ approach and thrown 10x as much money at fusion research 20 years ago we’d most likely have spent most of it on a super-ITER.

Or maybe someone would have had enough sense to think "you know, what we really need is better superconducting magnets. Let's throw 1% of the budget at that problem."


Sure, but on the other hand, such a lucky break as this new approach might never have happened. At the same time, if the super-ITER approach had been shown to work, then by producing it en masse it would likely have become cheaper per installed MW than fission power, which would remove the need for solar panels and wind altogether. We'd have cheap power with no danger of nuclear pollution, no real nuclear waste problem, and no habitats and landscapes destroyed by solar panels and wind turbines. We'd still have won over the current status quo, just not by as much as we might with this lucky break.


Maybe, or maybe not, who knows? That's the problem: you're betting everything on your best guess at what the optimum approach might be, long before you have any idea what the odds are.

The problem is (1) the earlier you spend big, the greater the chance you'll pick wrong, because you have less information to base a decision on, and (2) spending big doesn't actually improve the odds that the approach you pick will end up being the right one. It just means you find out sooner.


You don't know that. What if you had spent 1000x the budget of the manhattan project on building an atom bomb in the year 1789? You would have gotten squat.


I don't think that's quite an apt comparison. The fundamental physics of an atomic bomb were not well known until the late 1800s. Apparently the Manhattan Project cost about $23 billion in 2007 dollars. ITER apparently had an initial budget of around $5 billion in 2005-6. By now, the construction cost is estimated to be at least $22 billion. It seems plausible that if we had thrown $15 or $20 billion at it in 2006, we could have brought the timeline forward by a handful of years. Or we might have burned $20 billion on things like advanced construction and salaries for physicists and engineers, with relatively little to show for it.

The GDP of the EU is apparently $18 trillion, so $20 billion is about 0.1% of GDP in a single year, and of course that $20 billion is spread over 1-2 decades. If anything, it seems like, as a species, we should have more of these bets going. What if we spent 1% of our GDP on 10 long-shot, high-impact projects? Or hell, half a percent on 5 long-shot projects, and half a percent on solving the dozens of problems that we could solve simply by funding them?


Yes, exactly; this is what Nassim Taleb writes about in Antifragile: bets with a known, limited downside and the potential of an almost unlimited upside.


The upside is in no way unlimited. Even in the best case scenario, fusion power will be expensive. It might end up being useful for base load but it's unlikely to be cheaper than solar in most areas.


Assuming that you use it like before. But if you have a lot of power in one place you may think about using it for something like an orbital launch system.


The power grid already allows us to have as much power in one place as we want.


> The fundamental physics of an atomic bomb were not well known until the late 1800s.

Er, don't you mean the early 20th century?

Rutherford's exegesis of the nuclear model of the atom was published in 1911.[1]

1. https://en.wikipedia.org/wiki/Rutherford_model


Heh, wasn't sure where to draw the line. Radiation was discovered in the 1890s, so that seemed like as far back as you could go.


I know that, because we already have ITER, which is widely expected to be energy positive. Moreover, the ITER approach is known to scale upwards: in fact, ITER is so large because that's the smallest size that is expected to work.


ITER is expected to be energy positive, but it's a kind of pyrrhic breakeven. The gross power density of ITER will be 1/400th the power density of a PWR reactor vessel.



Do you think that a better approach, given a lot of resources, would be to push more general/fundamental research without a clear end goal in mind? That approach makes sense to me intuitively given how many surprising breakthroughs there have been throughout history, but I admittedly know very little about actual research.


Most surprising breakthroughs (I am not a historian though) were found with a clear goal in mind. It was just that the goal was very different from the breakthrough.


Ah ok, thanks. I guess in that case a good strategy could be to pursue a lot of specific goals, even if they don't seem worth it on the face of it. Seems like an approach the Ig Nobel awards would appreciate.


Probably not. REBCO was discovered in the 80s but is difficult to work with. A fusion energy Manhattan project would make 3-4 reactors in parallel. If it happened in the 2000s then they would have thrown significant effort at HTS technology. That’s one of the most obvious areas to invest in. The whole world would benefit from that.


ITER and Sparc are not mutually exclusive routes. ITER will teach us a lot about Fusion, knowledge that will be useful to scale up production.


"would now be just as redundant and superseded by this new approach."

And this approach will be superseded by something else. And that will be superseded in turn. Should we wait forever? We'd still live in caves.

These new approaches don't just show up, they come from the knowledge and scientists trained in whatever the previous approaches were.

Lastly, if this approach results in cheaper fusion power but needs an extra 20 years of R&D, that's time we don't have.


It worked for WW2 and the nuclear bomb. It is hard but also effective.


I couldn't agree more. Once you realize that energy is the fundamental input that powers everything else in the economy, from every movement of our own bodies to those of our machines, you realize that the more we advance in our ability to produce more and more energy at cheaper and cheaper costs in increasingly reliable ways, the more we can accelerate, without exception, every other industry.


There is ~zero expectation in the scientific community that fusion is going to be cheap at scale. It might eventually be cheaper than fission due to several factors. But, even that’s looking a long way off.


> It might eventually be cheaper than fission due to several factors.

This surprises me, for a number of reasons.

A fission reactor can be operated safely by a bunch of people with baccalaureates, whereas a fusion reactor will need PhDs, as I understand things. Also, its capacity factor will not be as good as that of fission reactors.

The waste from fission reactors can be made pretty small by reprocessing and in-reactor transformations. The radioactive waste from fusion is whole reactor vessels, which are large and difficult to handle (=expensive).

At commercial scale fusion reactors need ancillary fission reactors to manufacture the tritium they require. So you have two reactors instead of one.

Other factors look similar, except that fusion will carry an investment risk premium because of its novelty and complexity.

So under equivalent regulatory regimes it seems to me that fusion would cost as much or more than fission.

Can you explain why it might eventually cost less?


Significantly fewer security concerns. No need for a massive containment vessel or massive redundant safety systems. Likely cheaper fuel and no long-term high-level nuclear waste. A smaller percentage of the reactor becomes contaminated. Lower insurance costs.

D is already cheap and even used by fission reactors. Extracting T from the blanket is presumably inexpensive, and is one of the things ITER will test, but that's an unknown. https://en.wikipedia.org/wiki/Breeding_blanket The real question is how expensive the physical reactor is going to be to build and maintain, and whether that offsets the other savings. That cost is expected to drop over time, which is why it's possible for fusion to eventually be cheaper than fission.

Finally, DT is easiest to achieve at a 50/50 ratio but DD fusion still takes place. So a lower mix of T is viable once very high Q values are possible, thus eventually zero T designs should be viable.


> whereas a fusion reactor will need PhDs

If you need a bunch of people with PhDs to operate a reactor, then you won't have a reactor. Not only does it mean that staffing is expensive and difficult, but also that it won't be reliable or predictable. If it is predictable, then steady state operation should be offloaded to computers.


How so? Unlimited, non-radioactive fuel and the inherent safety of the system alone could tip this scale pretty severely in favor of fusion once it becomes energy-positive. And remember, once that happens, investment is going to be through the roof, which will accelerate progress. The "scientific community" (whatever that means) is either too pessimistic about this, or you've just made this up.


"Non Radioactive Fuel" is technically correct.

But.

The fusion produces neutrons.[1] That means that at least some parts of the infrastructure can't avoid becoming radioactive. Decommissioning -- at the very least -- is still a problem.

Any actual nuclear physicists want to chime in here?

[1]Yes, the concept of "Aneutronic fusion" exists: <https://en.wikipedia.org/wiki/Aneutronic_fusion>. But read the parts about the required conditions being much more extreme than D-T fusion.


Deuterium is a stable isotope, but tritium's half-life is only 12.32 years, making it quite radioactive.

There is significant effort being put into using lithium blankets to create tritium, at which point arguably the fuels are lithium and deuterium. But that's also going to irradiate the relevant equipment.
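
The "quite radioactive" part follows directly from the short half-life. A quick calculation of tritium's specific activity (standard physics, sketched here for convenience):

  import math

  # Specific activity A = (ln 2 / t_half) * (N_A / molar mass).
  N_A = 6.022e23           # atoms per mole
  molar_mass = 3.016       # g/mol for tritium
  t_half_s = 12.32 * 365.25 * 24 * 3600  # 12.32 years in seconds

  decay_constant = math.log(2) / t_half_s
  activity_per_gram = decay_constant * N_A / molar_mass
  print(f"~{activity_per_gram:.2e} Bq per gram")  # on the order of 3.6e14 Bq/g

That is hundreds of terabecquerels per gram, which is why even small leaks matter, although as noted elsewhere in the thread the decay is a weak beta with no gamma.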


Quite right. I stand corrected w.r.t. my inaccurate statement about Tritium.


Tritium is worth $30,000 a gram.


Yea, though only about 400 grams are used worldwide per year, mostly for self-illuminating signs. That adds up to only 12 million dollars per year of demand, which is what results in such high prices.

At scale it’s significantly cheaper to produce.


Lithium carbonate Li2CO3 is worth $6.75/kg

https://www.fastmarkets.com/commodities/industrial-minerals/...


Tritium is in the fuel, and it's quite radioactive.

A 1 GW(e) reactor would burn enough tritium in a year that, if that quantity were to be released into the environment, it could contaminate 2 months worth of the flow of the Mississippi River above the legal limit for drinking water.

Tritium containment at a fusion reactor will have to be damned near perfect. This will be a major problem, as tritium permeates through all sorts of things (for example, plastic seals on containment penetrations cannot be used).


Gaseous tritium, being significantly lighter than air, tends to end up in the upper atmosphere rather than contaminating the local environment. As it's a beta emitter, you could have a glass jar of the stuff on your desk for years without issue.

Unlike fission power plants, there isn't going to be years' worth of the stuff on site. For one thing, roughly 5% of it decays per year just sitting around, so you want a tight loop of production to consumption. Further, the reactor is holding low-density plasma, so there is very little inside at any one time.


> Gaseous tritium being significantly lighter than air tends to end up in the upper atmosphere

That's not how it works. A low molecular weight gas anywhere below the homopause remains well-mixed, and does not separate by molecular weight. On Earth, that's anywhere below 100 km altitude.

https://en.wikipedia.org/wiki/Homosphere


Yes, but that's not what I am referring to. 100 km is a long way up. A large release of tritium would first quickly gain altitude while at high concentration, before mixing with other atmospheric gases, for the same reason hot air from a fire initially rises as a column. After that it's going to keep mixing with other atmospheric gases and spreading through literally billions of cubic miles of atmosphere around the globe, because unlike fallout it can stay in the atmosphere.

As a beta emitter it's blocked by just a few feet of atmosphere, thus rendering the bulk of it harmless. Eventually, some will combine with oxygen and end up as water, but again most of that just ends up in the ocean.


The quantity of tritium we're talking about here is about 100 kg. That will rapidly be mixed into air within a short distance of the plant. It's not going to form some coherent blob that can penetrate 100 km of atmosphere.

Once in the troposphere, hydrogen of any kind will be oxidized to water within a couple of years, and then rain out. Of more concern would be accident processes that would cause it to be oxidized immediately. For example, any fire or exposure of hot materials to air would cause associated tritium to react.

This is all unrealistic anyway, since the plant will not have 100 kg of tritium on hand at any time. That's about the amount consumed in a year, but the reactor could not afford to have any substantial amount sitting around, decaying, or else the breeding ratio will be too low.


I think we have mostly been talking past each other as I don’t disagree.

Anyway, here is a more detailed description of what I was expecting. At least initially it's going to act just like hot air. Hydrogen is 7% the density of air, so tritium is presumably about 21% the density of air, which is similar to air at 2000°F without the particulate matter of smoke.

About 100 kg should be roughly 500 cubic meters depending on temperature, and even a more reasonable limit of ~10 kg is still close to 50 cubic meters of gas. If we are talking about a sudden release from, say, a pressurized tank rupturing outside, that's going to form an invisible but mushroom-shaped blob and rise. Where a detonation's mushroom cloud stops rising as the temperature cools, this thing only slows as it mixes with air, which isn't that fast. The troposphere is only ~8 miles up, so it's likely to reach the stratosphere mostly intact.
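
Those volume figures are easy to sanity-check with the ideal gas law, assuming T2 gas at about room temperature and atmospheric pressure (both assumptions):

  # Volume of tritium gas (T2) at ~1 atm and ~20 C, via PV = nRT.
  R = 8.314              # J / (mol * K)
  T = 293.0              # kelvin (assumed)
  P = 101_325.0          # pascal (assumed)
  molar_mass_T2 = 6.032  # g/mol

  def volume_m3(mass_kg):
      moles = mass_kg * 1000.0 / molar_mass_T2
      return moles * R * T / P

  for mass in (100.0, 10.0):
      print(f"{mass:5.0f} kg of T2 -> ~{volume_m3(mass):.0f} m^3")

That gives roughly 400 m^3 for 100 kg and 40 m^3 for 10 kg at these conditions, the same ballpark as the figures above.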

If we're talking about a venting pipe or something that releases gas more slowly, then you get much faster mixing. However, barring the slowest of leaks, we are still likely talking about going up hundreds of feet at a minimum, and more likely miles, before it disperses enough to act like the rest of the atmosphere.

In the absolute worst case, you still get a lot of vertical mixing of the atmosphere from thermals and rapid dispersion from the wind, which doesn't slow down. Within days you're talking thousands of cubic miles of atmosphere. So a short-term evacuation of those downwind might happen, but they should be able to return in days.


My best guess is the latest "nuclear energy is failing because it's expensive" trend on HN is to blame for this outlook. My take is that it's all poppycock. The ability to make our own little sun is, IMO, transformational for humanity.


Fusion's primary accomplishment will be to make fission look economical in comparison. The fundamental reasons for this have been known for decades.


The cost of the power plant can easily dominate the cost of the fuel. I build power plants fuelled by waste industrial heat (10-20 MW scale) and even though the fuel is all but free it can be tough to identify economically viable projects.


> Unlimited, non-radioactive fuel

Nuclear fusion in the existing designs absolutely needs Tritium as one component of fuel, and Tritium can so far only be produced in Uranium reactors, in very small quantities and at an extremely high price.


It was my understanding that fission and fusion both use the same Nuclear Power Plant sized facility.

Fusion just replaces the radioactive bit; you still need steam/electricity conversion/transmission/cooling... It's not like a suitcase you can plug wires into; you still need a massive 'factory' to make electricity, just the one bit is safer.


I am really not buying the fact that nuclear is expensive because it's nuclear. I think the fears around it, the overregulation and opposition to it make it costly. Obviously, there have been plenty of plants that were profitable.

Also, nuclear fusion is possibly the cleanest energy source we could get. If we touch this, we might have a real path forward.


That just isn't the case. Nuclear (fission) has always been the single most expensive way to generate electricity, and the expense has only increased. Uranium is crazy expensive. Security is very expensive. Spent fuel storage effectively never stops costing, and it isn't ever cheap. If nuclear was simply expensive because of regulations, investors would find a way, and you could not beat them away from building nuclear power plants. But that just isn't where the expense lies. Today it costs $20B to build a nuclear power plant, and that does not include the cost of spent fuel storage or decommissioning. Investors are actually pretty shrewd to stay away from that kind of an investment that always loses. The idea of "electricity too cheap to meter" simply never materialized. Today, electricity generated from solar power is cheaper than electricity generated from nuclear power. The main reason the 110 or so commercial nuclear plants were ever built is that the US military vastly overestimated its need for fuel for bombs.


That's not accurate at all. Uranium itself is much cheaper than coal or any other fuel per unit of energy. Some countries have permanent, final storage for nuclear waste; it does not cost any more once created.

Furthermore, not only was solar power borderline non-existent when those reactors were built, it is also still intermittent. You are just glossing over the biggest challenge of energy: balancing the power grid. No one needs energy if it's only available at the wrong time. Energy storage multiplies the cost of renewable electricity several times over, and no country-scale grid has ever operated on wind and solar alone.

Lastly, energy is actually cheap; you can see that because we can afford to transport a pair of jeans 4 times across the world in the process of manufacture. We could have had a zero-carbon grid since the 70s with nuclear, and France did. Even though France has the cheapest energy in the EU, suppose energy were 30% more expensive. So what? We would be so much better off in terms of climate change.


Nuclear has the opposite problem as solar in that it needs high uptime 24/7 to stay reasonably economical. You can easily design a grid following reactor as seen with nuclear subs etc, it just doesn’t save you any money.

24/7/365 grid-scale, battery-backed and thus load-following solar runs about 8c/kWh*, which is cheaper than nuclear at high utilization. Sure, France's model of importing and exporting significant chunks of electricity allowed them to ramp up nuclear, but they were exporting power at a loss, and utilization still fell into the 80% range.

*Excluding the most northern and southern areas.


Excluding the most northern areas is a pretty big issue. Most of Europe is north of 45 degrees north... Yes, Hawai'i and California can be supplied with solar panels cheaply; the UK can't.


"Can't" is really just a cost question. 8.2 percent of Germany's gross electricity generation comes from PV solar, and its southernmost point is 47°17'N. IMO, that was an over-investment, but that's their choice. Most people in Canada live south of that, so we are not talking about that many people.

We are really talking about a handful of countries. The Nordic countries have cheap alternatives in hydroelectric, wind and geothermal energy. Nuclear may have a few niche applications for northern islands etc, but that’s not really significant globally.

PS: 4% of the UK's electricity comes from solar, and its southern tip is actually quite decent for solar.


I'll respond to your straw man. It isn't about the energy within the thing; the exorbitant cost of uranium is in mining and refining. No one said anything about coal, but it doesn't need refining and is cheaply mined... but you've also made a false equivalence, because coal isn't the only competition to nuclear. The sun just shines, the wind just blows, water just flows, and geothermal heat just comes up, without any investment or refinement.

If just 10% of the resources poured into nuclear development, which is a cost no one that is pro nuclear wants to tally into the bill, were instead invested into solar energy, nuclear power would not have been able to compete with solar power by 1980.

Nuclear power isn't cheap, unless you ignore the insane R&D that was paid for by taxpayers by government mandate and never paid back, unless you ignore the massive cost of construction long before one watt of power is produced, unless you ignore the massive cost of decommissioning, and unless you ignore the never-ending cost of spent fuel storage. It is entirely absurd that you believe that once a spent fuel storage facility is built, the costs just disappear. The costs never go away. Maintenance. Security. Testing. It isn't free and it isn't cheap.


Two questions:

You keep repeating this idea that uranium is expensive; by what metric? Where do you get the idea that it's expensive?

If nuclear is so expensive, why does France have the cheapest electricity in the EU, while Denmark and Germany, which have invested in renewables, have expensive electricity?


Looks like I was simplifying. Uranium prices peaked around 2010 at $135/lb, and today it is only $35/lb. The break-even point for mining uranium is about $50/lb. So the problem in 2010 was that uranium was crazy expensive, but the problem today is that it is so cheap it can't be mined for profit.

But you raise a decent point about what metric we should choose... I meant the cost of the stuff, but there are other metrics, such as clean up costs, because mining uranium is not clean. There is also a human health cost to populations within the proximity of the uranium mine.

French electricity is likely cheap because the French tax payers already picked up the cost of constructing spent fuel storage facilities and power plant construction, and they will ultimately shoulder the burden of the cost of decommissioning. This is just an educated guess, because that is usually how nuclear economics work. Otherwise, the investors that run the plants and sell the electricity would not be interested.


4 words: Thorium Molten Salt Reactor (LFTR)


> Uranium itself is much cheaper than coal / any fuel per unit energy.

well, that's silly: sunlight is cheaper than uranium.


Uranium and even enriched uranium is still reasonably cheap. Fuel rods get expensive, and more critically, only using up ~5% of a fuel rod before you need to replace it gets even more expensive. Other reactor designs can use a higher percentage of the uranium without reprocessing, but they have other issues. In the end, fuel including waste disposal represents about 10% of a nuclear reactor's operating costs, or ~1c/kWh. It's a significant but hardly game-breaking cost.


You don't need as much containment as you do for fission, because you aren't dealing with similarly large amounts of highly radioactive isotopes. Consequently, the safety standards required are closer to those of fossil plants than of nuclear plants, and these safety standards are really what drives the cost of nuclear fission. The steam/electricity conversion/transmission/cooling is also present in fossil plants, and those are significantly cheaper than fission plants.


It’s not that simple.

Fission's primary form of shielding is generally large pools of water or other coolant, which don't directly become radioactive. Fusion, on the other hand, needs to maintain a near vacuum, so your pressure vessel is under heavy neutron bombardment. However, small amounts of radioactive material get dissolved in a fission plant's water, which then goes on to contaminate the primary coolant loop, which increases decommissioning costs. A fusion reactor's primary containment vessel becomes extremely radioactive, and all the remote-handling equipment also needs decontamination, but it's unclear if the primary coolant loop will need similar types of decontamination.

And by small amounts, I mean that divers occasionally go into the same pools that store years of spent fuel rods. https://en.wikipedia.org/wiki/Spent_fuel_pool

Running the numbers, the real difference is that fission reactors need more protection from the outside world and containment for a potential meltdown: thick, though still fairly cheap, walls which generally don't become radioactive. They last for 50 years and don't actually cost that much to construct. Fusion, however, is a vastly more complex device, which will also increase construction and decommissioning costs.


In ARC the containment is not under vacuum; the reactor chamber is surrounded by a molten salt of fluoride and lithium, so the neutrons produced travel into the molten salt, where they collide and produce tritium.

It would help to actually watch the presentations. They've solved a lot of engineering problems, from routine maintenance to blanket renewal.


ARC reactors do use high-temperature plasma in a near vacuum. "The confinement time for a particle in plasma varies with the square of the linear size, and power density varies with the fourth power of the magnetic field,[2] so doubling the magnetic field offers the performance of a machine 4 times larger."

https://en.wikipedia.org/wiki/ARC_fusion_reactor
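
Taking the quoted scalings at face value (confinement ~ size^2, power density ~ B^4), a one-liner shows the equivalence the excerpt is describing; this is a naive reading of those scalings, not a design calculation:

  # Naive reading of the quoted scalings: doubling B gives 2**4 = 16x the power
  # density, which corresponds to a machine 4x larger since 4**2 = 16.
  B_ratio = 2.0
  equivalent_size_ratio = (B_ratio ** 4) ** 0.5  # invert the size^2 scaling
  print(f"{B_ratio:.0f}x field ~ performance of a machine {equivalent_size_ratio:.0f}x larger")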


Just the little bit responsible for Chernobyl and Fukushima is not a part of the design at all. No big deal. :-)


I am really tired of this line - the tsunami killed 18,500 people. The reactor incident killed no one through radiation and 32 people through physical injuries.

One failure of Banqiao Dam killed an estimated 240,000 people. That's more than all people who have ever died from anything to do with nuclear, reactors and bombs combined.

Air pollution kills about 2,000 people every single day.

Reactor incidents are like plane crashes - they get attention. Fossil fuels are like car crashes - they kill more people every day and no one gives a shit.


Killing people is hardly the only issue. Rendering large tracts of land uninhabitable is extremely expensive. All told 4 nuclear reactors have had major issues, 2 subs and 1 power plant run by the USSR, and then another one very recently by Japan. Plus several near misses.

That's a significant percentage of total reactors ever built, including what was considered a safe design. We could go 1000 years without another incident, but from an insurance standpoint, what would you charge a new power plant next to NYC? That means you need them in a less expensive area, but everyone feels their area is valuable. That causes vast NIMBY issues and heavy regulation.

In theory modern Nuclear should cost less and be both clean and safe, but people gonna people both inside and outside the industry.


> Rendering large tracts of land uninhabitable is extremely expensive

Have you ever been near a coal ash pond? You probably haven't because it is an extreme health hazard to get anywhere near it, as it is full of mercury, arsenic, heavy metals and occasionally radioactive slurry.

There are about a thousand of these ponds in the US alone, totaling maybe 100,000 acres. Meanwhile all the nuclear power plant waste ever produced could fit into a single large hangar...


Every nuclear accident was completely avoidable, but it’s not clear if future plant operators can avoid making similar mistakes.

The Chernobyl exclusion zone is 1,000 square miles. Fukushima had a much smaller exclusion zone, but estimates of the radioactivity released ranged from 10-40% of that of Chernobyl, and the significantly contaminated area was 10-12% of that of Chernobyl.

On 12 October 2012, TEPCO admitted for the first time that it had failed to take necessary measures for fear of inviting lawsuits or protests against its nuclear plants. That's the core issue, not physics. A 2008 in-house study identified an immediate need to better protect the facility from flooding by seawater. This study mentioned the possibility of tsunami waves up to 10.2 meters (33 ft). Headquarters officials insisted that such a risk was unrealistic and did not take the prediction seriously. The U.S. Nuclear Regulatory Commission warned of a risk of losing emergency power in 1991 (NUREG-1150), and NISA referred to that report in 2004, but took no action to mitigate the risk. https://en.wikipedia.org/wiki/Fukushima_Daiichi_nuclear_disa...

France and the US have a solid nuclear track record, but so did Japan.


I'm really tired of this line as well. Had they failed to contain Chernobyl, much of Eastern Europe would be contaminated for thousands of years.

It is true that today's reactor designs are much safer than the RBMK, but I will prefer the reactor that can't go supercritical to one that can any day, especially if it's nearby.


> but I will prefer the reactor that can't go supercritical to one that can any day, especially if it's nearby.

Newer designs can't go supercritical.


Tsunamis and earthquakes are not man-made events, so we can dismiss them. Yes, they killed a lot of people, but cancer kills even more: about 10 million per year. In part, cancer is caused by contamination of food by radionuclides from reactor leaks (Chornobyl) and nuclear bomb testing.


> In part, cancer is caused by contamination of food by radionuclides from reactor leaks (Chornobyl)

Sorry, this is just misinformation. Residual radiation worldwide from nuclear testing or Chernobyl is minuscule. We wouldn't even be able to detect anything if we didn't have incredibly sensitive instruments.

The sun is a much larger daily source of radiation. Or a banana.


Radiation from Chornobyl affected a large area, but not the whole world. People die from radiation-induced cancer in the affected areas only, of course.


Yes, cancer did not exist before nuclear. We can blame cancer on nuclear plants and bombs. That's quality commentary.


Radionuclides are a major contributor to the growth in cancer rates.

Quote: Radioactive fallout from nuclear weapons tests probably caused 17,000 cancer deaths in the United States in the latter half of the 20th century

http://news.bbc.co.uk/2/hi/americas/1849471.stm


"probably" 17000 in 70 years against 700 000 000 total cancer death. Pack it up, boys.

And if anything, that shows how bad nuclear weapons are, not the power plants.


The 17,000 is a US number, and the US is a long way from 10 million cancer deaths per year. The point stands fine without using misleading numbers.


There are not many countries that did nuclear weapons tests, and even those that did, did not do them in such numbers. Maybe Russia, but I doubt that would push the numbers up much.


I believe that coal releases more radioactivity into the environment than nuclear energy.


Quote:

McBride and his co-authors estimated that individuals living near coal-fired installations are exposed to a maximum of 1.9 millirems of fly ash radiation yearly. To put these numbers in perspective, the average person encounters 360 millirems of annual "background radiation" from natural and man-made sources, including substances in Earth's crust, cosmic rays, residue from nuclear tests and smoke detectors.

https://www.scientificamerican.com/article/coal-ash-is-more-...


But the tsunami didn’t render large areas of land uninhabitable, as in the case of Chernobyl and Fukushima.


The nuclear incidents didn't, either. Decontamination of Okuma has been successful and residents are already returning.

Pripyat is an animal and vegetation haven. Really shows how dangerous our species is to the planet.

Point is, nuclear power plants, even the old ones, are nowhere near as dangerous as the media portrays them.


The Chernobyl exclusion zone is 30km in radius. It’s not at all clear that serious effects do not extend beyond that, for instance to agricultural lands in Belarus.


Make an efficient fusion reactor the size of a truck, and the solar system becomes your backyard.


Make an efficient fusion reactor of any size, and you can power Fischer-Tropsch plants to synthesize carbon-neutral hydrocarbon fuel for the existing fossil-fuel infrastructure.


I really hope that phase is as short as possible and we don't use it as an excuse to prolong the use of burning crap to generate usable energy. Air pollution is gross.


Screw that. Fossil-fuel infrastructure is smelly and noisy. That's my first concern, I don't even need to "care about the planet" to hate fossil fuel.


I've never heard of this. How can a hydrocarbon fuel be carbon neutral?


The problem is not that we are burning hydrocarbons per se. The problem is rather that these hydrocarbons were extracted from the ground and introduced into the short-term carbon cycle. The total amount of carbon in that cycle rises, which leads to shifts like an increasing concentration in the atmosphere and the oceans. These higher levels of carbon dioxide in the atmosphere let less heat escape back into space (the atmosphere is normally somewhat transparent at the affected wavelengths, but carbon dioxide can absorb there, trapping additional heat in the atmosphere, see [0]), leading to additional global warming (water vapor also traps heat, and made Earth suitable for life as we know it in the first place). Higher levels of carbon dioxide in the atmosphere also lead to more carbon dioxide being absorbed by the oceans, which in turn acidifies the water.

We could extract carbon dioxide from the air, turn it into hydrocarbons and burn those all day/year/century long; producing them from fossil sources is what makes it so bad.

[0] https://en.wikipedia.org/wiki/Greenhouse_effect#/media/File:...


You take CO2 out of the air and turn it into oil / methane / elemental carbon using vast amounts of energy and some water and catalysts.

You then burn it, returning it to the atmosphere.

Net CO2 impact = zero.


In a nutshell: you take carbon dioxide from the air and hydrogen produced with a carbon-neutral power source, and reverse the process of burning oil. Result: hydrocarbon and oxygen. It costs a fair amount of energy and is comparatively inefficient and costly, but it is carbon neutral, since burning the produced hydrocarbon releases only as much CO2 as you used to produce it.
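
To make the bookkeeping concrete, here is one illustrative route (Sabatier methanation); the actual synthesis step people propose varies (Fischer-Tropsch, methanol, etc.), but the carbon balance works out the same way:

    capture + synthesis (energy in):  CO2 + 4 H2 -> CH4 + 2 H2O
    combustion (energy out):          CH4 + 2 O2 -> CO2 + 2 H2O

One CO2 captured per molecule of fuel made, one CO2 released per molecule burned, so the net atmospheric change is zero, provided the hydrogen and the process energy come from carbon-free sources.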


Or convert CO2 to carbon nanotubes


I like this. :) I am just amazed how many people fail to see the importance of this. It would _really_ pave the way of the future for humanity if it pans out. Pretty exciting stuff.


Can you make enough energy that you could satisfy the demands of bitcoin miners and still have enough left over for productive uses?


> massive, national-scale investments

A reminder of what could have been:

https://upload.wikimedia.org/wikipedia/commons/a/ab/U.S._his...


Yes, we could have wasted much more money back then (those crash budgets would not have worked, since they assumed tokamaks work better than it turned out they did.)


Speaking pedantically, the next decade actually starts 1-Jan-2021.


Speaking even more pedantically, a new decade is starting right now ... and here comes another one.

The 2020s start 2020-01-01 (just as the century called "the 1900s" started 1900-01-01).

The 21st century started 2001-01-01.

The 203rd decade starts 2021-01-01 -- but nobody talks about the "203rd decade". That's the difference in terminology between decades and centuries; we don't refer to decades by their ordinal numbers.

Yes, it's confusing and inconsistent. If only Dionysius Exiguus had known about zero.


Continuing the pedantalization escalation the 21st century started 2001-01-01 if you follow the Gregorian calendar "named after Pope Gregory XIII, who introduced it in October 1582"

However, if you are not fussed about Pope Greg and go with ISO 8601 or common usage, it started 2000-01-01.


> Continuing the pedantalization escalation the 21st century started 2001-01-01 if you follow the Gregorian calendar "named after Pope Gregory XIII, who introduced it in October 1582"

In the same spirit of pedantry:

Well, no, Gregory is beside the point; if you are referring to Anno Domini dates, the 21st Century (and Second Millennium) began on the first day of the year A.D. 2001, whether or not you prefer the Gregorian Calendar.

Preference for the Gregorian Calendar or not will affect when you believe A.D. 2001 started, though.


And the transition from Julian to Gregorian just changed the leap year rules. It didn't change the fact that there's no year 0. The 1st Century began in the year 1.

I don't believe anything in ISO 8601 refers to the century starting 2000-01-01 as "the 21st century" (though apparently it does, in some contexts, allow "20" to refer to the century that started on that date).

The century called "the 1900s" overlaps with the 20th century for 99 of 100 years.


> And the transition from Julian to Gregorian just changed the leap year rules. It didn't change the fact that there's no year 0.

It is not an issue of Julian vs Gregorian dates, but of ISO 8601 using different conventions. Traditionally, there were two disjoint timelines (AD/CE and BC/BCE), both starting at year 1. Within those conventions, it makes sense that the first century is 1-100 CE, the second century 101-200 CE, and so on.

ISO 8601 instead uses astronomical year numbering ( https://en.wikipedia.org/wiki/Astronomical_year_numbering ), which has signed integer year numbers including a year 0. Therefore there is no beginning of the timeline, and the most natural way to define centuries is by div 100, e.g. years 0-99 are century 0, and so on.
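
If it helps, here are the two conventions in throwaway Python (my function names, just to illustrate the difference):

    def ordinal_century(year):
        # traditional counting: no year 0, the 1st century is years 1-100
        return (year - 1) // 100 + 1

    def div100_century(year):
        # astronomical-numbering style: year 0 exists, century N covers N*100 .. N*100+99
        return year // 100

    print(ordinal_century(2000), ordinal_century(2001))  # 20 21
    print(div100_century(2000), div100_century(2099))    # 20 20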


This is not even pedantically correct.

I quote ISO/WD 8601-2:

Representation of a decade must be exactly three digits, leading zeros, if any, must be included. Thus the time interval 200 through 209 is represented as ‘020’ and NOT ‘20’; the latter would represent the time interval 2000 through 2099.

Here you can clearly see that the ISO definition of a decade is in keeping with the common understanding, not whatever oddball misunderstanding you've chosen to promulgate.


Masterfully done. :)


No one is interested in this pedantry. Save your time.


So, I'm a physicist, and over the years I went to a number of talks from people involved with the JET fusion reactor, though fusion is not my area. My understanding is that the main problem is not making the plasma, or even building the reactor itself (though instabilities can be problematic), but that the internal structure degrades very rapidly and becomes highly radioactive: you get helium bubbles forming inside the steel, which causes fractures, and the heavy metals that are often put into steel to increase strength are highly fissionable. So you need to use special types of steel to actually construct the reactor, and these need to have a lifetime of ~5 years or more without being very, very radioactive at the end of their usable life. And this is basically an unsolved materials science problem at the moment. So while you might even be able to build a fusion reactor, it's not going to last long enough to make it commercially viable using current technology.


Applied physicist here. The other thing is: Existing fusion concepts need to use a reaction between Deuterium and Tritium, otherwise it's just infeasible. Deuterium exists in nature and can be produced with some form of isotope separation.

Tritium, however, does NOT exist in nature, and can only be conceivably produced in three ways:

1. Inside thermonuclear bombs

2. In heavy-water reactors, e.g. CANDU design, which requires Uranium, so it is not fuel-independent and totally dependent on nuclear fission technology. Getting Tritium that way is also extremely, extremely expensive.

3. Possibly, in a breeding blanket of a fusion reactor. This makes fusion technology reliant on breeding, which is much more complicated and has many completely unsolved questions on the materials side. Existing fission breeder reactors are also less safe; for example, the Japanese Monju plant was cooled with liquid sodium, a highly reactive metal, and suffered a sodium leak and fire. Fusion breeder reactors will probably also require something like sodium cooling because of the high energy densities required.

And the existing research projects like ITER have not even started to address these issues - they are purely plasma physics experiments.

https://en.wikipedia.org/wiki/Monju_Nuclear_Power_Plant


>> "Tritium, however, does NOT exist in nature"

Maybe I don't understand properly, but I'm watching the video linked in a previous comment (1), and around minute 9:30 Professor Whyte says that "what comes into the plant is actually deuterium and lithium".

It seems he is saying that you need only an initial quantity of tritium, and then lithium is used to create tritium again (with helium as a byproduct). I suppose that's what you refer to as "breeding". He explains it as if it's not a big deal.

(1) - https://news.ycombinator.com/item?id=24632653


That's the breeding option, making tritium from lithium. Lithium can absorb a neutron and become helium and tritium.
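
For anyone curious, the usual reaction quoted is the 6Li branch (7Li can also breed tritium, but only with fast neutrons, and it absorbs energy rather than releasing it):

    Li-6 + n -> He-4 + T  (+ ~4.8 MeV)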


>And the existing research projects like ITER have not even started to address these issues - they are purely plasma physics experiments.

Section 3.2 Test Blanket Module (TBM) Testing Program in ITER

https://www.iter.org/doc/www/content/com/Lists/ITER%20Techni...

Look even closer at the fusion energy research community and you will find a lot of work has already gone into this problem. Assuming physicists sit on their thumbs is a bad bet.


SPARC uses molten FLiBe salt as both coolant and breeding blanket.


One of the advantages of the MIT superconducting magnet design is that it allows the vacuum vessel and neutron shielding to be lifted out and replaced wholesale (n.b. this doesn't have to be every 5 years, but it does have to be done at some point during the reactor lifetime). The ITER design would require essentially disassembling the whole reactor to do that. In both cases, the neutron-absorbing first wall will be replaced every 5 years or so, which is indeed a substantial challenge requiring robotic operations.


This++++++. Nuclear reactors are primarily bug generators for chemists and metallurgists. The things that need solving mostly boil down to dealing with the insane chemistry and metallurgical problems that arise when you have a wide variety of incredibly nasty compounds that do not normally occur, and consequently require out-of-the-ordinary solutions.


> the heavy metals that are often put into steel to increase strength are highly fissionable

Not fissionable, but many are unacceptable due to the formation of long-lived activation products.

One additional problem with the alloys used is the degradation of their mechanical properties under radiation exposure. They tend to become brittle.

Testing any of these materials will require something close to a working fusion reactor (nothing else duplicates the neutron environment), but that will require the materials. Working around this loop of circular development dependency will be time consuming.


That's a luxury problem at the moment. A fusion reactor that can sustain positive output operation for 5 years is still a dream.


Sort of, but only if you exclude the amortised cost of fabrication from your calculation - it's part of the same goal, unlike say 'they're hard to make, we can't make them fast enough to put a dent in energy production'.


It is a total show-stopper because you don't have the amount of Tritium to power such a reactor.


We have plenty of lithium. All D-T fusion designs breed tritium from lithium, using the high-energy neutrons from the reaction.


I assume very little lithium is actually needed; will the dramatically increased demand from battery manufacturing affect feasibility?


Lithium production is a bit strained but lithium resources are fairly high. At Tesla's Battery Day they said just the lithium in Nevada is enough to electrify the entire US fleet. They also plan to do their own recycling, which they expect to work well since they can customize it to their battery.

Given the amount of energy that comes from fusion fuel, even extracting lithium from seawater would probably be economical.

Beryllium is much less common so that could be an issue. But maybe we can find more beryllium if we look harder; right now we barely use it. If we can't find enough, we could probably design a reactor to use lead instead. Either one acts as a neutron multiplier, and General Fusion's design uses lead instead of beryllium, for the same purpose.


DT fusion reactors require 6Li, which is 7.6% of natural lithium. The other 92.4% is 7Li. This "depleted lithium" would work in batteries just fine.


Not disagreeing with you, but it might still be ok. I don't believe early fission reactors were cost effective either. If we build a few with heavy subsidies and create an economy around them, we may get more commercial players involved, and a lot more investment dollars chasing the problem.


The early fission reactors were to make bombs not electricity. Large subsidies available there.


I think it was clear from context that I meant fission reactors designed for electricity production. These reactors are significantly different in design from reactors used for producing weapons grade, highly enriched fissile material, and required significant investment in its own right to develop safe, commercial reactors.


My current favorite future fusion reactor project (I'm a layperson) is the Wendelstein 7-X: https://en.wikipedia.org/wiki/Wendelstein_7-X

Seems like they're meeting all their planned milestones and it's going well!

Excited for their next updates...

More from their project page: https://www.ipp.mpg.de/w7x

"Wendelstein 7-X is the world’s largest fusion device of the stellarator type."


I’m also a layperson and also a fan of the 7-X, almost on purely aesthetic grounds. The plasma shape is a beautiful twisted loop, the magnets resemble a French cruller, and with the shielding and instrumentation it has all the charm of a steampunk time machine.


To me it looks like alien technology. Such an organic shape. So many valves and openings. What is their purpose? How deliberate is their placement? Awe-inspiring.


Most of the ports are used for diagnostic equipment, things like laser interferometers to measure plasma density or other devices to measure plasma temperatures. Also some ports are used to inject neutral beams for heating.


It is a research reactor, so most of the openings are for instrumentation and ease of upgrades. A production design would have fewer ports.


To add to the others' responses on the purposes of ports, many of them are essentially vacuum pumps to maintain the near-perfect vacuum required for fusion to work.


Very negative take here: to me it looks like premature optimization, some politician's, mathematician's or engineer's wet dream; a design that by today is probably known to be suboptimal, that probably caused millions if not billions of euros of unexpected costs, and tens of thousands of hours of migraine for the people who actually have to run and maintain the thing.


Literally the primary goal is optimising the technology enough to get more energy out than energy in. Without that singular goal achieved, the field has no purpose at all. It's not like there are already workable systems and they're just focusing on pushing the levels a step further just because.


This design overcomes one of the fundamental problems with the Tokamak style fusion reactors. It's like choosing quick sort over bubble sort. And yes, the difference here matters.


As a layperson, this design seems far too complex to be useful for fast iterative research and ultimately, economic replication.

It reminds me of the principle that one shouldn't start a space travel project that is estimated to take more than x years to complete, because by that time, technological progress will have surpassed its speed and capabilities, and would physically overtake it (the 'wait calculation').


Yeah, fusion is not fast to research though, ITER isn't even built yet.

They're all extremely complicated, and while this one is harder to fabricate than a tokamak, fabricating arbitrary geometries is also something we've gotten a lot better at. If this is as complicated as it has to be in order to work as required... well, that's just physics right?

The problem is unnecessary complexity, but whether that's the case here or not is still unclear.


Maybe that's the weakness of this design. But maybe it can be scaled up.


For the German speakers, this podcast is a few years old but super interesting interview with scientist working on it: https://alternativlos.org/36/


Omega Tau has a few great (and long) episodes in English:

Episode 22 is an intro to fusion power research and tokamaks.

Episode 157 is an interview of a director at ITER.

Episode 304 interviews the author of the book you’re reading.

Episode 312 is a set of interviews with experimentalists and computational theorists at W7-X.

https://omegataupodcast.net/


Keep your ear to the ground for a medium-scale HTS stellarator in the US sometime in the next ten years.

https://wistell.engr.wisc.edu


Looks like it's down for upgrades until the end of 2021.


Having read about tokamaks for thirty years, I'd be curious what specific breakthroughs and innovations have occurred since the 1980s that led to the optimism described in the article (which is otherwise frustratingly devoid of detail).

It's great there are seven peer-reviewed articles about SPARC, but plasma was not my specialty in physics -- would any specialists care to comment on whether there is anything particularly exciting here?


My layman's understanding is that recently available (~last 5 years) industrial-scale processes to produce REBCO tape[1] are the specific advance in superconductor tech enabling higher magnetic field strengths from significantly smaller, lighter, and easier-to-manage magnets. So you can get a system that's "powerful enough" to run net positive at a size small enough to be built by a few universities rather than a few dozen nations (i.e. ITER), and the complexity of the engineering project drops from "unimaginable" to "just very high".

1 - https://www.fusionenergybase.com/concept/rebco-high-temperat...

2 - really great talk from a few years ago about this from MIT's Plasma Science Fusion Centre: https://www.youtube.com/watch?v=L0KuAx1COEk (really, if you like this stuff, give the talk a watch. It's great.)


Yep; the key technology that took magnetic confinement fusion from 'ITER will probably work' to 'Maybe we should just go ahead and skip ITER' is rebco tape.

The problem is that all known superconducting materials will lose their ability to superconduct when exposed to a sufficiently strong magnetic field. The large field strengths induce eddy currents in the material which disrupt the propagation of the Cooper pairs in its superconducting mode. The current sufficient to self-induce this field is called the critical current.

Layered rebco tape appears to shield against this effect or otherwise trap the eddy currents in a way that superconductivity is preserved even in the presence of extremely strong fields. The critical current in REBCO tape is enormously higher than in previously known materials or winding configurations.

Obviously, the bigger the reactor volume is, the smaller the magnetic field needed to steer and confine the plasma within it can be. So back when they were designing ITER, engineers figured out the strongest magnet they could make, then made the reactor as small as they could (lol) while that magnet could still sustain confinement.

However, now that we have stronger magnets, we can make reactors smaller. That is even something of a gross understatement, as the relationship is cubic: for a doubling in field strength, the reactor can be 8 times smaller. That effect is very meaningful when you consider that you are essentially talking about shrinking something the size of ITER's 28m main reaction vessel to something that might fit into a garage.
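
Spelling out that arithmetic (with the caveat that the exact exponent depends on which performance figure you hold fixed; this just takes the cubic relationship at face value):

    if the required linear size scales as R ~ 1/B, then volume V ~ R^3 ~ 1/B^3
    doubling B:  V -> V / 2^3 = V / 8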

I'm not really any kind of expert on this, so please treat this explanation as very simplistic.

I have not yet personally seen anything about this new lattice confinement modality that seems to give me anywhere near the same level of confidence that they will see viable applications compared to the magnetic confinement approach. (NIF already stood down their tries at laser inertial confinement) Maybe someone has some good insight on whether or not this is all still speculative fanfare or if researchers are finding real meat.


Would it be feasible to upgrade ITER into a much more powerful plant with the new magnets, or are they basically spending another 15 years building a huge thing that is already obsolete?


ITER is not obsolete even if better magnet designs are now possible. It's an incredibly huge and complex project with many critical details. It has literally spawned tens of thousands of research papers. It's a pathfinding project with far more value than just this one aspect.


It's probably not feasible to upgrade it but apart from the magnets a lot of problems being solved for ITER (especially materials, tritium handling, and remote operations) will be useful for any Tokamak and indeed for any magnetic confinement fusion device.


Better link on ReBCO (in my opinion) https://nationalmaglab.org/magnet-development/applied-superc...

They were specifically looking at the tape's ability to function in high-field magnetic environments such as a tokamak-type reactor.


This is the correct answer. A key reason Rebco is so much better than older alternatives is that it can be cooled to superconductivity using only liquid nitrogen (77K), as opposed to liquid helium, which is much harder to work with.


Nit: ultimately the new magnets need to be stronger for the same volume no matter how easy the cooling is to work with. That part is just a nice bonus.


2 - skip to ~2:30 for the speaker, who's mic'd, and the sound greatly improves.

Good job recording MIT.


Awesome talk thank you for sharing.


Superconductors recently got much better in a way that has cubic effect on reactor size.


I know essentially nothing about superconductors, but if they all have essentially zero resistance, what makes one better for this application? I thought the main advantage of high-temp superconductors was using liquid nitrogen instead of helium.


All superconductors superconduct, but there's wide variation in how strong of a magnetic field they can endure before leaving the superconducting state: https://www.researchgate.net/figure/Upper-critical-fields-an...

ITER (and the LHC) used NbSn magnets, since it was designed in the 90s. Newer designs will use newer superconductors.


I want to emphasize this beyond upvoting.

Superconductors are like magic, but they aren't actually magic. They have limits.


Think of it as the amount of energy you can store in a superconductor.

This is talked about in terms of the ‘phase diagram for superconductors’

http://www.supraconductivite.fr/en/index.php?p=supra-levitat...

As you put more current in the superconductor, eventually you break it and it leaves the superconducting state; likewise as it gets hotter, and likewise as the magnetic field gets stronger.

You want the largest ‘area under the curve(s)’ to handle more and more ‘superconductivity’

New materials are enabling this, which has the cubic effect mentioned above for reducing confinement volume, making things more affordable, increasing the energy densities that can be contained, etc.
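
A toy sketch of that trade-off, with made-up numbers (not real REBCO data), just to show why running well below Tc buys headroom in both current and field:

    def critical_current_density(T, B, Jc0=3e10, Tc=90.0, Bc2=100.0):
        # crude linear-in-T, linear-in-B model of a critical surface (A/m^2);
        # parameters are illustrative only
        if T >= Tc or B >= Bc2:
            return 0.0
        return Jc0 * (1.0 - T / Tc) * (1.0 - B / Bc2)

    print(critical_current_density(77.0, 20.0))  # liquid-nitrogen temperature, 20 T
    print(critical_current_density(20.0, 20.0))  # ~20 K, same field: much more headroom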


High temperature superconductors provide more stable resistance-free conduction than traditional superconductors. If both are cooled to liquid helium temperatures, that increased stability translates to being able to sustain a higher magnetic field. Higher magnetic fields provide tighter plasma confinement and higher density, which allows for much smaller fusion reactors.


They still saturate. High-temperature is nice, but a separate consideration.


HTS coils would still be run with liquid helium for higher critical current and higher magnetic field.


So you get the plasma spinning and you have two problems: containment and instability. Better magnets and faster control systems are the two big things that (in my layman's-bad-at-physics understanding) have really advanced in the last decade.


The intense non-linearity of magnetic phenomena is a source of major issues here.

Why can't natural laws be more simple and linear? (joking, sure, but it does feel that way sometimes)


It often times boils down to turbulence.


"When I meet God, I am going to ask him two questions: why relativity? And why turbulence? I really believe he will have an answer for the first."


I never quite understood the math behind power densities in a fusion reactor.

In the sun, isn't energy production occurring at something like 100-1000 W/m3? So, if you want to build a multiple MW fusion plant, shouldn't these plants be ridiculously huge compared to, say, a wind turbine rated at a couple of MW?

Is the density of the plasma so much higher in a fusion reactor?

Also, something else I never grokked, how do you get the power out? The plasma heats up, but how do you turn that into useful electrical energy?

Nevertheless of course I hope it does work as advertised... someday.

Edit: thanks everyone for the thoughtful, insightful replies!


The density is actually lower than the sun in magnetic confinement fusion (MCF) devices because we can’t squeeze plasma together as hard as the sun’s mass can. Inertial confinement fusion (ICF) can squeeze harder than MCF devices, but has serious unaddressed engineering issues.

The trick is in higher temperature plasma. The sun fuses protium (lone protons). We don’t have the confinement necessary on Earth to do this, so we fuse deuterium (1p+1n) and tritium (1p+2n). This reaction is more energetically favorable and is achievable on Earth. Coupled with giant microwave ovens and clever geometry and electromagnetic tricks, we can make plasmas much hotter (faster moving particles) than the sun can.

Once a plasma is fusing, it emits a lot of heat (alpha heating and fast neutrons). A plasma that requires no external heating (no microwave ovens) is said to be “ignited”. We don’t necessarily need or want ignition to have a successful reactor, but it’s a cool thought.

The major trouble with fusion reactors is keeping particles in the bottle long enough to fuse. Since they’re leaving anyway they have to go somewhere. You can tune vessel geometry and magnetic fields to have designated strike points where most of the plasma will exit confinement. These are called divertors. Run some coolant through your divertors and you have a heat source that can boil water and spin a turbine.

Here my knowledge gets shaky because I know that the fastest particles coming out of a D+T reaction are neutrons (they weigh much less than an alpha particle). Since neutrons are electrically neutral I think they are much less likely to become thermalized (they are not likely to bump into another particle on their way out). I’m not sure how neutron thermalization happens in reactor simulations, but I’m under the impression that it does.


The trick is that deuterium (and tritium) are vastly more reactive than ordinary protons. The fusion of the latter is extremely slow, because it involves the weak nuclear interaction to convert a proton to a neutron.


Lithium blankets trap neutrons. Sadly, the reactor chamber becomes radioactive (but it's short-lived radioactivity that only requires storing the contaminated chamber for decades).


Project Rho has lay-person-accessible detail on the types of fusion reactions.[1] Well worth a read. The takeaway is: if we can't do D-T fusion, then don't even think about any of the other kinds. They're orders of magnitude harder.

1. http://projectrho.com/public_html/rocket/fusionfuel.php


The temperature at the center of the sun is about 15 million kelvin. The plasma temperature of a fusion reactor is around ten times higher. Also, a reactor uses deuterium–tritium (D–T) fusion, while the sun uses hydrogen fusion. D–T requires less confinement and is more energetically favorable. If D–T is like setting off a firecracker, then hydrogen fusion is like igniting a damp log.


I chuckled/was staggered when I heard that the power density in the Sun's core is approximately the same as a compost heap! (albeit a rather large one...)

https://en.wikipedia.org/wiki/Solar_core#Energy_conversion


Even at the center of the sun's core, the power density doesn't go above 300 W/m^3. At the core boundary it's < 10 W/m^3, and it's < 2 W/m^3 overall.

Terrestrial reactors try for much higher reaction rates. But most designs still have power density issues that would make them uneconomical even if they could make net power. Net power is actually a very low bar, corresponding to an EROI of 1.
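
A quick back-of-the-envelope check on the "overall" number, using round values for the solar luminosity and radius:

    import math
    L_sun = 3.8e26                        # W, approximate total solar luminosity
    R_sun = 6.96e8                        # m, approximate solar radius
    V_sun = 4.0 / 3.0 * math.pi * R_sun**3
    print(L_sun / V_sun)                  # ~0.27 W/m^3, comfortably under 2 W/m^3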


>The plasma heats up, but how do you turn that into useful electrical energy?

I appreciate that many people are commenting 'you couple the plasma to a working fluid', but I think the original comment was more along the line of how you couple a confined plasma to a working fluid. By definition the plasma is in a hard vacuum, magnetically bottled. What, then, is the coupling method? Thermal photons escaping confinement? I genuinely have no idea myself, but would really like to know.


You're right, I was curious about capturing the fusion products in some way and still being able to extract useful energy.

I looked up the effects of neutron radiation on materials[]. Sounds like a hell of an engineering challenge to come up with a robust way of getting that energy out!

Radiation damage to materials occurs as a result of the interaction of a [neutron] with a lattice atom in the material. The collision causes a massive transfer of kinetic energy to the lattice atom, which is displaced from its lattice site, becoming what is known as the primary knock-on atom (PKA). [...] The magnitude of the damage is such that a single 1 MeV neutron creating a PKA in an iron lattice produces approximately 1,100 Frenkel pairs.

[] https://en.m.wikipedia.org/wiki/Neutron_radiation


> https://en.wikipedia.org/wiki/Monju_Nuclear_Power_Plant

Yep. And anything touched by the neutron flux would become radioactive if it captures neutrons. For example steel, normal steel contains carbon, carbon captures neutrons, so the steel becomes radioactive, and also brittle. And then you need wiring and insulation and coolant and pumps and all that.


80% of the energy from DT fusion comes out as neutrons, which enter a blanket and deposit energy there by collisions and nuclear interactions.
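
For what it's worth, the ~80% figure falls straight out of the standard D-T reaction energetics:

    D + T -> He-4 (3.5 MeV) + n (14.1 MeV)
    neutron share = 14.1 / (14.1 + 3.5) ~ 0.80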


> which enter a blanket

So, what is that blanket made of?

It needs to be a material which:

- can withstand high temperatures and radiation

- does not become radioactive itself

- facilitates production of Tritium by breeding

- operates in very high magnetic fields, which means high forces

No such material is known so far.

It is a bit like some engineer from 1700 said: "Well, you could just build a more efficient and compact and light steam engine, and connect it to a machine which has flapping wings, and then you have a transport vehicle which can carry people across the Atlantic ocean at supersonic speed, and at little cost."


Not such a bad analogy, as the blanket will get hot, and the heat will be used to turn water into steam, which will power generating plant. Just like in the 18th and 19th centuries.

It would have had to have been an unusually well informed and far-sighted engineer though, as Savery's engines first worked in 1698 and the first for-sale commercial engine, Newcomen's, wouldn't happen until 1712. And they were incredibly inefficient: 0.01% to 0.1%.


Thank you!

Do you know how far along are we in understanding how to build an effective blanket that can withstand the neutron flux while maintaining its physical integrity?


It's not a solved problem (since really testing a wall design requires a working high power source of fusion neutrons), and it's not clear it can be solved in a sufficiently practical way. Indeed, the ability of the first wall to withstand the energy flowing through it will likely cause DT fusion reactors to have low volumetric power density, with sad effects on their economics.


Thanks again for the reply.


SPARC's approach is for the blanket to be a molten salt, and also function as a coolant. That will surround an inner wall which gets replaced annually. They tested joints in the REBCO superconductors, so the reactor can be opened up on hinges.


You're correct, that's a good ballpark for the average heat generation rate in the sun. However, 99% of heat generation in the sun occurs in the core (approximately the central 25% by radius), and this region is much more energy dense. Further, fusion reactors could achieve an even higher volumetric heat generation rate than the sun's core does.

To collect energy, heat would be transferred to a working fluid (e.g., molten salt) by exposing that fluid to the hot plasma. Then the working fluid would be used to boil water and spin a turbine.


There's something that feels so archaic about using a nuclear reactor to... boil water and spin a turbine. It's disappointing that we haven't figured out a better way to convert to electrical energy than what we were using in the 1800s.



We've had solid-state thermoelectric generators for a while that convert heat flux directly to electricity; they're just really inefficient (only 5-8% efficient). [0] I can't find anything suggesting a fundamental limit to the efficiency, but it would take a lot to catch up to steam-powered generators at 33%-48% efficiency [1].

[0]: https://en.wikipedia.org/wiki/Thermoelectric_generator#Effic...

[1]: https://en.wikipedia.org/wiki/Steam-electric_power_station#E...
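
One fundamental ceiling that does apply to both (thermoelectrics are heat engines too) is the Carnot limit; a rough sketch with illustrative temperatures:

    T_hot, T_cold = 873.0, 300.0     # K: roughly 600 C steam vs ~27 C ambient
    print(1.0 - T_cold / T_hot)      # ~0.66, so 33-48% steam plants already sit well below it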


I'm not well versed in physics, but when I think about the fact that electric motors haven't changed all that much it sort of makes sense.

Inducing a current by moving something in a magnetic field, or vice versa, may seem like simple designs, but perhaps that's why we haven't replaced them yet. Rotational motion seems easy to service and efficient.

Same with water being particularly easy to turn into a gas using heat, as well as being super plentiful on Earth. There are just a lot of things going for a steam turbine.


Water is an amazingly capable working fluid. Just because it's ubiquitous doesn't mean it doesn't have remarkable properties!


In fact, its ubiquity is no coincidence and can be explained by the anthropic principle. The human body is a chemical machine too.


> It's disappointing that we haven't figured out a better way to convert to electrical energy than what we were using in the 1800s.

We haven't found a better way to convert heat to electrical energy. We've done great things with solar, kinetic, gravity, etc.

I think the better analysis is to chart the efficiency of the conversion over time. I couldn't easily find a chart showing this, but I assume gigawatt scale turbines operating at 50%+ efficiency are modern engineering marvels compared to the earliest 7kW prototype made by Charles Parsons in 1884.


The reason that steam still drives turbines (as opposed to having the salt drive them directly) is that it’s hard to engineer blade materials that will withstand temperatures much higher than superheated steam.


Photovoltaics (solar panels) don't rotate turbines.


I recommend the book "Five Equations That Changed the World"; it gives you an idea of how the modern world works and how much of that is a direct influence of math. For example, how electricity is generated using dynamos.


The sun "burns" ordinary hydrogen. The first step in this proton-proton-chain is the reaction of two protons to a deuteron, which requires conversion of one of them into a neutron. This is an interaction of the weak nuclear force, which has very small range, so it is slow. Reactions considered for terrestrial fusion don't require the weak force, and consequently won't run on pure hydrogen.


You boil water with the neutrons/heat it creates.


So in the end it's a glorified boiler!


As far as I am aware, solar power is the only prevalent power source that does not derive from spinning a dynamo. Besides wind (and perhaps there, too), almost all of the force used to spin dynamos involves some kind of boiling (e.g. dams happen to use the force of the water after the solar-driven "boil"/evaporation; nuclear, coal, natural gas, etc. are all steam-based thermo-mechanical power plants).


Note that gas power plants are typically combined-cycle plants: they contain both a gas turbine and a steam turbine, so part of their spinning power (from the gas turbine) is not based on boiling.


Dumb question: Is it possible to make a photovoltaic cell that works with infrared or longer wavelengths? ... Googling says there is research and some claimed results.

In theory, one could have "generate heat, and have the apparatus surrounded by a vacuum, surrounded by infrared photovoltaics" be the new universal backend for power generation. I have no idea about the associated efficiencies, of course.


Certainly the only major electrical generation method, but MHD and thermocouples can be the appropriate solution sometimes, and if you count storage systems as well as primary production, there are also fuel cells & batteries.


As well as steam engines there are also small scale generators using internal combustion engines and gas turbines which spin the dynamo directly.


Same as a regular fission plant if I'm not mistaken!


A glorified boiler with dense and abundant fuel!

We would have never entered the industrial era without boilers.


The actual fusion in the sun happens in only a small part at the middle. The rest is just there to provide mass :)

> Also, something else I never grokked, how do you get the power out? The plasma heats up, but how do you turn that into useful electrical energy?

Steam turbines; same as fission or coal. Or gas turbines, if you want to get fancy.


There is a science fiction story by, I believe, Isaac Asimov which has as its main story point that:

- fusion is developed

- b/c of fusion, the cheapest "rocket fuel" is basically water heated into steam by the fusion reactor

- people on Earth get upset about the "spacers" taking all of the "earthers" water

- the spacers then have to go to other sources of water in the solar system (I think it was either rings of Saturn or the asteroid belt)

The thing that struck me at the time was the "water as propellant" without the extra step of breaking H2O into hydrogen and oxygen.


> without the extra step of breaking H2O into hydrogen and oxygen

Heat it sufficiently and it will be broken down.


> In the sun, isn't energy production occurring at something like 100-1000 W/m3?

It's just an average. Fusion occurs only in the core, which is pretty tiny, relatively speaking.

> The plasma heats up, but how do you turn that into useful electrical energy?

The same way you can turn any kind of heat into useful energy.


It's not just that the fusion only occurs in the core, but the fusion rate is also very low; it's only because the sun is so incredibly massive that this slow reaction rate can self-sustain. With a much smaller fusion reactor, the reaction rate needs to be much higher to keep itself going and to create enough power to run the electromagnets that keep it confined.


> In the sun, isn't energy production occurring at something like 100-1000 W/m3?

~250 W/m^3.



Exactly: there is no prospect of, or intent to, ever actually make a Tokamak that produces useful power. Besides the power density problem and the really absolutely enormous expense and already-sunk cost, the necessary neutrons would destroy the most expensive part of the reactor in short order and make the rest radioactive.

The whole research program gets its funding as what amounts to a jobs program to keep high-neutron-flux physicists employed and available to draw upon for weapons work. That is one reason why any fusion process that does not emit neutrons is not given any of the research funds: weapons work doesn't need high-alpha-flux physicists. (Secondarily, they have papers that purport to show e.g. p-B fusion could never work.)

If we ever do get practical fusion, it won't be in a Tokamak, it probably won't be on the Earth's surface, and it certainly won't help resolve global climate disruption.

The money being spent on Tokamaks, on the other hand, absolutely could help a great deal with global climate disruption. But not while also maintaining the all-important high-neutron-flux population.

