Google and a nuclear fusion company have developed a new algorithm (theguardian.com)
373 points by jonbaer on July 26, 2017 | 114 comments



This is actually a really exciting development to me. (Note: what is exciting is the "optometrist algorithm" from the paper [1], not necessarily Google's involvement as pitched in the Guardian.) Typically a day of shots would need to be programmed out in advance, scanning over one dimension (out of hundreds) at a time. It would then take at least a week to analyze the results and create an updated research plan. The result is poor utilization of each experiment in optimizing performance. The 50% reduction in losses is a big deal for Tri Alpha.

I can see this being coupled with simulations as well to understand sources of systematic errors, create better simulations which can then be used as a stronger source of truth for "offline" (computation-only) experiments.

The biggest challenge of course becomes interpreting the results. So you got better performance, what parameters really made a difference and why? But that is at least a more tractable problem than "how do we make this better in the first place?"

[1] http://www.nature.com/articles/s41598-017-06645-7


Though this work may seem exciting, there is an existing, respected body of work available on how to mathematically structure a search over a large parameter space and how to mathematically interpret experimental responses. That body of work is a subset of applied statistics called design of experiments. It helps scientists avoid the common failures that result from doing exactly what was done here: random exploration of the space and non-rigorous evaluation of results.

For this to be exciting I would expect some indication of how this method extends and enhances the existing science of experimental methods, and the trade-offs involved with using their method. I don't see that.


It would not surprise me if high-tech companies are inventing new, useful things in this field.

In my career as first a scientist and then an engineer, I've found very few practical users of highly technical experimental design theory, and all of them were in industry. These algorithms move about intelligently along all dimensions of some search space, whereas in the lab we preferred to turn just one knob at a time.

One reason is that the algorithms are optimally searching for "known unknowns" -- that is, they assume they roughly understand the problem. The lab is a world of unknown unknowns, where the more plodding, understandable protocols tend to be safer.

But in industry, some problems are of the known-unknowns type. And experiment runs can burn up seriously expensive hardware time. So it makes sense for fusion researchers and cloud-computing giants alike to invent new practical ways to optimise searches.

Besides, optimising searches is what Googlers are for.


Reading their actual paper further, it seems I read a bit too much into the original article. However, as their paper mentions:

> The parameter space of C-2U has over one thousand dimensions. Quantities of interest are almost certainly not convex functions of this space. Furthermore, machine performance is strongly affected by uncontrolled time-dependent factors such as vacuum impurities and electrode wear.

I'm not aware of DOE procedures that are robust to these types of issues, and would certainly appreciate any literature you have on the subject.

Regardless of theoretical literature, this procedure has enabled a dramatic shift in how these scientists think about their experiment. Furthermore it has enabled them to achieve results much faster than before (if you have been following Tri Alpha, it has been a real slog). Both of these are exciting to me even if they don't break new ground in the design of experiments.


Interesting. Are you able to provide some links to decent resources on this topic?



Perfect! Thanks.


As a complete outsider, I don't understand what's special about the "optometrist algorithm." As described in the Nature article it's just hill climbing using humans as the evaluation function.

Isn't it basically the same thing they were already doing but more granular?


Basically nobody was using automated gradient descent / etc. because of the proclivity of these algorithms to get stuck on a boundary. The problem is the boundaries are not well defined. One example might be a catastrophic instability. If it gets triggered it has the potential to damage the machine. But the exact parameters at which the instability occurs are not well known. So with this algorithm you mix the best of both worlds: the human can guide away from the areas where we think instabilities are, and the machine can do its optimization thing. It's pretty simple overall but enables a big shift in how experiments are run.

Edit to add: these instabilities often look just like better performance on a shot-to-shot basis, which makes the algos especially tricky. Using a human we could say "this parameter change is just feeding the instability" vs "oh this is interesting go here"


I am still very skeptical that a human is really that good at avoiding the problem areas, although they might be marginally better. Plus, they don't seem to claim that anywhere in the paper; instead, they just rated shots as either "better" or "just as good", i.e., a local evaluation, which won't let you avoid such areas, since that judgement requires more knowledge than just the conditions in the neighborhood of the current reference.

The only thing I think can lead someone to your conclusion is that they can judge based on a host of criteria, not just a pre-defined set; maybe that's what you meant. Of course, intuitively, changing your criteria midstream would lead to bias in your judgement, I'd think, but that may be the real innovation here: something that is hard to do without a human judge in the mix.


> I am still very skeptical that a human is really that good at avoiding the problem areas

Why? Humans have a much richer modeling apparatus than any computer does right now. We can simultaneously draw on a very large set of possible models that is nonetheless almost fully tuned to reality. You can estimate the number of available models as combinatorial in the number of neurons you have. We also have machinery for searching that entire model space simultaneously and testing it against a continuous stream of megabytes of data in real time, in order to find good fits.

Existing AIs wouldn't even know where to start. They can apply infinite models, but have no grounding in reality, and no way to choose amongst them. The AI doesn't even have an intrinsic sense of space, seeing as how it lacks a body. It's a very fast worker that can get things done when you give it very specific instructions, but it has no real ability to understand what it is doing or why it would want to do something different.


Remember that the human won't only be thinking "better" or "just as good". They almost certainly can't avoid thinking "If I say 'better' here, what direction will that drive the algorithm in?" They don't just learn how to drive the plasma, they're learning how to drive Optometrist as well.


To be clear: there are no gradients here (right?) This is just 0th order hill-climbing with a human assist.


how does one climb a hill with no gradient? [serious question]


You can climb a hill without knowing the gradient, so long as you can compare two points in terms of height. You randomly move in some direction, then compare the new point to the old point, go to whichever of them is higher, and repeat.

This sounds like what the experimenters are doing. Perhaps the GP was alluding to "first order hill climbing" as evaluating the gradient in every direction and climbing the steepest one, but the "0th order" version is also usually considered hill climbing and is better for some classes of problem.
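A minimal sketch of that comparison-only ("0th order") procedure in Python, with a made-up toy objective just for illustration:

    import random

    def hill_climb(f, x, step=0.1, iters=1000):
        # 0th-order hill climbing: no gradient, just compare two nearby points
        # and keep whichever is higher
        fx = f(x)
        for _ in range(iters):
            candidate = [xi + random.uniform(-step, step) for xi in x]  # random nearby move
            fc = f(candidate)
            if fc > fx:  # only an ordering of the two points is needed
                x, fx = candidate, fc
        return x

    # toy run: maximize a simple concave function of two variables
    best = hill_climb(lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2, [0.0, 0.0])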


That is exactly what they're doing. See the section on Exploratory Technique, second to last paragraph. As I said above, the possible innovation here is they can change midstream the criteria one uses to decide what is a "better shot".


is it picking a new configuration at random, or does it still have to be "close" to the last configuration?


It still has to be close by some metric to be considered hill climbing. The article doesn't make it clear, but I suspect a lot of the insight in the algorithm is how the computer chooses two similar sets of inputs that differ in an "interesting" way.


last clarifications, sorry.

Some manifold has a goodness function defined on it, described by a (totally ordered?) relation provided by the observing scientist.

The goodness function is assumed to be (continuous/differentiable/continuously differentiable?) with respect to some metric, and the computer picks a random coordinate within some small distance of the last coordinate in the metric, and then asks the human to order them?

I don't think this is hill climbing, and my simple reasoning for that is that I don't believe the first assertion. The expert is almost certainly behaving non-deterministically. In fact, I believe that each time the expert is presented with the "same" pair of coordinates, he is more likely to yield a different ordering.

That said, I could be reading this wrong.


The naive way to do calculus. Use secants to approximate the tangent. I think it's called finite difference.


Finite differences: you estimate the gradient.
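For the curious, a small sketch of that idea in Python (central differences; the step size h here is arbitrary):

    def grad_fd(f, x, h=1e-5):
        # central finite differences: the secant slope over [x-h, x+h] in each
        # coordinate approximates the tangent slope (i.e. the partial derivative)
        g = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2 * h))
        return g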


Perhaps a stupid question, but why can't the whole experiment be run as a simulation?


Replacing experiments with simulations may be the dream of a lot of theoretical physicists, but it must not happen! Ever! Even if every complex system in the world could be simulated in reasonable time, it would still require experiments to verify or falsify the simulation results. A simulation is essentially just a calculation from a model someone came up with to describe a system. In order to check how good the model is, one has to check it against experimental data. Just expanding the models without experimental verification will not necessarily result in a good theoretical description. It would be like writing software without testing the components and expecting it to work correctly when you're done.

There was recently an article on HN where economists were described as the astrologers of our time [1], since they do not verify their mathematical models to the extent where they can predict economic systems. This is another example of why more experimental data should be considered in order to falsify certain theories.

Those are the reasons why string theorists will not (and should not) get any Nobel Prize in the next decades. Since the theory's predictions are hard to measure at those small scales, there's no way of telling if the model is any good until it is compared against suitable experimental data.

[1] https://aeon.co/essays/how-economists-rode-maths-to-become-o...


Agreed. My background is philosophy, and while I rarely get into the STEM arguments, this has everything to do with inductive learning vs. deductive learning. Any simulation will be run with the premises already built in, but cutting-edge science is always about learning what those premises are. If we knew what they were, it'd be trivial to set up the reactor. Here we need inductive experimentation to learn how to simulate it trivially.


If you're doing science, experiments are hugely important. If you're doing engineering and you're reasonably sure that the physics guys came up with a good model, having everything in a computer would make development a lot cheaper.


Thanks, this is the best HN 'rant for the common sense' in a long time :)


I believe this is more about solving an engineering/mathematics problem, than about fundamental physics and the scientific process.


Physics is a lot more than just fundamental physics. H-Bomb designs for example get hundreds of hours of super computer time to simulate a few pounds of stuff for 1/1,000th of a second and even then they are approximations which need to be validated.


Because fusion simulations are really hard. This simulation[1] took 15 million hours of CPU time to model a cubic cm of plasma. The results were used to update 5 scalar parameters in a model.

[1] http://news.mit.edu/2016/heat-loss-fusion-reactors-0121


Does anyone have an idea about what software they use to simulate this stuff?

I'm wondering if they can even make use of the newfound GPU power or are just going ahead with ancient CPU based software because too much work has already been put in.


SpaceX is doing the best work on simulation. The adaptive multiscale work is a million times more important than moving to GPU, but of course they did that too:

https://www.nextplatform.com/2015/03/27/rockets-shake-and-ra...


Looks interesting. It looks like they are creating the CFD software for their own specific application. While that's cool and all, I doubt any other companies have the resources/motivation to write complex software from scratch, let alone underfunded postdocs.

I'm wondering about all the research that goes on in all the universities where large investments have been made on CPU based clusters. The simulation in the article you linked was run on NERSC servers, which are Cray supercomputers[1], which pretty much are Intel Xeon class servers with fancy interconnects.

So looks like it is CPU based, but I'm still interested in the software they use.

[1]: https://my.nersc.gov/nowcomputing-cs.php


It's usually highly parallelized Fortran run on the world's largest supercomputers, utilizing thousands of CPU cores. There are several codes like the ones in the study above (google "gyrokinetic equation solver"), and somehow more pop up year by year. So it's not a matter of sunk costs.

And yes, GPUs are increasingly being utilized, depending on the algorithm. But, GPUs aren't magic; they don't speed up every kind of problem.


The numbers are too big, and nature is hiding stuff from us.

So we can't simulate it because we don't know enough to simulate it. And even if we did know there's not enough computing power to do so.


The system is fundamentally 6^N dimensional with N~10^23.


I suppose you meant 6*N? Which is a lot better, but still intractable. And anyway, we don't exactly resolve molecules in e.g. turbulent flow simulations, yet they still take tens, even hundreds of millions of CPU-hours.


6*N, yes. Pretty bad mistake there. But yes, even if you don't model every particle and restrict yourself to "parcels" of fluid like in most simulations, you still have a very difficult problem.


Okay, this is really showing my ignorance but why 6?

You start off with 4 (3 space plus one time (ignoring 11-dimensional space-time)) and add which dimensions exactly? Can the individual interactions between wave/particles be reduced to 2 dimensions? Aren't they going to interact along the whole range of forces they exert: gravitational, weak, electromagnetic, strong?


3 dimensions for position + 3 dimensions for speed


3 dimensions for momentum in other words? Surely it cannot be that simple. That's astounding.


Unless you consider quantum effects, which are probably relevant in this situation, then it's an exponentially larger problem space.


Yeah, 6^N dimensions are fun! ;)


From the actual journal article:

> Two additional complications arise because plasma fusion apparatuses are experimental and one-of-a-kind. First, the goodness metric for plasma is not fully established and objective: some amount of human judgement is required to assess an experiment. Second, the boundaries of safe operation are not fully understood: it would be easy for a fully-automated optimisation algorithm to propose settings that would damage the apparatus and set back progress by weeks or months.

> To increase the speed of learning and optimisation of plasma, we developed the Optometrist Algorithm. Just as in a visit to an optometrist, the algorithm offers a pair of choices to a human, and asks which one is preferable. Given the choice, the algorithm proceeds to offer another choice. While an optometrist asks a patient to choose between lens prescriptions based on clarity, our algorithm asks a human expert to choose between plasma settings based on experimental outcomes. The Optometrist Algorithm attempts to optimise a hidden utility model that the human experts may not be able to express explicitly.

I haven't read the full article nor do I understand the problem space, but the novelty seems overstated based on this. Maybe they can eventually collect metadata to automate the human intuition.

Edit: here's their formal description of it: https://www.nature.com/articles/s41598-017-06645-7/figures/2
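Going only by that description (so this is a guess at the shape of the loop, not the paper's actual implementation), it reads like a preference-driven random walk. In the Python sketch below, run_shot and human_prefers are hypothetical stand-ins for the experiment and the expert's judgement:

    import random

    def optometrist_loop(settings, run_shot, human_prefers, iters=30, step=0.05):
        # Optometrist-style loop: the machine proposes a candidate close to the
        # current reference, the human says which of the two shots they prefer,
        # and the preferred one becomes the new reference.
        reference = run_shot(settings)
        for _ in range(iters):
            # toy perturbation: nudge each setting by a few percent
            candidate_settings = {k: v * (1 + random.uniform(-step, step))
                                  for k, v in settings.items()}
            candidate = run_shot(candidate_settings)
            if human_prefers(candidate, reference):
                settings, reference = candidate_settings, candidate
        return settings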


I mean, if it has not been done before, it doesn't look like they're overstating the novelty. Most algorithms look "obvious" in hindsight :).


It's a well-known technique in the out-of-fashion world of knowledge-based systems: To create an expert system, your experts often won't be able to articulate their utility function, so you extract it by presenting them A/B choices.


Doing an old thing on a new problem counts as novel.


Sure. Just pointing out that this isn't a "new algorithm that only looks obvious because of hindsight". There's an almost endless supply of problems, like this one, that benefit from automating the assessment task of human experts.


Hot or not, but for dynamical systems optimization.


Each plasma outcome is shown to a researcher as a Tinder profile, and they select the ones they want by swiping right :-)


I also haven't read the full article. I wondered if it was worth reading before I clicked the link, really. How did they determine it would be faster than... what? Doing it without computers? And if it cuts down months' worth of computation to just some hours, can I expect a working fusion reactor in the next 1-2 years instead of 10-30? How did this become HN #1?


In software jargon, this is called a "Wizard" (i.e. installer wizard, calibration wizard, etc. that guide you through a process that is more complicated by asking a series of simple questions) and is an idea that dates back decades.


If I'm understanding this right, I'm pretty sure this is not in fact just a wizard. It's using people's answers to "which of these is better" to learn an objective function that can later be used for optimization. A wizard is just presenting explicit choices. Making one requires knowing all the possible paths and results in advance. You could I suppose have an "implicit wizard", where every choice was in terms of "Which of these two examples do you prefer?" rather than explicitly stating what the user was choosing between, but that would ultimately just be a more confusing version of an explicit wizard -- it would still require you to program in all the possible paths and results in advance. That's much less interesting than this.


There was a talk about the state of nuclear fusion by some MIT folks linked here on HN a few days ago. One of the biggest takeaways was that many fusion efforts are very far away (3 to 6+ orders of magnitude) on the most important metric, Q, which is energy_out / energy_in. Additionally, much press and public discussion completely fail to discuss this and other core factors that actually matter for making fusion viable.

I remember Tri-alpha being listed on one of the slides near the bottom left of the plot, 4 or 5 orders of magnitude away from break even, where Q = 1 (someone please correct me if I'm remembering incorrectly).

Is the 50% improvement described in the article meaningful, as that would only be a fraction of an order of magnitude?

I understand the broader concept of combining experts and specialized software on complex problems is a powerful idea -- I'm just wondering if this specific result actually changes the game for Tri-alpha.


Tri Alpha has been running at relatively low temperature, about 10 million degrees, while they figured out how to make their plasma stable. They achieved that in 2015 with their older reactor.

According to their model, the plasma should get more stable at higher temperature. They just finished a new reactor they'll use to test that. It'll hit temperatures closer to 100 million degrees.

If they're right about plasma stability, they'll be ready to attempt net power with a full-scale demo reactor. Since they're using boron fuel they'll need to get the temperature to about three billion degrees, but they say pumping in more heating is relatively easy.

(Source for all that: I saw one of the Tri Alpha people speak at an MIT Solve conference the other year.)

So the 50% improvement isn't a make-or-break thing, but I'm sure it'll help. In general, being able to run simulations in hours instead of months will probably help a lot.


I also watched that video, and was also a bit dismayed. It seemed to me that a lot of projects with the very low Q numbers weren't at the point of going for high Q numbers. The project I follow is Polywell[1], and my understanding is they've been working on confirming the physics of their approach (which involves 'wiffle ball' confinement) and so have not attempted pushing for break even energy production.

The video was eye-opening though; I had no idea that high-temperature superconductors were set to revolutionize tokamaks. If the potential is there, it seems like the prudent thing to do would be to reset the ITER project and redesign it utilizing the current generation of high-temperature superconductors. But I'm just an interested observer, what do I know?

[1] https://en.wikipedia.org/wiki/Polywell


I also watched that MIT talk and it was quite insightful; however, as I searched a bit more, I realized that the metrics involved in the presentation are only for achieving surplus in the generated energy from fusion.

When it comes to industrializing the idea, the scope is far broader. For example, the speaker was saying that they can use the neutron streams as the result of fusion for creating tritium. In reality, capturing the neutrons is much more complex than that [1]. Some of those may deposit on the inner surface of the tokamak and have to be recovered by 99% to have breakeven. The nuclear waste is another concern in the opposite case.

Given all these, you get a sense of why companies like General Fusion [2] get funded. He showed that General Fusion is very far away on his metrics. But the pinching technology the company is offering allows for continuous use of fusion in rapid bursts (like an automatic rifle). When I met them at the Globe conference, they were claiming that they will be ready for production within 5 years of achieving a surplus. I am not sure how fast the tokamak can get there.

Sources:

[1] http://thebulletin.org/fusion-reactors-not-what-they%E2%80%9...

[2] http://generalfusion.com/


But hey, string ~20 consecutive 50% improvements together and you're at four orders of magnitude :-)


You might be allowed to be much more optimistic.

A 50% increase could be much, much more significant depending on the parameter optimised. Tokamak magnetic field strength, for example, affects net energy to the fourth power.
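Rough arithmetic, assuming the fourth-power scaling mentioned above:

    # if net energy scales as B^4, a 50% increase in field strength compounds to
    print(1.5 ** 4)  # 5.0625 -- roughly a 5x gain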

Have a look at https://www.youtube.com/watch?v=L0KuAx1COEk , as previously discussed here: https://news.ycombinator.com/item?id=14834390 .


> One of the biggest takeaways was that many fusion efforts are very far away (3 to 6+ orders of magnitude) on the most important metric, Q, which is energy_out / energy_in.

Keep in mind tokamaks have reached Q=0.7.


Google might try to become the conglomerate of all forward-facing things but it is somewhat funny to see how through it all, it's their advertising revenues that form the core of the business.


This pattern happens more often than you think.

Microsoft: They make an Operating System and Office Suite. Microsoft Research has labs on Quantum Computing, and they have five Turing Award winners (one is Leslie Lamport, who developed TLA+ while employed there).

Facebook: A social network funds a bunch of Deep Learning and NLP research.

Elon Musk: Helped create PayPal, now does electric cars and rockets (Tesla, SpaceX).

NVIDIA: Made graphics cards for video games. Now those same devices allow for deep learning.


Bell Labs, over 1k PhDs at some point.


Why is that surprising? Look at individuals... they work for money so that they can pursue their own ambitions. It is rare to find an individual with the luxury of earning money by pursuing their own exact ambitions.

Kudos to Google for using its vast funds to finance its ambitions rather than just hoarding them away.


Which is exactly what they're trying to change - both with "conventional" offerings like Android and Cloud, as well as with what they call "moonshots".


I think it's their compute capability and massive interaction with the concerns of humanity expressed via search; in the short term that's leveraged to create advertising revenue, in the longer term who knows?


> it's their advertising revenues that form the core of the business.

You'd think the advertising staff would get pretty good treatment at Google.

Do they?


When I worked in the ads part, it was not glamorous (compared to search, or mobile, or social, or whatever the hot area was at the time), but I think we were treated well, and acknowledged for running a critical service that provided the revenue that allowed other parts of the company to do research and development into new things.

I can't speak for the entire advertising staff, of course.


Why wouldn't they? All Googlers get very very good treatment.


Sounds like some promising results, hopefully this approach will continue to be useful.

Addressing the wider article, it always surprises me that the focus fusion approach is never mentioned in fusion articles put out by the mainstream media. I don't know what to attribute that to, but it's surprising that one of the most promising fusion approaches is constantly overlooked.

To give an idea how drastically overlooked focus fusion is, here's a graph showing R&D budgets for different fusion projects...

http://lppfusion.com/wp-content/uploads/2016/05/fusion-funds...

... and here's a graph showing energy efficiency of fusion devices (running on deuterium I believe)...

http://lppfusion.com/wp-content/uploads/2016/05/wall-plug-ch...

You'd think that the second most efficient device would've gotten more than $5 million in funding over 20 years (I think the original funding was from NASA back in 1994).


I think their universal quantum computer (to be announced later this year) could accelerate fusion research even more, as I imagine it could more accurately simulate the atomic reactions and run experiments on it. Practical quantum computers may just be what we were missing to finally be able to build working fusion reactors.

The millions of possible "solutions" and algorithms for working fusion reactors may be what has made fusion research so expensive and fusion reactors seem so far away. Quantum computers may be able to cut right through that hard problem, although we may have to wait a bit more until quantum computers are useful enough to make an impact on fusion research. I don't know if that's reaching 1,000 qubits or 1 million qubits.


Even if you had the computing power AND you were simulating your fusion reactor's plasma in real time, while it's running, AND you knew / could predict the plasma instabilities in real time (under a few ms), you would still need a way to "counter" those instabilities in said plasma. And you need to counter fast, before the instability "poisons" the entire plasma, something that can happen within a few ms. If you don't, your entire experiment stops, and it takes a while (minutes) to get it back. Currently: 1. nobody really understands the instabilities, why and when they happen; 2. there's no way to 'counter' them. So it's not only about the computing power.


As a psychologist, this looks an awful lot like computerized adaptive testing methods, only instead of estimating some parameter vector about a person, you're estimating some parameter vector about plasma.

Even the title "optometrist algorithm" is telling, because that paradigm is a basic model for how a lot of testing is done, except that it's not the optometrist doing it, it's a computer.


Diversification of the business, me thinks... nuclear is so big (but slow) that a penny invested today may become a tenner tomorrow, just in case.


I do have a naive question.

Suppose a big breakthrough comes out of a private company, and such innovation is necessary to use nuclear fusion.

Will the company be free to do whatever it pleases with the technology, or will it somehow be "forced" to let others use it, maybe in exchange for the payment of some royalties?


A related economics question, a thought experiment really...

Suppose a private company manages to lower the cost of electric energy by 90%, using a device it can produce itself quickly with virtually no capex. From an economics point of view, they would essentially have built free money printing presses.

How would they benefit from this the most? Selling the energy? At what price? Taking over sectors of the economy where the electricity price generates the most added value? Aluminium production? Data centers? ...


With costs that low, it'd make sense to replace even brand-new fossil plants just to save the cost of fuel. You could eliminate fossil fuels for electricity production in a very short period of time.

Selling the energy mostly wouldn't make sense, since it means replacing regulated utilities. You could sell the reactors themselves but you'd have to be good at scaling up factories fast; unless that's a core competency you'll probably make money faster by licensing to people who are good at it. Or, just outsource the manufacturing.

Either way, if you can churn out lots of reactors fast, just sell them to everyone. Don't bother trying to take over e.g. aluminum production; what do you know about producing aluminum? How long will it take you to learn? Just sell the reactor to the aluminum producer.


Patent law generally recognizes the option of the state to enforce compulsory licensing, though it's rarely exercised. Eminent domain may also be used to take patents.

The modern approach seems to be to just let people find a way around the patent, or simply ignore and litigate.


It probably also depends on the funding they have received from places like the Department of Energy and the contracts they have signed about their research being publicly available, but if Paul Allen is the funder and not the government, maybe it's all private. My own naivety would say billionaires investing in clean technology would share it with the world, but who knows?


Many new technologies take twenty years or more to move from research discovery to commercial application. For example, high-temperature superconductors (HTS) were discovered in 1986, and the Holbrook Superconductor Project in 2008 is cited as the first major commercial use of HTS. A few other power projects using HTS are being constructed, and HTS is starting to see some use in specialty magnets. How much 20-year patents slow down scientific development is highly debated, but most companies try hard to have more than one supplier for anything critical to their business. These companies will tend to avoid having anything patented by some other company in their products.


It's worth noting that other countries would be free to copy the technology. At some point the government may intervene.


If they have a patentable breakthrough, they would be able to restrict use of their discovery for the duration of their patent.


yep. They could keep it a trade secret (no patent), patent it and not license it, or patent it and license it so others could use it under some terms.

This only matters for the life of the patent, and is consistent with the intent of patents.


No, they have not. They developed a very useful new program.

But simple assisted hill climbing is not a new algorithm; you might call it "Wizard", though. That would attract the right audience.


Maybe I'll see commercial fusion within my lifetime... how nice is that!


How does this nuclear fusion company hope to make money? Their product is decades in the future.


Quite possibly it never makes any money at all, but getting the world closer to usable fusion is an acceptable outcome for investors like Paul Allen. And it could make money by being the first to sell said product, decades in the future, or by at least holding valuable IP at that point?


One possible alternative would be to put the money into academic research, so there is no pressure from investors to have a return on investment.


I'm not convinced that having no pressure to get a return on investment is a net benefit. I've seen arguments by fusion scientists that the field has historically been much too focused on pure plasma physics, with too little emphasis on practical results or on the economics of reactor designs.

People invest in Tri Alpha because if things work out, a practical reactor is more like one decade in the future, and it would be very economical. The return on investment would be enormous.


Am I the only one that never reads these articles but just goes straight to the comments? It seems like reporters always get the facts bungled and go for the simple story - out of necessity of course.


I also do this. HN comment pages are extremely varied in their quality, but they do tend to be good at shooting down articles which aren't as good as their titles suggest. Plus, HN loads fast. So, open the comments, check to see if the article might be worth reading, and if so, open it, flip back to the main page while the article loads.


For me it is about page loading: it's pretty much straightforward, successful, and predictable on HN, while the source page is often slow and/or heavy, jerks the current position/scroll around, and is full of whatever other surprises. I want to know what it is about immediately, i.e. basically it is an issue of instant gratification for me :) If the information on the HN comments page isn't enough (which is rare), or the source is really vouched/confirmed to be interesting by itself, then I take the bullet.


I often jump straight to comments, but I think Google's blog post for this is required reading: https://research.googleblog.com/2017/07/so-there-i-was-firin...


On this article I clicked on it, realized The Guardian was going to be waaay too general, and then went to the comments knowing I would find a quick breakdown of the facts written by "my people" for an audience like me.

Often I'll skip the article entirely.


Yep - same thing with that "Roomba is selling maps of your home" thing that's going around. Turns out they're considering partnering with Alexa or something, and you'd have to opt in of course. I just skipped straight to the comments to get the real scoop instantly.


Google didn't enter the race. They helped a company with some calculations.


Ok, we changed the title to the first sentence of the article, which basically says that.


thank you! I was very confused... twice...


It's nothing. Even if Google has invented something, we will never see a product customers can purchase.


You just KNOW Elon Musk is gonna beat 'em to it ;)


Electric cars and rockets have existed for decades.


There are two directions within the energy world that I don't completely get. One of them is hydrogen storage, the other nuclear fusion.

From what I always understood, the high-energy neutrons produced by the fusion reaction irradiate the surrounding structure, and there is still considerable nuclear waste (although lifetimes are better than with nuclear fission). Do the scientists not care, or is this outdated info?


You're thinking of https://en.wikipedia.org/wiki/Neutron_activation

You need to use materials that stand up well to neutron bombardment. Many materials upon neutron capture have a half life measured in seconds, which isn't a big deal. As nuclear waste disposal goes, this really isn't a concern.


On this EU site they state that a site remains active for 50-100 years, https://www.euro-fusion.org/faq/does-fusion-give-off-radiati....

I know it's better than fission, but still not nice.

If it is indeed seconds, then it doesn't matter, of course. I was kind of hoping to understand more about material design in the recent scientific past with this question.


The neutrons coming from the fusion reactor do have the potential to make the surrounding material radioactive. But that depends on exactly what the surrounding material is, some elements will become dangerous when exposed to neutrons and some won't. An important part of designing a fusion powerplant, and one of the reasons it is difficult, is that you have to make sure that the materials you use can safely handle the neutrons coming from the fusion reaction without transmuting into dangerously radioactive forms.

This is in contrast to a fission reactor where the fuel itself turns into dangerously radioactive elements when exposed to neutrons.

EDIT: The exception is that fusion powerplant designers will want to surround the reactor with lithium in the hopes that it will absorb a neutron and turn into tritium. The tritium is then carefully gathered because it forms the fuel for the reactor and it's hard to get except in a nuclear reactor.


The comparison is not between fusion and a hypothetical waste-free source of energy. It's between fusion and fission. The waste products of fission are much more dangerous and expensive to handle, and we still haven't found foolproof, effective ways of disposing of them.

OTOH, the danger from irradiated materials (whatever that is; this is the first time I'm hearing of this, TBH) doesn't seem very pressing. I highly doubt any of the irradiated stuff would have a half-life of millions of years.


Half-life is inversely proportional to radiation intensity. The big issue with fission waste products is the nuclides radioactive enough to kill and long-lived enough to be annoying, not the nasty ones that go away in a few decades or the almost inert ones.
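For reference, that inverse relation is just the standard decay-rate formula (my addition, not from the comment): for a fixed number of atoms, activity scales as one over the half-life.

    from math import log

    def activity(n_atoms, half_life_seconds):
        # A = lambda * N, with decay constant lambda = ln(2) / t_half:
        # for the same number of atoms, a shorter half-life means more intense radiation
        return n_atoms * log(2) / half_life_seconds  # decays per second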


Might as well put a Wikipedia link here about the fission product isotopes and their half lives:

https://en.wikipedia.org/wiki/Nuclear_fission_product#Radioa...


I saw a presentation by the head of MIT's fusion department, in which he said that the waste would only need to be contained for several decades. It's very different from fission, where the fuel itself produces long-lived high level waste.

With the reactor discussed in this article, the situation is even better, because it would use boron fusion. That reaction doesn't produce neutron radiation at all. There'd just be a tiny amount from side reactions.


I want to start placing "Google and " before stating my accomplishments.

"Google and a nuclear fusion company have developed a new algorithm"

sounds way better than:

"Nuclear fusion company has developed a new algorithm using Google"

They may not mean the same, but in today's world faking it until you make it might pay off.


Outside of the title being misleading, I'm sceptical. It's one thing to have the hardware for research, and completely another to have the expertise for the research.

Google entered the self driving cars research, and we have yet to see them driven around.

This heavily reminds me of Intel and their diversification; up until recently, they were in IoT, the maker market, and what not. One solid push from AMD and they jumped out of everything way too fast to track.

Google seems the same with nuclear fusion. They have the advertising money to throw around, but that's just it: they are in a different segment, and from an investing side I'm more inclined to stay away from their stock than buy it.


>Google entered the self driving cars research, and we have yet to see them driven around.

You see their working prototypes flying around Mountain View all the time. And they've been transparent with their progress.

People have been working on this since the 80s.


Considering “the whine of the electric motor in Waymo’s 25mph prototype is mildly annoying when my window is open and they drive by several times an hour” is a real thing in my own life as a resident of Mountain View, it’s odd to see the assertion that they don’t drive around.

And I live on a side street.



Like the other comment, I don't see how your example of self-driving proves anything.

In fact the more I think about it the more I'm confused by your comment. What are you skeptical about? Google here has demonstrated their computational resources can be of great benefit to scientific causes such as nuclear fusion. If you're saying that you're skeptical Google can do nuclear fusion, I think you're missing the point.



