gene-h's comments

Europa Clipper also used a new approach to spacecraft design. It's NASA's first major spacecraft designed with Model-Based Systems Engineering (MBSE)[0]. Using SysML diagrams to keep track of power use and interfaces is supposedly better than using spreadsheets.

[0]https://ses.gsfc.nasa.gov/ses_data_2021/210728_Bayer.pdf


Oh, I used to work on this :)

For keeping track of power use and interfaces specifically, it turns out doing it all with SysML diagrams wasn't so great. Aside from all the pointless futzing around with boxes and arrows, the model eventually became so huge the authoring software could barely handle just opening it. So it must have been shortly after these slides that all the power use tracking was shifted to a custom tool with a more tabular user interface, one we were already using for tracking electrical interfaces (slide 15), with version control in git.


Unrelated question: how did you manage tabular data in git? It’s always a struggle to diff and merge changes.


Yeah, like 'baq said, the data wasn't stored in a tabular form; it was actually XML. So sometimes you could just look at the textual diff and it would make perfect sense, although users weren't expected to work with the XML at the source level.

There was also a semantic object-level diff we got for "free" by virtue of building on top of the Eclipse Modeling Framework. It was integrated into the Eclipse git UI and could help resolve merge conflicts without having to touch the XML directly, but merge conflicts were still annoying to deal with so generally engineers coordinated with each other to not touch the same part of the model at the same time.

Normally for review though I think users tended to compare reports generated from the model rather than trying to diff the source model files directly. There was a sort of automated build process that took care of that once you pushed your branch to Github.
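
If it helps to picture it, here's a rough sketch in Python of what an object-level diff over XML-backed "tabular" data can look like (the element and attribute names are invented for illustration, not the actual tool's schema): each row becomes an object keyed by its id, and the diff reports added, removed, and changed fields instead of raw text hunks.

```python
# Hypothetical sketch: object-level diff of XML-backed "tabular" data.
# Element/attribute names here are invented, not the real schema.
import xml.etree.ElementTree as ET

OLD = """<powerBudget>
  <load id="radio"  mode="transmit" watts="45"/>
  <load id="heater" mode="survival" watts="12"/>
</powerBudget>"""

NEW = """<powerBudget>
  <load id="radio"  mode="transmit" watts="52"/>
  <load id="camera" mode="imaging"  watts="8"/>
</powerBudget>"""

def as_objects(xml_text):
    """Parse rows into {id: {attr: value}} so we can diff by identity, not by line."""
    return {el.get("id"): dict(el.attrib) for el in ET.fromstring(xml_text)}

def semantic_diff(old_xml, new_xml):
    old, new = as_objects(old_xml), as_objects(new_xml)
    for key in old.keys() - new.keys():
        print(f"removed: {key}")
    for key in new.keys() - old.keys():
        print(f"added:   {key}")
    for key in old.keys() & new.keys():
        for attr in old[key]:
            if old[key].get(attr) != new[key].get(attr):
                print(f"changed: {key}.{attr}: {old[key][attr]} -> {new[key][attr]}")

semantic_diff(OLD, NEW)
# removed: heater
# added:   camera
# changed: radio.watts: 45 -> 52
```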


Not OP, but the usual applies: data is not actually stored as a table in git; tables are a UI thing. Git would store standard-issue JSON, XML, or whatever custom git-friendly format is used by the tool.


There was a whole field called fluidics that focused on performing operations analogous to those performed with electricity. This[0] gives a good overview of how fluidics worked. One of the most commonly used elements of fluidics was the fluidic amplifier. This worked by using a lower-pressure sideways flow to redirect a more powerful jet of fluid between two output ports. Fluidic amplifiers could be made with frequency responses in the kHz range, so they have been used to amplify sound[1].

There was a brief period of interest in fluidics because it was cheap, more reliable than the electronics of the time, and could function in extreme environments. So it found use in industrial automation and aircraft systems. This document[2] from NASA shows some applications it found at the time. Univac even built a completely fluidic 4-bit digital computer[3].

Fluidics is still around today, but it's used for very niche applications. One of the most absurd uses I've heard of was getting around rules prohibiting active aerodynamics in Formula 1. In 2010, McLaren devised a system where the driver could cover a hole on part of the car, causing flow to be switched by a fluidic amplifier over the rear wing, stalling it and reducing drag on the straightaways[4]. IIRC the entire purpose of the system was to get around rules that prohibited doing this with moving parts.

[0]https://miriam-english.org/files/fluidics/FluidControlDevice...

[1]https://acoustics.org/pressroom/httpdocs/132nd/2aaa8.html

[2]https://ntrs.nasa.gov/api/citations/19730002533/downloads/19...

[3]https://dl.acm.org/doi/pdf/10.1145/1464052.1464112

[4]https://us.motorsport.com/f1/news/banned-technical-analysis-...
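
To make the digital side of this concrete: a bistable fluidic amplifier can be treated as a logic element (the power jet attaches to one of two walls, and a control puff flips it), so a machine like the Univac one reduces to ordinary Boolean gates. Here's a toy sketch in Python treating each fluidic element as a NOR gate, which is one common abstraction, not a model of any specific hardware.

```python
# Toy abstraction: model each fluidic element as a NOR gate
# (a control jet on either input deflects the power jet away from the output).
# Illustrates the logic-family idea only; real devices were more varied.

def nor(a: bool, b: bool) -> bool:
    return not (a or b)

# Standard constructions from NOR alone:
def not_(a):    return nor(a, a)
def or_(a, b):  return not_(nor(a, b))
def and_(a, b): return nor(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), not_(and_(a, b)))

def half_adder(a, b):
    """One bit of a purely 'fluidic' adder: sum and carry from NOR gates only."""
    return xor(a, b), and_(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"a={int(a)} b={int(b)} -> sum={int(s)} carry={int(c)}")
```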


Thank you for the comprehensive summary and the detailed references!


I think it's a topic of great interest. Maybe turn your answer into an HN submission?


Atomically precise manufacturing, that is, tech for building structures atom by atom. This is starting to be done regularly in labs. Machines for making single-atom transistors are a commercial product now[0]. Forming covalent bonds at desired locations through positional control of reactants has been demonstrated[1]; this is potentially more scalable than the aforementioned approach. ML seems to be helping here too[2].

[0]https://www.zyvexlabs.com/apm/products/zyvex-litho-1/

[1]https://www.nature.com/articles/s41557-021-00773-4

[2]https://www.nature.com/articles/s44160-024-00488-7


What's more important is that they demonstrated making diamond at 1 atm and lower temperatures (1025 °C). Compare that to the ~50,000 atm and ~1500 °C at which diamond is conventionally made. The diamonds they made were very small, but this is a new process, and optimization might enable it to make bigger diamonds.


The efficiency demonstrated is a joke. There are better, more direct ways to extract power from humidity gradients[0]. In a dry environment, the maximum amount of energy extractable per unit of water is fairly high, comparable to the energy density of lead-acid batteries[1]. There have even been proposals for large-scale power plants generating power from humidity gradients[2].

The catch? It needs fresh water and works best in hot, dry areas where fresh water is at a premium. Also, the power plant would make surrounding communities up to 100 km away extremely humid. Trading fresh water for electricity is not a very attractive proposition...

[0]https://www.researchgate.net/publication/279853268_Scaling_u...

[1]https://www.researchgate.net/publication/237493406_A_Dunking...

[2]https://en.wikipedia.org/wiki/Energy_tower_(downdraft)
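
As a rough sanity check on the energy-per-unit-water claim above (my own ideal-gas, isothermal estimate, not a figure taken from these references), the maximum work from evaporating water into dry air is roughly the isothermal expansion work of the vapor from saturation down to its ambient partial pressure:

```python
# Back-of-envelope estimate (ideal gas, isothermal, ignores pumping/lifting water):
#   w_max = (R*T / M_w) * ln(1/RH)
from math import log

R   = 8.314    # J/(mol K)
M_w = 0.018    # kg/mol, water
T   = 300.0    # K, warm day (assumed)
RH  = 0.20     # 20% relative humidity, i.e. a dry environment (assumed)

w_max_J_per_kg  = (R * T / M_w) * log(1.0 / RH)
w_max_Wh_per_kg = w_max_J_per_kg / 3600.0

print(f"~{w_max_J_per_kg/1000:.0f} kJ/kg = ~{w_max_Wh_per_kg:.0f} Wh per kg of water")
# ~223 kJ/kg = ~62 Wh per kg of water, in the same ballpark as lead-acid's
# ~30-40 Wh/kg, which is roughly the comparison being made above.
```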


Yes, dunking-bird-style evaporation engines are among the lowest-efficiency kinds of evaporation engine. One of the big reasons is that the process uses evaporation to create a temperature difference, but the cold is generated at the same place the evaporation occurs. Evaporation works better when it is hot, so this is a negative feedback that reduces the effectiveness of the system.

Fresh water is not a strict requirement for most evaporation engines; it's the low efficiency that drives that. In theory, the osmotic energy difference between fresh water and sea water is much smaller than the energy obtainable from the evaporation process. After all, seawater is observed to evaporate! However, when your efficiency is under 1%, even a low additional cost becomes a big one.

IIRC, Carlson, from ref 1 in the energy tower article (I remember adding that reference to Wikipedia in 2006!), had a design that used a Stirling engine running off an evaporation-generated temperature difference, which had the same negative-feedback problem. There are other designs, like the piston-based one Barton worked on before pivoting to a co-generation system (hmm, that seems to have fallen down the Wikipedia memory hole: https://handwiki.org/wiki/Physics:Barton_evaporation_engine ).

Efficiency is always the big challenge for evaporation engines, but the strict thermodynamic limits are much higher than any current design achieves. This means there is a lot of room to improve! One of these days I'll write a paper on that.


The IM-1 lander was supposed to land using a LIDAR altimeter, but they forgot to remove the safety before launch[0]. They tried to make a last-minute software change to use an experimental navigation system from NASA to get altimetry, but this didn't work. So the lander landed using visual navigation and IMU data only for the last 15 km to the surface.

It probably would have landed upright if the LIDAR had worked. It is impressive that it landed as intact as it did.

[0] https://arstechnica.com/space/2024/02/it-turns-out-that-odys...


Much of the damage to the power grid can mostly be mitigated by turning off electricity, although this is a difficult thing for power companies and grid operators to do. One issue this article doesn't discuss is the risk to undersea internet cables[0]. Undersea fiber-optic cables need repeaters, which need electricity, so the cables carry very long conductors, and it's expected that sea water's conductivity could make induced currents worse. Shutting off power won't necessarily work, because induced currents could be 100x more than the equipment is rated for. Still, global connectivity is likely to survive.

[0]https://ics.uci.edu/~sabdujyo/papers/sigcomm21-cme.pdf


> Much of the damage to the power grid can mostly be mitigated by turning off electricity

That isn't entirely true.

I guess, in reality, "turning off electricity" can work, if "turning off" means shutting down generation at the same time throughout an entire interconnected grid and physically disconnecting literally every transformer throughout that transmission grid before they're cooked by geomagnetically-induced current (GIC).

Induced DC current alone can heat the windings in high-voltage transformers to the point of catastrophic failure, and that's assuming they can disconnect the AC current already flowing through the windings—if they can't, it heats even faster. This can possibly be mitigated by using a CT or hall sensor combined with a separate winding to cancel out the flux in the transformer's core, but I suspect that kind of work hasn't been done because there's no cost benefit.

Some electricity providers have relaying systems in place meant to protect equipment, but the last time that was tested in real-world conditions (1989, in Québec) they failed to prevent equipment damage.

That says nothing about the transmission lines themselves, most of which are nowhere near protected from GIC, and could either overheat or allow enough DC to flow through smaller pole-mounted transformers, which magnetizes them and dramatically reduces their serviceable life (if not outright destroying them).

In the case of Hydro-Québec, GIC didn't cause equipment damage; their protection systems—the stuff meant to "turn off the electricity"—allowed damage to occur anyway.
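
For a sense of scale (my own back-of-envelope numbers, not figures from the article): severe storms drive geoelectric fields on the order of a few volts per kilometer, and because a transmission line plus its grounded transformer neutrals forms a low-resistance DC loop, that turns into a quasi-DC current far larger than a transformer's normal magnetizing current.

```python
# Rough, illustrative GIC estimate. The field strength, line length, and loop
# resistance below are assumed round numbers, not measurements from any real event.
E_field_V_per_km = 5.0     # severe-storm geoelectric field (extreme events can be higher)
line_length_km   = 500.0   # long HV transmission line
loop_resistance  = 5.0     # ohms: line conductors + transformer windings + grounding

driving_voltage = E_field_V_per_km * line_length_km   # volts of quasi-DC EMF
gic_amps        = driving_voltage / loop_resistance    # quasi-DC current

print(f"~{driving_voltage:.0f} V of quasi-DC EMF -> ~{gic_amps:.0f} A of GIC")
# ~2500 V -> ~500 A flowing through transformer neutrals, versus magnetizing
# currents of only a few amps, which is why cores saturate and windings overheat.
```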


Turning the power off would work, but is that feasible? Even if we were able to spot the CME, would we then have enough time left to shut down the entire power grid?

The time between detection and the CME hitting us would probably be measured in minutes. I don’t think it’s possible to shut down the grid in that timeframe.


It is feasible.

In https://en.wikipedia.org/wiki/Carrington_Event it is estimated that the time from spotting the flare to the solar storm was 17.6 hours. That's plenty of time.

The problem is figuring out how big the event is, and how directly it will hit us. So while we have over 17 hours to prepare, there might be some false positives due to our limited prediction skills. And, no matter the real consequences, people have limited patience for large economic disruptions over things that turned out to be nothing.


Even better, if you turn off the electricity and prevent major destruction and nothing happens (other than the power down/up) then you're the one who caused "the problem".

There's no reward for fixing a problem that doesn't happen and that people don't want to believe even exists. Bonus, if other networks are damaged while yours aren't, it must be because you protected your network so you're responsible!


17 hours sounds like a long time, but it's not like there's a big red button and a guy standing by to press it.

So the information starts with the telescope that detects the flare, then has to work its way up the food chain so that sufficient people agree and take presumably synchronised action... well, good luck with that.

You'd also ideally need a multi-hour warning, planes gotta land etc.

Make no mistake, shutting it down will result in some deaths [1]. And those deaths will be on the news tomorrow (if indeed news still exists.) On the other hand not shutting down will cause more deaths and massive destruction.

[1] think hospitals where the backup power failed, or didn't last long enough. Traffic intersections. Elevators. Water pumps. Airplanes. Trains. Take your pick.


CME are "Coronal Mass Ejection" with mass being the operative word here. Electromagnetic radiation (electrons photons) can make the trip between sun and earth in about 8 minute but anything with neutrons or protons (such as the coronal plasma) takes much longer as in, a day and a half to several days. CME are not hard to spot leaving the sun with even with tiny amateur telescopes (the sun does not require much in the way of light gathering) so even without the professional scopes (SOHO) with dedicated satellites leading and trailing earth orbit constantly viewing around the edge of the sun as seen from earth or being able to acoustically "hear" (if you can call 5 minute pressure waves sound) the far side of the sun, it is not conceivable to me we would not be warned a CME was incoming, doing something about it is another story.


It takes 15-24 hours for a CME to arrive at Earth after the solar flare. The particles are much slower than the radiation, which arrives immediately. We can predict pretty well whether a CME will hit Earth.
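
The arithmetic behind those numbers, rounded, using 1 AU and typical reported CME speeds (the speeds here are illustrative assumptions):

```python
# Light vs. CME travel time over 1 AU. The CME speeds are typical published
# ranges, used here only to show where "8 minutes" and "15-24 hours" come from.
AU = 1.496e11          # meters
c  = 3.0e8             # m/s, speed of light

print(f"light: {AU / c / 60:.1f} minutes")           # ~8.3 minutes

for v_km_s in (500, 1000, 2400):                      # slow, average, Carrington-class
    hours = AU / (v_km_s * 1e3) / 3600
    print(f"CME at {v_km_s} km/s: {hours:.0f} hours")
# ~83 h, ~42 h, ~17 h; the Carrington event's ~17.6 h implies roughly 2300-2400 km/s.
```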


Does ~8 minutes qualify as immediately, or are you doing the physicist's thing of rounding to 0?


Since it is light, the concept of immediacy breaks down, right? Do events outside our light cone exist yet?

Let’s ask the photon how much time has passed between it being created and hitting our eyeballs. I’m sure it will produce a very sensible answer—oh dear, hmm…


Yes, because if not then why do I have ping in video games?


I think ping is measured round trip


For practical purposes in this case it’s immediate right?


I can do a lot in 8 mins. If you had an 8-minute warning for an earthquake, what could you do? Luckily, we get more notice than that now for tornadoes, but 8 minutes is enough time to seek shelter. In 8 minutes, there's plenty of time to ctrl-s everything, and then close apps and shut down computers.

The problem is communicating to everyone when that 8 minutes starts and how much time is left.


8 minutes is not something we measured.

It is something we computed. Why? Because there is no way to measure the time it takes for light to get from the sun to the earth. You cannot synchronize a message. There is no "hello". There is no beginning. This is not like firing a starting pistol. We cannot ever know what is happening at the sun with less than 8 minutes of lag. It is a fundamental limit. Even sending a highly robust and extremely precise clock into the sun to measure events, then comparing those events to timelines on earth, would not work. You cannot do it in real time, and even after the fact it is pretty much not possible due to relativistic effects.


Good points, but pedantically we can measure such times using a mirror and the round trip of a laser pulse.

This has been done with the moon.

But your points about no 8 minutes of warning stand.
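
For the curious, the round-trip numbers (standard mean distances, nothing exotic):

```python
# Round-trip light time for laser ranging. Distances are the usual mean values.
c    = 3.0e8       # m/s
moon = 3.844e8     # meters, mean Earth-Moon distance
sun  = 1.496e11    # meters, 1 AU

print(f"Moon round trip: {2 * moon / c:.2f} s")        # ~2.56 s (lunar laser ranging)
print(f"Sun round trip:  {2 * sun / c / 60:.1f} min")  # ~16.6 min, so even a
# hypothetical mirror at the sun gives no early warning, just a delayed echo.
```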


Even more pedantically, you'd be measuring the round-trip time from the earth to a mirror located near the sun and back, but not at the sun. Does there even exist a mirror anywhere in this universe that could survive and float on the "surface" of the sun? Does the sun even have a surface? Is there a laser powerful enough to overcome all of the electromagnetic radiation from the sun such that you could actually discern its signal? Not pedantically, I stand by my assertion that it cannot be directly measured.


What in this World can we truly measure anyway?


> If you had an 8-minute warning for an earthquake, what could you do

That's not how it works. There may be detectable precursors that could actually give warning, but the 8 minutes referenced is the time it takes light from the sun to reach earth. It's immediate in the sense that it is physically impossible to detect that before those 8 minutes have already elapsed and the light is hitting your detectors. You could try to move your detectors closer to the sun to detect earlier, but any signal you can possibly send back to earth goes at the same speed, so it doesn't help.


What if we moved the earth closer to or farther from the sun?

Say we are able to detect that at time X there will be a solar flare. We know it will reach us at X + 8 min.

But if we start moving away from the sun now, we will have more time to deal with it.

If it were immediate, it should also be immediate even if we moved to 20x the distance from the sun, no?


It's pointless to talk about that 8-minute warning because the speed of light is effectively the speed of causality. There is no way we can signal back faster than that initial wave of radiation hitting us. There is no way to alert us that that 8 minutes is starting, because our theoretically fastest communication will still take 8 minutes to reach us.

But we're not worried about that part of things anyway - it's the mass part of the CME that is the issue.


But the point is that we’ll have 15-24 hours, not 8 minutes.


That may be your point, but it's not the point that started this thread.


It is immediate because we see the flare and any effects that travel at light speed at the same time. There is no warning.


We can, and have, shut off large portions of the grid in seconds.

Take the 2003 blackout. Yes, the whole shutdown took 15 minutes (?). But that's because it was a cascading effect that had to travel down the lines. Once the fault was detected by a particular segment of the grid, the relays responded in milliseconds. They have since the 1920s. Add in an "incoming solar flare" fault condition and we can trip the whole grid in seconds and send a start signal to the diesel generators to warm up to bring her back up.

Pretty nifty trick.

The question is: why would we? The grid has been undergoing a lot of strengthening against EMPs and flares for decades. It's not obvious to me that a flare can take it out, especially if we shed dumb loads (partial blackout, say data centers) before it hits to give the conductors and transformers headroom.


If we had done enough to mitigate EMPs, the nuclear powers of the world wouldn't have space-based nuclear EMPs as the first step of their attack plan. We still do, and so does Russia.

Geomagnetically-induced current is different from the plain-vanilla EMPs anyway—GIC can last for hours.


I don't think there are any space-based EMPs, at least no publicly known ones. There have been, and perhaps still are, plans to detonate high-yield nuclear weapons at high altitude above enemy territory to cause an EMP, but that is a very different situation from a CME solar storm. We're able to spot a CME hours in advance vs mere minutes for submarine-launched ballistic missiles, and the latter would only ever be deployed in the opening minutes of an all-out nuclear war, in which the electric grid and all grid-level precautions against solar flares are likely to be irrelevant, because most of its critical components would be vaporized or torn to shreds by attacks on ground targets anyway. Even just forcing the other side to keep burning money on military countermeasures that do work might be worth a few launchers and warheads.


Google Starfish Prime.


I don't think anything published about that test contradicts what I've written?


Ish. There seems to be some vocabulary fuckery going on around the term "orbital", most likely due to a poor choice of wording upstream. In the shit-blew-up-outside-the-atmosphere sense of "orbital", EMP strikes are absolutely phase 1A of any large-scale nuclear attack, and such capabilities are trivially executed by ICBMs with appropriate warhead selection and detonation-altitude parameters. "Orbital" in the EMP-weapons-literally-orbiting-the-planet sense violates several international treaties; given the levels of secrecy required to pull that off for any length of time, it seems unlikely but is impossible to rule out entirely.


The conclusion from Starfish Prime is that the approach they tried was a really bad idea.


Which is in keeping with all other applications of nuclear weapons attempted to date, so there is that.


I fail to understand why we don't do more to make equipment robust to this kind of thing. There's a whole range of problems that this solves looong before we get to general nuclear exchange.

If stuff was shielded, isolated, and grounded better, everything from your phone to your WiFi would work a lot better and have longer range. Wind slapping power lines together wouldn't destroy everything plugged in inside your house, and solar flares wouldn't be more than a passing concern. The design changes to effect all of this aren't remotely expensive or difficult; we just don't make them.


There is no need to shield electronics. The induced currents only cause damage to long conductors, to the electrical grid and to long fiber optic cables.


That's a complicated one: it's still the electronics, but they need a particular circuit that can stop short-rise-time transients. Also, any larger devices probably want shielding, and ethernet devices will need some extra hardening. We're talking 25+ kV/m events here, so unless your computer can handle the monitor being at a 25 kV differential from the tower, you're gonna have a bad time.


How does it compare with static electricity at that voltage? Which I assume is harmless enough.

Would this also affect mammals, including us, then?


It's not very different in terms of amplitude, though rise times for an E1 EMP impulse can supposedly be single-digit nanoseconds, so roughly equivalent to a >400 MHz impulse. I know from experience that modern electronics can't handle that, because I've fried USB ports by operating radio systems in that band, though there are some obvious differences there.


We might easily be unprepared, but the military tries things that might not work. Military attack is all about trying things that might cripple the enemy and/or increase the cost of an effective defence. So an EMP isn't necessarily used because it is expected to do horrific damage; it is just part of a thorough test of an adversary's preparations, making it harder for them to protect their infrastructure.


> We can, and have, shut off large portions of the grid in seconds.

Speaking from personal experience, this is BS. During a bad wind event, a bunch of lines came down and started a huge forest fire around 10 or 11 pm, which was heading for a small town in 50-70 mph winds. First responders couldn't get in to warn anyone because the downed lines were energized, so they called up the power plant. The whole process to de-energize took hours. There is a kill switch now, but most power plants apparently can't shut off the juice in a matter of seconds, and they may not even have a plan to do so in an emergency.


Yeah I question if it's something we even could do.

But even if it is, it's going to kill and hurt a lot of people. Probably fewer than setting every electrical device on (half of) the planet on fire or whatever, but good luck convincing people of that when their dad is on dialysis or all their food spoils and they can't get to the store.


I say this only half in jest, but could we not request that an adversary trigger their malware to shut down our grid? When life gives you lemons...


Wouldn't such an adversary always choose the option that causes us more damage?


I thought it was incredibly difficult to "cold start" a system after such a complete shutdown. You'd prevent infrastructure damage which is fantastic, but the blackout could still last months anyways.


There has been more interesting work on using transformers for robot motion planning[0]. Getting a robot arm from point A to point B without hitting stuff is a very difficult problem: it is both high-dimensional and continuous. Previous planning methods are computationally intensive and not very good, which is one reason robot motion appears 'unnatural' and robots are generally bad at many tasks we'd like them to do. This approach appears pretty competitive with other planning methods, finding near-optimal paths faster.

[0]https://sites.google.com/ucsd.edu/vq-mpt/home
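
For context on what the baseline planners look like, below is a bare-bones RRT in a 2D toy world. This is my own illustrative sketch of the classic sampling-based approach that learned planners like the one above get compared against, not the VQ-MPT method itself; real arms plan in joint space with proper collision checking, but the structure is the same: sample, extend toward the sample, keep collision-free extensions, stop near the goal.

```python
# Minimal RRT sketch in a 2D world with one circular obstacle.
# Purely illustrative: real manipulators plan in 6-7D joint space with a
# real collision checker, which is where the computational cost explodes.
import math, random

random.seed(0)
START, GOAL, STEP = (1.0, 1.0), (9.0, 9.0), 0.5
OBSTACLE, RADIUS = (5.0, 5.0), 2.0            # circular obstacle

def collision_free(p):
    return math.dist(p, OBSTACLE) > RADIUS

def steer(a, b, step=STEP):
    d = math.dist(a, b)
    t = min(1.0, step / d) if d > 0 else 0.0
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def rrt(max_iters=5000):
    nodes, parent = [START], {0: None}
    for _ in range(max_iters):
        # Goal-biased random sampling of the (here 2D) configuration space.
        sample = GOAL if random.random() < 0.1 else (random.uniform(0, 10), random.uniform(0, 10))
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        new = steer(nodes[i_near], sample)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, GOAL) < STEP:          # close enough: trace path back
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt()
print(f"found path with {len(path)} waypoints" if path else "no path found")
```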


Informative comments like this are the reason I go to the comment section before reading the article.


NIF uses inefficient lasers because they were cheap to build at the time and because NIF is a science experiment. Lasers have gotten better. And lasers aren't the only way ICF fusion could be carried out; it may be possible to use ion beams instead[0].

It does not matter that the fusion reactions last only microseconds if they generate more energy than went into them. Using optimistic but not unrealistic assumptions, it appears feasible for electricity costs to reach $25 per MWh[1] with ICF, with the most important factors driving cost being high gain and high yield per shot.

[0]https://ieeexplore.ieee.org/document/650904/

[1]https://royalsocietypublishing.org/doi/10.1098/rsta.2020.005...
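
The basic power-balance arithmetic behind why gain, yield, and repetition rate dominate (my own illustrative numbers, not the cost model from the paper):

```python
# Net electric power of a hypothetical IFE plant. All inputs are illustrative
# assumptions, chosen only to show how gain and rep rate enter the balance.
driver_energy_MJ = 2.0     # laser/ion-beam energy delivered to the target per shot
gain             = 100.0   # fusion yield / driver energy on target
rep_rate_Hz      = 10.0    # shots per second
eta_thermal      = 0.40    # thermal-to-electric conversion
eta_driver       = 0.10    # wall-plug efficiency of the driver

yield_MJ       = gain * driver_energy_MJ
gross_MW       = yield_MJ * rep_rate_Hz * eta_thermal
driver_draw_MW = driver_energy_MJ / eta_driver * rep_rate_Hz
net_MW         = gross_MW - driver_draw_MW

print(f"gross {gross_MW:.0f} MW, driver {driver_draw_MW:.0f} MW, net {net_MW:.0f} MW")
# gross 800 MW, driver 200 MW, net 600 MW. Halve the gain and the net output
# drops to 200 MW for the same capital cost, which is why gain per shot matters so much.
```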


Those numbers don't seem feasible. Near as I can tell, a steam generator with a magical source of never-ending heat would cost $15/MWh. There is no way a high-tech facility and the personnel required to run it are costing $10/MWh.


There are three routes to bypass this issue:

1. Move to supercritical CO2 turbines (lower mass -> lower cost); needs R&D on corrosion-resistant alloys

2. Move to thermoelectric or photovoltaic generators

3. Use aneutronic fusion and directly use the charged particles


1. would have been done already if there were such alloys.

2. photovoltaic generators seem popular, but they don't require spending money on a fusion core.

3. The problem is hard, so let's replace it with a harder one?


What’s your math on $15/MWh? As far as I can tell, the steam turbine is a pretty insignificant part of the price of coal energy generation


Are you allowing for cooling towers, water treatment, pipework, and so on? Also the grid connection hardware, spare turbines, cooling pumps, etc., and O&M costs?


$15/MWh is 1.5¢/kWh, right? That's pretty darn cheap, and means that the other 10-20 cents would be transmission and fuel, yes?


20 cents? I pay 13¢/kWh.


You're within the stated range; do you think you have the most expensive electricity in the world? 25¢/kWh isn't unreasonable. Ireland pays 52¢/kWh.

https://www.statista.com/statistics/263492/electricity-price...


NIF exists to research nuclear weapons without breaking the treaties. Their fusion energy research is a nice corollary but it is absolutely not their focus.


The work from Rochester mentioned briefly is probably the second-best-performing fusion experiment, and it was done with a laser that's at least 29 years old.

What the article doesn't mention is that the approach demonstrated could decrease the cost of fuel pellets. In NIF, the laser energy is first converted to X-rays inside a container made of expensive elements like gold, and those X-rays compress the fuel pellet. What Rochester demonstrated is that the fuel pellet can be imploded directly with the lasers.

