Hacker News
A canonical Hamiltonian formulation of the Navier–Stokes problem (cambridge.org)
185 points by Anon84 5 months ago | 55 comments




The Hamiltonian formulation of classical mechanics is such a beautiful way of describing classical motion compared to the Newtonian formulation. See Landau & Lifshitz book 1! All the Hamilton-Jacobi equations are derived from observing symmetries in spacetime; even Newton’s 3 principles are derived (F=ma and the rest). All of this has the added benefit of transposing well into quantum mechanics, where forces are replaced with Hamiltonians anyway.

For fluid mechanics I don’t know if Hamiltonians are the right formulation.


Is it not the case that F=ma and other Newtonian laws are encoded in the Lagrangian, and therefore not derived from symmetries? After all, nature and her symmetries alone do not tell us how her physics works; there has to be some rule for time-evolution as well, and that's what's encoded in L (L's form is essentially a list of pairings of variables and their costs of evolution, which for classical mechanics is L = T - V = ∫ p·dv + ∫ F·dx, which says "the cost of changing v is p and the cost of changing x is F").

(QFT sort-of has an explanation for time evolution in terms of symmetries alone, but it requires a lot more machinery. But afaik classical mechanics does not.)
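Whichever way you slice the philosophy, plugging L = T - V into the Euler-Lagrange equation does reproduce F = ma. A quick symbolic check for a mass on a spring (a toy setup of my own, using sympy):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian for a mass on a spring: L = T - V
v = sp.diff(x(t), t)
L = m * v**2 / 2 - k * x(t)**2 / 2

# Euler-Lagrange equation: d/dt(dL/dv) - dL/dx = 0
eom = sp.diff(sp.diff(L, v), t) - sp.diff(L, x(t))
print(sp.simplify(eom))  # k*x(t) + m*x''(t), i.e. ma = F with F = -k*x
```

Setting the result to zero gives m·x'' = -k·x, which is just Newton's second law for the spring force.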


> the cost of changing v is p and the cost of changing x is F

I’m sure the equation is right and all, but this seems sideways in terms of an intuitive explanation: velocity changes position, and force changes momentum. Force doesn’t directly change position (only indirectly via changing momentum), and momentum doesn’t change velocity; having momentum is consistent with a constant velocity. It doesn’t even make much sense to me to think of integrals as being about “costs of changing”; would that not be a derivative?


Although not the same, I remember the first time I encountered the description of dynamic systems using Lagrangian mechanics and it was beautiful. It finally made sense.

Instead of doing what seemed like some mumbo-jumbo starting from a static system, that, eventually and through convoluted ways, led to the equations we were kind of looking for, here was a clean and logical way to look at what was relevant to the dynamics of the system, with an infallible, straightforward, mathematical way of getting the equations of motion.

I'm not that familiar with Hamiltonian formulations, but their conservation properties could bring some important improvements to the current way the Navier-Stokes equations are treated. Conservation of linear and angular momentum, for a start, could be nice.

Now, let's see if I understand anything from this paper..


I have tried to teach myself both Hamiltonian and Lagrangian classical mechanics and there’s one mental hurdle I have not been able to get over. The problems are generally set up with starting position and momentum known, and ending position and momentum known, and then the math tells you the path taken along the way. But what if the ending position and momentum is unknown? How does one use these formulations of mechanics to predict the future and not just postdict the past? Is this just how beginner problems tend to be set up?


Hm, well typically in Lagrangian formulations, you're right that you define a functional called an "action", and this involves some integral from an initial to a final position in configuration space. Then the idea is that the system will evolve in such a way as to minimize this action. A stationary point (like a maximum or minimum) has gradient 0. So a condition for this "principle of least action" to be realized is that the "gradient" (called the variation here) of the action is 0 for the real path. That is, if you take a path the system carves out through the configuration space and perturb that path, and you compute the total effect on the action from that small perturbation, you find it should be 0.

You do this by actually taking this derivative and you find that you can guarantee that the differential of the action is 0 if the system takes a path which is the solution to a set of differential equations, and you can generally find the solution to those differential equations only with information about the origin, ie you don't need both the start and end conditions to find a unique solution.

So you're right, it's a bit weird conceptually. You sort of start by saying "the system obeys a path that minimizes the action between its initial and final positions" and then find that this produces a set of conditions which form a system of diff eqs that you can find general solutions for, selecting out a unique solution with just the initial conditions, no need for the final condition.
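The stationarity is easy to poke at numerically: discretize a free-fall path, perturb it while keeping the endpoints pinned, and the action only goes up. A toy sketch (the mass, times and perturbation are all my own choices):

```python
import numpy as np

m, g = 1.0, 9.8
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def action(x):
    v = np.diff(x) / dt                            # velocity on each interval
    xm = (x[:-1] + x[1:]) / 2                      # midpoint positions
    return np.sum(m * v**2 / 2 - m * g * xm) * dt  # S = integral of (T - V) dt

x_true = -g * t**2 / 2            # solves x'' = -g with x(0) = 0, v(0) = 0
bump = np.sin(np.pi * t)          # perturbation vanishing at both endpoints

S_true = action(x_true)
S_pert = action(x_true + 0.1 * bump)
print(S_pert > S_true)  # True: the physical path is a stationary (here minimal) point
```

Any endpoint-preserving perturbation raises S at second order, which is exactly the "variation is zero on the real path" condition.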


Good question.

The answer is, unfortunately, boring. You solve equations to find the unknowns. If there are too many unknowns there are too many solutions and the whole thing is useless.


Please read "Story of your life" by Ted Chiang. A beautiful story that involves a discussion of Newtonian and Hamiltonian formulations. One of the best stories I have read.


Since you sound like you know what you're talking about, wanna tell us what is the difference between the hamiltonian and lagrangian formulations?


Not the person you were responding to, but: the Lagrangian formulation describes physics in terms of Least Action (so minimizing (or maximizing) S = ∫ L dt) on a manifold in (x,v) coordinates, and its equations of motion are like L_x - d_t L_v = 0. The second term tends to be second-order in time.

The Hamiltonian formulation performs a Legendre transformation[1] on L giving H = v L_v - L, which is essentially a convenient trick: it reparameterizes L in (q,p) coordinates, where p = L_v, and writes S as ∫ (pv - H) dt. This changes the E.o.M. to (qdot, pdot) = (H_p, -H_q), which is (a) first-order in time and therefore easier to deal with and (b) geometrically elegant because it is a rotation in (q,p) space, which is easy to think about.
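The first-order, area-preserving structure also pays off numerically: a symplectic integrator keeps the energy from drifting over long runs. A minimal sketch for the harmonic oscillator (units and step size are my own choices):

```python
# Harmonic oscillator with m = k = 1: H(q, p) = p**2/2 + q**2/2,
# so Hamilton's equations are qdot = H_p = p and pdot = -H_q = -q.
def step(q, p, dt):
    # Symplectic (semi-implicit) Euler: kick p with the old q, then drift q.
    p = p - q * dt
    q = q + p * dt
    return q, p

q, p = 1.0, 0.0
H0 = (p * p + q * q) / 2
for _ in range(10_000):
    q, p = step(q, p, dt=0.01)
H = (p * p + q * q) / 2
print(abs(H - H0))  # stays small: the map preserves phase-space area
```

A naive (non-symplectic) Euler step would spiral outward in (q,p) and the energy would grow without bound.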

At least those are the reasons everyone gives why it's important. I think the real reason is that QM is formulated in terms of H so you need to know it, and also that this (q,p) thing makes statistical mechanics easier because it has good geometric properties: it amounts to saying that time evolution conserves area in (q,p) space, which means that you can treat the evolution of many-particle systems as being in a whole block of states at once, treated as a geometric object that flows over time.

I've never been able to understand if there is something "truly fundamental" about H compared to L, or if H is more of a mathematical convenience for making the equations first-order.

[1]: https://blog.jessriedel.com/2017/06/28/legendre-transform/ is a good exposition, if still pretty tough to understand. Legendre transforms are hard to grok.


In classical mechanics the Lagrangian and Hamiltonian formulations are mostly equivalent (though, confusingly, the Lagrangian formulation is due to Hamilton, who showed that the Lagrange equations can be derived from a variational principle, while the complete Hamiltonian formulation is due to some later physicists; for what has become the Hamiltonian formulation, Hamilton showed only how to obtain the system of first-order equations from the system of second-order equations, but that had already been done before by Cauchy in 1831, in a journal that few were reading, so it was ignored).

Still, in classical mechanics some consider the Hamiltonian formulation to be more fundamental, because the first order equations can be applicable to some problems that have discontinuities incompatible with second-order equations, though such problems are artificial (real systems are continuous enough, strong discontinuities appear only through approximations).

However this changes completely in relativistic mechanics, where the Hamiltonian is not invariant, while the Lagrangian is a relativistic invariant quantity.

This makes the Lagrangian formulation a far better choice in relativistic mechanics and it is a strong argument to consider the Lagrangian formulation as the fundamental one and the Hamiltonian formulation as only an approximation that can be used at small velocities or only as a mathematical trick for numeric solutions.

When the Lagrangian formulation is used, after a coordinate system is chosen, it is always possible to use the Legendre transformation to obtain a Hamiltonian system of first order equations. However, in the relativistic case the system depends on the coordinate system. Therefore, if the coordinate system is changed, the Hamiltonian equations must be derived again from the invariant Lagrangian formulation.

The reason why the Lagrangian is a relativistic invariant is that this scalar value is the projection of the energy-momentum 4-vector on the trajectory curve in space-time. The Hamiltonian is just the temporal component of the 4-vector, which is changed by any coordinate transformation. Therefore L is more fundamental than H, in the same sense that the magnitude of a vector is more fundamental than any of the components that the vector happens to have in some particular coordinate system.

The traditional formulation of quantum mechanics using H is a serious inconvenience for extending it to the relativistic case. Coherent formulations of relativistic quantum mechanics must also use L instead of H.


Thank you! The Lagrangian as projection of energy-momentum actually makes sense, unlike the "let's just subtract potential from kinetic. No reason, it just works" story. I'd been idly wondering that for a while (and this is as someone with a physics degree, though in astro, which is a good bit more applied).


Hamilton has introduced what he has called the "Principal Function S", which is used in his variational principle on which the Lagrangian formulation is based.

Nowadays this function is frequently called "Hamilton's action", though this is not a good idea because it causes confusions with what Hamilton, like all his predecessors, called "action", which is the integral of the kinetic energy.

The "Principal Function S", which is a scalar value, i.e. a relativistic invariant quantity, is the line integral of the Lagrangian over the trajectory in space-time, i.e. it is the line integral of the energy-momentum 4-vector over the trajectory in space-time.

Like any line integral of a vector, the line integral of the energy-momentum 4-vector is equal to the line integral over the trajectory of its projection on that trajectory.

This is why the Lagrangian is the projection of the energy-momentum 4-vector. Hamilton found the correct form of this line integral in relativistic theory, even though that was about three quarters of a century before the concept of 4-vectors became understood.

The "Principal Function S", i.e. the integral of the energy-momentum, can be considered as a more fundamental quantity than the Lagrangian, which is its derivative (the energy-momentum vector is its gradient). In quantum mechanics the "Principal Function S" is the phase of the wave function, so it is even more obvious that it must be an invariant quantity.


There is a way of _arriving_ at that subtraction, rather than just throwing it out there.

A resource I created:

Calculus of Variations as applied in physics: http://cleonis.nl/physics/phys256/calculus_variations.php

Hamilton's stationary action: http://cleonis.nl/physics/phys256/energy_position_equation.p...

In that resource I show why it works.

In an earlier answer I gave more information about that resource. To find that earlier answer: go up to the entire thread, and search on the page for my nick: Cleonis


This is the exact point that confused me a lot (and still confuses me) when I tried to read the "The Theoretical Minimum: What You Need to Know to Start Doing Physics" : "Hey, let's just fix/define the lagrangian as T - V and you'll see that after some magical math stuff in the following chapter, we'll find back newtonian equations. Trust me for now".

If anyone has a reference/book/paper that allows you to learn this concept more intuitively, I'd be grateful.


I have created a resource for the purpose of making Hamilton's stationary action transparent.

It is possible to go in all forward steps from F=ma to Hamilton's stationary action; that is what I present.

The path from F=ma to Hamilton's stationary action consists of two stages: (1) Derivation of the work-energy theorem from F=ma (2) Demonstration: when the conditions are such that the work-energy theorem holds good then Hamilton's stationary action will hold good also.

I recommend that you first absorb the presentation of the subset of Calculus of Variations that is applied in physics: http://cleonis.nl/physics/phys256/calculus_variations.php

Discussion of Hamilton's stationary action: http://cleonis.nl/physics/phys256/energy_position_equation.p...

These presentations are illustrated with interactive diagrams. Each diagram has one or more sliders for manipulation of the contents of the diagram. That way a single diagram can offer a range of cases/possibilities.

About my approach: I think of Hamilton's stationary action as an engine with moving parts. To show how an engine works: construct a model out of translucent plastic, so that the student can see all the way inside, and see how all of the moving parts interconnect. My presentation is in that spirit.


Thank you.


Do you know of a good source on the history of this stuff, such as the relationship with cauchy? Or is it just something you pick up?


Unfortunately there are tons of books about the history of physics that get many details wrong, because apparently the authors have not actually read many of the primary sources, especially when they had not been written in English, but in German, French, Latin or other languages.

Even the original works written in English, like those of Hamilton, pose serious problems when you are not careful, because many words used in physics have changed in meaning during the time, some of them multiple times (e.g. "energy", "action" or "force"). Those who are not aware of this frequently reach wrong conclusions about who has said what.

A few authors have actually read the primary sources, but those typically do not understand physics so well as to be able to distinguish the important concepts from those of little importance, so they are not able to trace the evolution of the important concepts :-(

The only foolproof method to understand the history of physics is to read very carefully the original works (carefully, because the words and the notations used may be very different from those used now). Fortunately, that has become much easier now than before, because there are many online repositories with digitized scientific works from the previous centuries.

For example, "Sir William Rowan Hamilton (1805-1865): Mathematical Papers":

https://www.emis.de/classics/Hamilton/index.html

The most important for this thread are (especially the 2nd, which introduces the modern Lagrangian formulation): 1834-04: “On a General Method in Dynamics”, and 1834-10 (but published in 1835): “Second Essay on a General Method in Dynamics”.

The first to show how to rewrite the second-order system of Lagrange equations into the first-order system of equations now called Hamilton's equations (i.e. by using the equivalent of the Legendre transformation) was Poisson in 1809, but the last time I searched for that work online I could not find it.

After Poisson, Cauchy has presented in 1831 a method equivalent with that published by Hamilton in 1835. I also could not find online the 1831 work, but an extract of it has been republished in 1837 and this can be found online in many places, for instance at:

http://www.numdam.org/volume/JMPA_1837_1_2_/

Note sur la variation des constantes arbitraires dans les problèmes de Mécanique, Cauchy, Augustin, pp. 406-412.

You can read some books about the history of physics for a general acquaintance with the authors and the published works from the previous centuries, but you must remain skeptical about any opinions presented there until you read yourself the primary works to verify if they really contain what is claimed about them, or they contain something else.

I have found the reading of many old scientific papers, especially from the 19th century, surprisingly useful for a better understanding of the modern theories that I use now.


It's worth mentioning that Feynman's dissertation was on a Lagrangian formulation of quantum mechanics. However, just because Feynman thought it was interesting doesn't mean it's a good idea for the rest of us (although it's a relatively short 69 pages and an interesting read).

http://files.untiredwithloving.org/thesis.pdf

The path integral method of QED does make working with the Lagrangian for field theories easier.

https://en.wikipedia.org/wiki/Path_integral_formulation


> I've never been able to understand if there is something "truly fundamental" about H compared to L, or if H is more of a mathematical convenience for making the equations first-order.

Actually, Hamiltonian formulation, being equivalent, offers more room for finding solutions. Lagrangian formulation of the Least Action principle allows you to search for a solution employing arbitrary smooth re-parameterizations of the configuration variables `q`. The Hamiltonian formulation, on the other hand, allows you to re-parametrize the entire phase space (q,p) and find solutions that are much harder to get in Lagrangian formulation.


That sounds interesting. Do you know of a particular example where that helps?

I guess maybe it's 'all of them'. The pedagogy on Hamiltonian mechanics has been strangely hard for me to learn beyond the elementary level, like it doesn't make enough sense for my brain to organize it in a memorable way.


Action-angle coordinates are a good example.
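For the harmonic oscillator that example is explicit: the substitution q = sqrt(2I/ω)·sin θ, p = sqrt(2Iω)·cos θ mixes q and p (which no point transformation of q alone can do) and collapses H to a function of the action I alone. A quick symbolic check, with symbols of my own choosing:

```python
import sympy as sp

I, theta, w = sp.symbols('I theta omega', positive=True)

# Action-angle substitution for the harmonic oscillator (m = 1)
q = sp.sqrt(2 * I / w) * sp.sin(theta)
p = sp.sqrt(2 * I * w) * sp.cos(theta)

H = p**2 / 2 + w**2 * q**2 / 2
print(sp.simplify(H))  # I*omega: theta is cyclic, so the action I is conserved
```

With H depending only on I, Hamilton's equations give Idot = 0 and thetadot = ω immediately: the problem is solved by the change of coordinates itself.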


That’s a correct usage of the factorial.


So, the gist of the paper seems to be:

1. Put all terms of the momentum and mass conservation equations of Navier-Stokes on the right-hand side. These should normally be identically 0.

2. Define an error term ('residual') R for each equation; this is basically the value of the RHS from step 1. The error term is 0 when pressure and velocities satisfy N-S.

3. Define the Lagrangian as the sum of the residuals squared.

4. Apply the Legendre transformation to the Lagrangian to get the Hamiltonian and the conjugate momenta.
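Steps 1-3 can be sketched on a simpler PDE. Here is the residual-squared idea applied to 1D viscous Burgers, u_t + u·u_x = ν·u_xx, as a stand-in of my own choosing (not the paper's actual Navier-Stokes construction):

```python
import numpy as np

nu = 0.1
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)  # periodic grid
dx = x[1] - x[0]

def residual(u, u_prev, dt):
    # R = u_t + u*u_x - nu*u_xx, discretized; R = 0 iff u satisfies the PDE
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return (u - u_prev) / dt + u * u_x - nu * u_xx

def lagrangian(u, u_prev, dt):
    # step 3: sum of squared residuals; exact solutions are its zeros
    return np.sum(residual(u, u_prev, dt) ** 2) * dx

u0 = np.zeros_like(x)               # u = 0 trivially solves Burgers
print(lagrangian(u0, u0, dt=0.01))  # 0.0
print(lagrangian(np.sin(x), np.sin(x), dt=0.01) > 0)  # True: not a steady solution
```

The point of the construction is that minimizing this "Lagrangian" to zero is equivalent to solving the original equations, which is what lets the Hamiltonian machinery be bolted on.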


Seems simple; I'm not sure how profound it is.

Is there a reason people had not thought of this for the last several decades? Like, was there some missing math or discovery that enabled this?


> Is there a reason people had not thought of this for the last several decades?

I'm guessing because it's not simple.


Does this result mean that aerodynamic simulation can be made far less compute intensive, like I think it does?

This could be as useful as Feynman diagrams are to physics calculations.


I worked in that field doing numerical many-body simulations of electron dynamics interacting with their environment in solid state devices like quantum dots, graphene, photonic waveguides & cavities, etc.

You would start from a Lagrangian formulation of the classical interaction, let's say light-matter, that would yield for example the Schrödinger and Maxwell equations. Following a Legendre transformation (there was a post on the HN front page the other day about that) you end up with a so-called Hamilton operator, from which you can derive a (huge) set of coupled differential equations which you then solve.

Here, if you wanted to increase temporal accuracy, it typically just meant longer calculation times.

We also tried a different approach using Feynman's path integrals, and boy did that explode numerically. We optimized our programs to the point where everything was reduced to work on bits, but to no avail: it was numerically unstable, and the memory consumption went through the roof the longer or more accurate you wanted to make the simulation.

So, I would argue that NO, Feynman does not make it easier per se.

However, other groups made it work somehow.

As a starting point you can check that paper and its references from the introduction section.

https://doi.org/10.1002/pssb.201000842


Okay I gotta be honest, and maybe I'm a little ignorant, but when did we start saying compute instead of computation? I feel like I went away on vacation and missed out on an in-joke. Should I be saying compute instead?


I think of "compute" as referring to the resources for computations, rather than the computations themselves. For instance "compute-intensive" for an algorithm means that it is expensive in terms of computation resources, i.e. CPU cycles, as opposed to say I/O or network usage or something. Sometimes it refers to the actual physical hardware, sometimes to the theoretical resource of CPU time.


It's been commonly used as a noun in the HPC / scientific computing space for a long time (relevant to this thread) typically talking about "compute bound" and "memory bound" algorithms. It's spilled over everywhere now that number crunching is hip for machine learning.


This exact topic came up in another thread a few months ago, and I did some digging:

I believe the term "compute node" was first used in the context of the Intel ASCI Red supercomputer that was installed at Sandia in 1997. This later led to the Cray XT3, XT4, XT5 families of machines that used the same terminology.

But I believe the term was not in generic usage outside of those specific supercomputers until around 2005-2010.

And it is more recently that the term has been extended beyond referring to hardware.


Compute as a noun may be a newer usage, but a noun sharing the same form as the verb is common in English, so it at least conforms to the patterns of the language.

For example dispute/disputation or repute/reputation ("a person of ill repute").


It looks like the general trend of truncating nouns down to the verb they're based on instead of requiring the noun-ified form. You may have seen these:

1) an invitation to the party -> an invite to the party

2) I like that quotation. -> I like that quote.

3) I gave him a consultation. -> I gave him a consult.

You could even count:

4) What's the request? -> What's the ask?

Also, broken English seen on receptacle in China:

5) "The environment needs your conserve." (instead of "conservation")


'Compute' connotes hardware, 'computation' does not.


Rather, 'compute' is a verb, 'computation' is a noun, and 'computationally' is an adverb (which is the most appropriate choice in the context of GGP's statement). All of these words refer to the same concept and may involve hardware computers (a more modern use of these words) or people-as-computers (the older, but still valid, use of the words).


"Compute" as used upthread is in fact a noun. That's the point, it's a new word. And sort of a dumb one, but new nonetheless.


Digging more into other internet discussions, a lot of people disagree about the usage. Many believe it's fine to use compute as a noun since it's a truncation of compute resources or compute nodes. Seems like an industry shibboleth like how data/stats people say "data" as a countable plural.


I think "compute" here is the verb pressed into service directly as a noun (a zero derivation, like "a swim" as in the activity), rather than a gerund, which would be "computing."


Thank you! I always just assumed the difference was verb/noun.


People favor shorter words for common concepts. We've started needing something equivalent to the 4-syllable "computation" frequently lately, so we nouned the verb "compute."

In the short-term, I think both words are fine. If it matters, use whichever one will best help you achieve your goals (e.g., if you're talking to a person who frequently uses n>>1 devices to perform computationally intensive workloads, you'll probably sound very mildly out-of-place nowadays if you don't choose "compute," which may or may not be the image you want to portray of yourself).

Slightly longer term, I expect this will be one of those cases where dictionaries and pedants try to settle the matter, nobody complies, and we have a "gray" vs "grey" scenario indefinitely. Beyond the next year or two I'm hesitant to place bets on the multitude of possible outcomes.

Way off-topic, patterns of speech like that might be a fun way to fingerprint the training data used for an LLM.


"compute" is used by the same people who call a collection of things a "stack".


What makes you think so? I quickly skimmed through and found nothing of the sort.

Edit: My take-home message was that there is an equivalent higher-order formulation which allows more structure and might be theoretically interesting. Usually higher-order formulations are numerically more challenging.


Do Feynman diagrams reduce computation? My understanding is that Feynman diagrams are a clever, simple way to document particle interactions; that is, more for human consumption than machine. My gut tells me it is the same as saying "we were able to reduce the computational complexity of our simulation because we had a good flowchart", by which I mean it may help, but the flowchart is not the code (yes, UML, I am looking at you).


For a second I thought they solved it


Haha, great paper. I read the title and the abstract and went WTF?!?

> Given the title of this paper, it is incumbent on the authors to assure the reader that we do not claim to have done the impossible

Awesome. Though I have no clue what the Hamiltonian formulation is.


It describes a system using the energy concept: the total energy of the system, which is the sum of kinetic energy and potential energy. Its formula often looks like H = T + V, where T represents kinetic energy and V represents potential energy.

Both quantum mechanics and molecular dynamics share a similar concept.

In structural mechanics, we use the virtual work method to analyze hyperstatic structures, determining displacements in a structure given the forces acting on it. Another kind of Hamiltonian.
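A minimal illustration of H = T + V staying constant along the motion, for free fall (the numbers are made up):

```python
m, g, h0 = 2.0, 9.8, 10.0          # mass, gravity, drop height (arbitrary)
for t in (0.0, 0.5, 1.0):
    h = h0 - g * t**2 / 2          # height at time t
    v = g * t                      # speed at time t
    H = m * v**2 / 2 + m * g * h   # kinetic + potential
    print(H)                       # m*g*h0 = 196.0 (up to float rounding)
```

Kinetic energy grows exactly as fast as potential energy shrinks, so H never changes.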


I'm classically educated on fluids so asking me to deviate from Newtonian mechanics for viscous fluids is ... difficult.

Nothing against this tho, I just don't have the foundation for this.


I am curious about what you mean by “classically educated.” In my undergrad physics education, computing Hamiltonians was pretty much an entire semester of classical mechanics in my junior year.

We didn’t really touch fluids though. Does “classical” mean something different there?


I assume that the classical exposition to fluid mechanics uses newtonian concepts, mainly forces.


I still remember in high school, we only needed two or three equations to solve a free-fall problem. In another book, basically the same question, but someone used the Hamiltonian framework, with really complex PDEs and a couple of pages of those crazy equations, to solve the same thing, and eventually got the same results.

I still remember that was mind-blowing. High school physics is so simple, whereas the Hamiltonian is so complex. I later noticed that the Hamiltonian approach is kind of a more standard way to solve the problem. Never mind, I'm not an expert on it, but I'm just kind of amazed by Hamiltonian mechanics.


The Hamiltonian is a lot more flexible with respect to frame. The Newtonian formulation works great for simple cases but as it gets more complex it's harder and harder to pick a reference frame that's easy to compute.

Effectively, by working with energy rather than force, you can avoid working with vectors. That ends up being simpler as the number of components grows.


I have no clue about this paper. Only comment is that this was published April 1st.



