
Perhaps it would help to give a longer example for the parent. For many complex systems and differential equations, modern solution techniques such as finite element, finite difference, and finite volume methods provide the best available performance and model fidelity. Many equations, such as those governing fluid dynamics, cannot be solved analytically and must be solved numerically.

Now, if a differential equation is linear, after discretization, it can essentially be boiled down to a linear system such as Ax=b. The dynamics can be found in the linear operator A and the forcing term is b. We're interested in x. As such, we seek something akin to x = inv(A)b. If A is a matrix, we may invert it. If we have a time dependent problem, we essentially find this using a time integrator such as a Runge-Kutta method.
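
To make that concrete, here is a minimal sketch (the grid size, forcing function, and variable names are just illustrative) of a 1D Poisson problem discretized with finite differences: the discrete operator A and forcing b are assembled and the resulting Ax = b is handed to a standard linear solver.

    import numpy as np

    # 1D Poisson problem -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
    # discretized with second-order central finite differences.
    n = 99                          # number of interior grid points
    h = 1.0 / (n + 1)               # grid spacing
    x = np.linspace(h, 1 - h, n)    # interior nodes

    # Assemble the tridiagonal operator A (the discrete dynamics).
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    # Forcing term b; here f(x) = pi^2 sin(pi x), whose exact solution is sin(pi x).
    b = np.pi**2 * np.sin(np.pi * x)

    # Solve Ax = b. In practice we factor and solve rather than form inv(A) explicitly.
    u = np.linalg.solve(A, b)

    print(np.max(np.abs(u - np.sin(np.pi * x))))   # small O(h^2) discretization error

Note that inv(A) is never formed explicitly; factoring and solving is cheaper and more stable.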

If the differential equation is nonlinear, we have a nonlinear system of the form F(x) = b where F is a nonlinear operator. In order to solve this, we typically apply some kind of Newton-type method, which is really just truncating a Taylor series and solving repeatedly. Namely, F(x+dx) ~= F(x) + F'(x)dx. If we want F(x+dx) to equal b, we get the iteration F'(x)dx = -(F(x) - b), where we solve for dx and update x. The term F'(x) is the total or Frechet derivative and is a linear operator, so each step brings us back to the linear system case above. However, the question is how we find F'(x). We can derive it by hand, but doing so can be laborious and error prone. Alternatively, we can use automatic differentiation to find this operator. Complex-step differentiation is one algorithm in that collection of methods; others include forward and reverse mode.
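
As a rough sketch (the toy operator F and helper names are made up for illustration), here is Newton's method where each column of F'(x) is built with complex-step differentiation, dF/dx_j ~= Im(F(x + i*h*e_j)) / h:

    import numpy as np

    def F(x):
        # A small nonlinear operator; anything analytic in each argument
        # works with the complex-step trick.
        return np.array([x[0]**2 + x[1]**2, x[0] - x[1]])

    def jacobian_complex_step(F, x, h=1e-20):
        # Build F'(x) one column at a time: dF/dx_j ~= Im(F(x + i*h*e_j)) / h.
        n = len(x)
        J = np.zeros((n, n))
        for j in range(n):
            xc = x.astype(complex)
            xc[j] += 1j * h
            J[:, j] = F(xc).imag / h
        return J

    def newton(F, x0, b, tol=1e-12, max_iter=20):
        # Newton's method: solve F'(x) dx = -(F(x) - b), then update x <- x + dx.
        x = x0.astype(float)
        for _ in range(max_iter):
            r = F(x) - b
            if np.linalg.norm(r) < tol:
                break
            dx = np.linalg.solve(jacobian_complex_step(F, x), -r)
            x = x + dx
        return x

    print(newton(F, np.array([2.0, 0.5]), b=np.array([2.0, 0.0])))  # -> approx [1., 1.]

The complex step avoids the subtractive cancellation that plagues ordinary finite differences, so h can be taken absurdly small and the derivative comes out accurate to machine precision.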

Outside of differential equations, optimization algorithms need the gradient, and often Hessian-vector products, to work effectively. Automatic differentiation can find these quantities as well. In fact, back propagation in machine learning is essentially reverse-mode automatic differentiation combined with a steepest descent update.
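
For a toy illustration (again with made-up names, and reusing the complex-step trick from above rather than true reverse mode), a steepest descent loop fed by a numerically computed gradient looks like this:

    import numpy as np

    def f(x):
        # Toy objective; analytic in each argument so the complex step applies.
        return (x[0] - 1.0)**2 + 5.0 * (x[1] + 2.0)**2

    def grad_complex_step(f, x, h=1e-20):
        # Gradient one component at a time: df/dx_j ~= Im(f(x + i*h*e_j)) / h.
        g = np.zeros_like(x)
        for j in range(len(x)):
            xc = x.astype(complex)
            xc[j] += 1j * h
            g[j] = f(xc).imag / h
        return g

    # Steepest descent: x <- x - step * grad f(x).
    x = np.array([4.0, 4.0])
    for _ in range(200):
        x = x - 0.1 * grad_complex_step(f, x)

    print(x)   # -> approx [1., -2.]

Reverse mode produces the entire gradient in one backward sweep instead of one pass per component, which is why it, and not the complex step, is what machine-learning frameworks actually use.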

Anyway, there are a lot of different applications; these are just a few. Personally, I think there are better algorithms than the one presented in the paper, but it is important for historical purposes.

And, to be sure, perhaps you already know this and if so, I do apologize. Mostly, I think there's some confusion as to how and why we use these methods, so it helps to give some background to others who use this board as well.




Fantastic overview, thank you!



