This paper does a weird thing by introducing complex numbers and then presenting a code solution that effectively implements dual numbers [0] while still calling them complex:
"The new version of the complexify module has
been improved by using the new function definitons discussed in the previous section, i.e., definitions that use the original real function for the real
part and the derivative of the function multiplied by
h for the imaginary part, rather than the complex
function definition." (emphasis mine)
Dual numbers ( "a + bE", where E [usually small epsilon] is a symbol defined by E^2=0, but E itself doesn't equal 0) are perfectly suited for automatic differentiation [1,2], so this paper is accurate. I just think it's confusing, probably especially for someone new to the topic, to kind of conflate dual and complex numbers in this way.
It doesn't make much sense to me. In the '90s, my HP 48 from Service Merchandise (0, lol) could solve most PDEs, ODEs, systems of equations, and integrals symbolically. MATHLAB has been doing it since the late 1960s, and Mathematica since the late '80s. They're CASes. [1]
If you wanted to solve something more complicated, like simulating a nuclear reactor, you'd simplify it as much as possible and then use Monte Carlo or other heuristics. (I worked on such a product.)
^The article failed to mention the most interesting part of the pickup area: the enormous conveyor belt, the long set of rollers, and the noises from the upstairs inventory warehouse.
What doesn't make sense? You can't solve fluid dynamics (in the vast majority of cases) with a computer algebra system. So if you want to compute forward sensitivities, automatic differentiation is a very reasonable way to do it.
Perhaps it would help to give a longer example for the parent. For many complex systems and differential equations, modern solution techniques such as finite element, finite difference, and finite volume methods provide superior performance and model fidelity. Many equations, such as those for fluid dynamics, cannot be solved analytically and must be solved with a numerical method.
Now, if a differential equation is linear, then after discretization it can essentially be boiled down to a linear system such as Ax = b. The dynamics live in the linear operator A and the forcing term is b; we're interested in x. As such, we seek something akin to x = inv(A)b. If A is a matrix, we may invert it, though in practice we factorize and solve rather than form the inverse. If we have a time-dependent problem, we essentially find x with a time integrator such as a Runge-Kutta method.
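To make the reduction to Ax = b concrete, here's a small sketch in Python/numpy (my own illustrative example, not from the paper): a 1D Poisson problem discretized with finite differences collapses to exactly this form.

    import numpy as np

    # Sketch: discretize -u''(s) = 1 on (0, 1) with u(0) = u(1) = 0 using
    # second-order finite differences; the equation collapses to A x = b.
    n = 5                                   # interior grid points
    h = 1.0 / (n + 1)
    A = (np.diag([2.0] * n)
         + np.diag([-1.0] * (n - 1), 1)
         + np.diag([-1.0] * (n - 1), -1)) / h**2
    b = np.ones(n)                          # forcing term
    x = np.linalg.solve(A, b)               # factorize and solve, don't invert
    print(x)                                # matches u(s) = s(1 - s)/2 on the grid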
If the differential equation is nonlinear, we have a nonlinear system of the form F(x) = b, where F is a nonlinear operator. To solve this, we typically apply some kind of Newton-type method, which is really just truncating a Taylor series and solving for x repeatedly. Namely, F(x+dx) ~= F(x) + F'(x)dx. If we want F(x+dx) to be 0 (folding b into F), we get the iteration F'(x)dx = -F(x), where we solve for dx. The term F'(x) is the total or Fréchet derivative and is a linear operator, so we're back to the linear system above.

However, the question is how do we find F'(x)? We can do this by hand, but that can be laborious and error prone. Alternatively, we can use automatic differentiation to build this operator. Complex-step differentiation is one algorithm in the collection of automatic differentiation methods; others include forward and reverse mode.
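As a 1D caricature of that loop in Python (hedged: a real code differentiates an operator, not a scalar, and my F here is just an example): the complex step gives F'(x) ~= Im F(x + ih)/h, and Newton's iteration follows directly.

    def F(x):
        # example nonlinear residual; we want F(x) = 0
        return x**3 - 2.0 * x - 5.0

    def dF(x, h=1e-20):
        # complex-step derivative: F'(x) ~= Im F(x + ih) / h.
        # No subtraction means no cancellation error, even for tiny h.
        return F(x + 1j * h).imag / h

    x = 2.0
    for _ in range(8):
        x -= F(x) / dF(x)       # Newton: solve F'(x) dx = -F(x), then x += dx
    print(x, F(x))              # root near 2.0945515, residual ~0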
Outside of differential equations, optimization algorithms need the gradient and Hessian-vector products to run effectively. Automatic differentiation can find these quantities as well. In fact, backpropagation in machine learning is the combination of steepest descent with the reverse-mode automatic differentiation algorithm.
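A sketch of the gradient piece (again my own example; note this takes one complex-step evaluation per component, whereas reverse mode gets the whole gradient in a single pass):

    import numpy as np

    def f(x):
        # scalar objective; must accept complex input for the complex step
        return x[0] ** 2 + np.sin(x[1]) * x[0]

    def grad(f, x, h=1e-20):
        # df/dx_k ~= Im f(x + i h e_k) / h, one evaluation per component
        g = np.zeros_like(x)
        for k in range(x.size):
            xc = x.astype(complex)
            xc[k] += 1j * h
            g[k] = f(xc).imag / h
        return g

    x = np.array([1.0, 0.5])
    print(grad(f, x))   # [2*x0 + sin(x1), x0*cos(x1)] ~= [2.4794, 0.8776]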
Anyway, there are a lot of different applications; these are just a few. Personally, I think there are better algorithms than the one presented in the paper, but it is important for historical purposes.
And, to be sure, perhaps you already know this and if so, I do apologize. Mostly, I think there's some confusion as to how and why we use these methods, so it helps to give some background to others who use this board as well.
"The new version of the complexify module has been improved by using the new function definitons discussed in the previous section, i.e., definitions that use the original real function for the real part and the derivative of the function multiplied by h for the imaginary part, rather than the complex function definition." (emphasis mine)
Dual numbers ( "a + bE", where E [usually small epsilon] is a symbol defined by E^2=0, but E itself doesn't equal 0) are perfectly suited for automatic differentiation [1,2], so this paper is accurate. I just think it's confusing, probably especially for someone new to the topic, to kind of conflate dual and complex numbers in this way.
[0] - https://en.wikipedia.org/wiki/Dual_number
[1] - https://github.com/JuliaDiff/DualNumbers.jl
[2] - https://blog.demofox.org/2014/12/30/dual-numbers-automatic-d...