Intuitive Understanding of Euler’s Formula (betterexplained.com)
267 points by sdeepak on Oct 29, 2018 | 57 comments



The article is quite right that multiplying by i gives a rotation. But it doesn't quite explain the reason for this: it's because that's the whole point of defining imaginary numbers in the first place!

Remember you start off wanting to find a solution for the equation:

    i^2 = -1
This is actually easier to think about if you multiply it by a general real number r:

    r i^2 = -r
In other words you want i such that if you multiply r by i twice, it's the same as multiplying r by -1 once.

This is tough if you try to solve it by analogy with real positive numbers. If you picture the real number line (all the possible r) and multiply it by a positive number, let's say 4, then the whole thing stretches out quite a bit. It's pretty obvious that the way to break this operation into two equal parts is to stretch it a bit less (in this case, by a factor of 2).

The analogy of a stretch for -1 is a reflection: Imagine the whole number line collapsing in towards zero and bouncing back out again. But if you stop this half way then everything has just settled on zero, and doing that twice is obviously not going to get to the whole reflection. No other intermediate point seems any good either. (These are all the multiplications by x where -1 < x < 1.)

The key idea of imaginary numbers is to consider multiplication by -1 to be a rotation by half a turn rather than a reflection. That is a lot easier to do half of! As soon as you have multiplication by -1 as a rotation by half a turn, it is obvious to identify i as rotation by a quarter turn.
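
A quick numerical illustration of that quarter-turn picture (a minimal Python sketch, standard library only):

  import cmath

  r = 3.0
  once = r * 1j        # one quarter turn: 3 -> 3i
  twice = once * 1j    # two quarter turns: 3i -> -3, i.e. r * (-1)
  print(once, twice)

  # The same half turn written as a rotation by pi:
  print(r * cmath.exp(1j * cmath.pi))   # approximately (-3+0j)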


I can't recommend this site enough for explaining imaginary numbers and rotation through animations: http://acko.net/blog/how-to-fold-a-julia-fractal/


> But if you stop this half way then everything has just settled on zero, and doing that twice is obviously not going to get to the whole reflection.

I just wanted to say, this is probably the best intuitive explanation I've ever heard, so thanks.

Complex numbers being associated with rotation -- well the math and geometry always worked out no problem, but it always felt sort of... random or arbitrary to me.

But framing it that there ultimately needs to be a total reflection in the real numbers when multiplying by -1, and that rotation is the simplest way to achieve a smooth path to that which preserves all the necessary... that just clicks.

So thanks again!


> The key idea of imaginary numbers is to consider multiplication by -1 to be a rotation by half a turn rather than a reflection. That is a lot easier to do half of! As soon as you have multiplication by -1 as a rotation by half a turn, it is obvious to identify i as rotation by a quarter turn.

It gets even more interesting when you add additional degrees of freedom so that such rotations can happen via more than one path. For example, quaternions add two extra degrees of freedom, and this lets you have an infinite number of square roots of -1. Any imaginary unit quaternion (i.e. ip+jq+kr where p^2+q^2+r^2=1) is a square root of -1.
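
To make that claim concrete, here is a small sketch (a hand-rolled Hamilton product, no quaternion library assumed) checking that a few random imaginary unit quaternions square to -1:

  import math
  import random

  def qmul(a, b):
      # Hamilton product of quaternions given as (w, x, y, z)
      w1, x1, y1, z1 = a
      w2, x2, y2, z2 = b
      return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
              w1*x2 + x1*w2 + y1*z2 - z1*y2,
              w1*y2 - x1*z2 + y1*w2 + z1*x2,
              w1*z2 + x1*y2 - y1*x2 + z1*w2)

  for _ in range(3):
      p, q, r = (random.gauss(0, 1) for _ in range(3))
      n = math.sqrt(p*p + q*q + r*r)
      u = (0.0, p/n, q/n, r/n)   # ip + jq + kr with p^2 + q^2 + r^2 = 1
      print(qmul(u, u))          # approximately (-1, 0, 0, 0) every time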


Even in the complex numbers, there is more than one path: i^2 and (-i)^2.

On a non-geometric note, i and -i are algebraically equivalent over the reals.


> it's because that's the whole point of defining imaginary numbers in the first place!

People used imaginary numbers for a long time before Cartesian coordinates even existed.


Indeed. There were 18th century mathematicians like Gauss who realized the importance of the geometric nature of complex numbers, but it didn't become central to the subject until the 19th century. The appearance of imaginary numbers as formal square roots of negative numbers goes back to the mid 16th century. As for Cartesian coordinates, i^2 = -1 has an intrinsic, coordinate-free interpretation in terms that would be instantly recognizable to the ancient Greeks, but it's certainly true that this way of thinking wasn't at the basis of the discovery and initial development of complex numbers, and thinking of geometric operators as generalized numbers would have seemed pretty alien for most of the 19th century as well.


> and thinking of geometric operators as generalized numbers would have seemed pretty alien for most of the 19th century as well.

Do you know of any resource treating that subject explicitly? I assume you mean the same kind of operator as in 'differential operator'—is that right? I can kinda see it maybe... but would definitely be interested in hearing the idea expanded on :)


By an operator I just mean a transformation. Certainly linear operators like differential operators qualify. The fact that these have an algebra in their own right goes back to work in the 19th century by Felix Klein and Sophus Lie on transformation groups and to later 20th century work on linear algebra and functional analysis. It's stuff pretty much everyone learns as an undergrad nowadays, but the fact that you can do algebra on operators almost without thinking is a relatively modern perspective.


I'll be honest, I've never understood how humanity didn't invent Cartesian coordinates until 1637, with all the other engineering we had.

Once we had linear equations, for example with the ancient Greeks, not one person ever thought to plot a line with it? Or to use it to calculate the necessary building materials for something like a pediment or cathedral?


I think one of the keys to understanding the history of math, and to a lesser but still significant extent the history of physics, is to remember what you believed as a child, and how you struggled with the concepts taught to you. And that's even with a math curriculum designed to lead you to modern math. (One can debate how effective it is at that, but that's a separate topic.) Those misconceptions we had as children are pretty fundamental to the human wetware. Even today, with centuries of refinement and educational advancement, really only a small fraction of people come away from school with the ability to think truly mathematically.

For instance, even "just" negative numbers is a fairly counterintuitive concept. However universal they may seem today, they actually didn't pop up in all that many cultures historically before the line of development that led to their modern form. And that story gets repeated over and over for all sorts of developments. It takes time for fields of study to process and abstract these things, because they weren't just handed it on a silver platter in school.

(To the extent that that doesn't seem to be the case today, I'd say that as we have become more and more mathematically sophisticated and the area of mathematical inquiry exponentially increases, the dominant factor 'holding back' math today is our inability to cover territory. Today nobody can completely cover a major discipline before the next generation is already coming in with fresh brains. That's a relatively recent development.)


> It takes time for fields of study to process and abstract these things, because they weren't just handed it on a silver platter in school.

A sense of the phrase "knowledge is power" aligns with this.

I read Alan Kay's "User Interface - A Personal View" [1] recently, wherein he discusses Seymour Papert's [2] ideas on learning, specifically the 3 stages of learning. I found Papert's conception (with only mild exaggeration) to be illuminating. For example, I now have an explanatory model as to why certain inventions that did not require the du jour technology of the industrial age were developed so late in the game.

[1]: http://www.vpri.org/pdf/hc_user_interface.pdf [2]: https://en.wikipedia.org/wiki/Seymour_Papert


Apollonius used what basically amounts to Descartes’s coordinate method in the 3rd century BC for studying conic sections, and European mathematicians of the 16th–17th centuries were all quite familiar with his work. But the formulation was somewhat cumbersome and context-specific. https://en.wikipedia.org/wiki/Apollonius_of_Perga#The_coordi...

The world also had a long cartographic tradition based on coordinates.

Moreover, Descartes’s book only used one coordinate axis at a time, only used positive numbers, and used it as a tool for setting up geometry problems to be solved algebraically, not as a general tool for what we now think of as graphing functions/equations. The way we think of the “Cartesian plane” is not the way Descartes thought about it.

The history of these conceptual developments is richer and more complicated than popularly imagined today.


Nicholas Oresme essentially thought of them in the 14th century. I'd be surprised if, as you suggest, there weren't a lot of sporadic particular uses of the idea. But Descartes's big accomplishment was to abstract the coordinates away from any specific problem. No matter how many people prior to his work used coordinate ideas in problem solutions, that's a major accomplishment.


Always replace "intuitive" with "familiar" and you get an insight into what the person is talking about. So the task is to explain Euler's equation in terms of what you're already familiar with. Hmmm.

[1] Theorem by Jef Raskin - https://www.asktog.com/papers/raskinintuit.html


TLDR; Author writes:

Argh, this attitude makes my blood boil! Formulas are not magical spells to be memorized: we must, must, must find an insight. Here's mine:

Euler's formula describes two equivalent ways to move in a circle.

Euler's identity is a massive elephant and there have been many ways to look at it from different angles. It wouldn't be fair to say that this single interpretation suffices for views from other angles.

Here are a couple of articles that go into more detail:

The remarkable Euler's Formula (3 part series): http://www.integralworld.net/collins30.html

An Appreciation of Euler's Formula: https://scholar.rose-hulman.edu/cgi/viewcontent.cgi?article=...


3blue1brown also has some nice visual explanations:

https://www.youtube.com/watch?v=F_0yfvm0UoU


He made a "sequel" video two years later that improves on the explanation.

https://www.youtube.com/watch?v=mvmuCPvRoWQ


I once had an interesting thought about the function e^x. I think this is a key idea in the theory of Lie groups.

If the x in e^x = (1+x/N)^N (for N large) is understood as some transformation, then e^x is essentially repeating an infinitesimal transformation lots of times. So it's like a for-loop where the body of the loop is some infinitesimal transformation.

I tried to define the integration operator in terms of e^x. The 1 + x/N needed to be one "infinitesimal" iteration of integration, that adds an extra infinitesimal rectangle to the area. But it didn't seem to work out.

Ultimately that helps to explain Euler's formula. For large N, 1+ix/N is an "infinitesimal" rotation by angle x/N. Repeating it N times produces a rotation of angle x. That's essentially what TFA says. And it's a special case of the Lie theoretic view of e^x.
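
A quick check of that limit picture (a Python sketch, standard library only): repeating the "infinitesimal rotation" 1 + ix/N N times converges to cos x + i sin x.

  import cmath

  x = cmath.pi                      # rotate by pi
  for N in (10, 1000, 100000):
      z = (1 + 1j * x / N) ** N     # N tiny rotations composed
      print(N, z)                   # approaches e^(i*pi) = -1 as N grows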


>> I tried to define the integration operator in terms of e^x. The 1 + x/N needed to be one "infinitesimal" iteration of integration, that adds an extra infinitesimal rectangle to the area. But it didn't seem to work out.

You're close! This can indeed be done properly and is then called the Euler-Maclaurin formula. For this, you define the "shift to the left by n operator" e^(nD) where D is the differentiation operator d/dx.

You then always take the current value of f(x), multiply it by the small shift n to get the first rectangle. Then you shift to the left by n, i.e. to e^(nD)*f(x) = f(x+n), multiply that by the small shift n to get the next rectangle etc.

The book "street-fighting mathematics" [1][pdf] has a very hands-on and playful derivation of this in chapter 6.3.

[1] https://mitpress.mit.edu/books/street-fighting-mathematics

[pdf]: https://www.dropbox.com/s/722rlvrwy9l9w73/7728.pdf?dl=1
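
As a small sanity check of the shift-operator idea (a sketch assuming sympy is available; this is not the full Euler-Maclaurin derivation): truncating the series e^(hD) = 1 + hD + (hD)^2/2! + ... and applying it to f at a point reproduces f shifted by h.

  import sympy as sp

  x, h = sp.symbols('x h')
  f = sp.sin(x)                    # any smooth test function

  # Truncated shift operator: e^(hD) f  ~  sum_{k<K} h^k f^(k)(x) / k!
  K = 12
  shifted = sum(h**k * sp.diff(f, x, k) / sp.factorial(k) for k in range(K))

  # At x = 1, h = 0.3 the truncated series matches sin(1.3) closely:
  print(sp.N(shifted.subs({x: 1, h: sp.Rational(3, 10)})), sp.N(sp.sin(sp.Rational(13, 10))))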


Amazing, thanks.


There is certainly a connection between Euler's formula and the matrix exponential, but I think you have confused some details about how e^x is defined. The connection is to consider C as a 2-dimensional real vector space with basis 1,i. Multiplication by i is a linear transformation of this vector space. In more detail:

The exponential of a matrix X is an infinite sum just like that of the normal exponential function except with operations being matrix multiplication, addition, and scalar multiplication (I is the identity matrix):

  e^X = I + X + X^2/2 + X^3/6 + ...
Now take X to be the matrix

  X  =  [0 -1; 1 0].
This is the matrix of the linear transformation corresponding to multiplication by i if you consider C as a real vector space with basis 1,i (thus x+iy is identified with the vector [x; y]).

Now you can compute that the matrix exponential

  e^(tX)
is the rotation matrix

  [cos(t) -sin(t); sin(t) cos(t)].
The connection is now this: we can describe multiplication of a complex number z = x+iy by e^(ti) equivalently as the vector resulting from the linear transformation

  [cos(t) -sin(t); sin(t) cos(t)]*[x;y] = [x*cos(t) - y* sin(t); x*sin(t) + y*cos(t)]
In particular, if you take z = 1 you recover Euler's formula.

To say briefly how this is a special case of the exponential map in Lie theory: the 1-d vector space spanned by X is the Lie algebra of the unit circle (which is a group) and the exponential map sends an element tX to e^(tX).
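
Here is the same computation done numerically (a sketch assuming numpy and scipy are available for the matrix exponential):

  import numpy as np
  from scipy.linalg import expm

  X = np.array([[0.0, -1.0],
                [1.0,  0.0]])      # matrix of "multiplication by i" in the basis 1, i

  t = 0.7
  R = expm(t * X)                  # matrix exponential e^(tX)
  expected = np.array([[np.cos(t), -np.sin(t)],
                       [np.sin(t),  np.cos(t)]])
  print(np.allclose(R, expected))  # True: e^(tX) is the rotation matrix

  # Acting on z = 1, i.e. the vector [1; 0], recovers Euler's formula e^(it) = cos t + i sin t:
  print(R @ np.array([1.0, 0.0]))  # [cos(0.7), sin(0.7)]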


OK, and where was I confused?

e^X could also be defined by \lim_{N \to \infty}(I + X/N)^N for X a linear map. For my intuition, I find that better than your definition.


The way I see it:

e^1 is (roughly) what you get if you repeatedly multiply 1 by (1 + 0.001), a thousand times over

e^i is also (roughly) what you get if you repeatedly multiply 1 by (1 + 0.001i), a thousand times over
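
Taking that literally (a minimal sketch, standard library only):

  import cmath
  import math

  z_real = 1.0
  z_imag = 1 + 0j
  for _ in range(1000):
      z_real *= 1 + 0.001
      z_imag *= 1 + 0.001j

  print(z_real, math.e)            # ~2.7169 vs 2.71828...
  print(z_imag, cmath.exp(1j))     # ~cos(1) + i*sin(1), up to a small error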


The proof of the formula is beautiful. It's common to define the complex exponential as the extension of the Taylor expansion of exp(x) to the complex plane. Thus,

  exp(iy) = 1 + iy + (iy)^2/2! + ...
Now just group the even-numbered terms together and the odd-numbered terms together; the powers of i collapse to ±1 in the even-numbered terms (and to ±i in the odd ones), and what you get is

  (the Taylor expansion of cos) + i(the Taylor expansion of sin)
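
You can watch the regrouping happen numerically (a quick sketch): the even-indexed terms of the series for exp(iy) sum to cos y, and the odd-indexed terms sum to i*sin y.

  import math

  y = 1.3
  even = sum((1j * y) ** k / math.factorial(k) for k in range(0, 30, 2))
  odd  = sum((1j * y) ** k / math.factorial(k) for k in range(1, 30, 2))

  print(even, math.cos(y))         # even terms -> cos(y), a real number
  print(odd, 1j * math.sin(y))     # odd terms  -> i*sin(y), purely imaginary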


Interesting. Looking at the Euler equation again as Argand plane rotation, would e^-ix be a form of clockwise rotation? Using the methodology of:

https://www.mathsisfun.com/algebra/eulers-formula.html

as a reference template, e^-ix would seem to involve:

(taylor cosine series) - i * (taylor sine series)

or e^-ix = cos x - i sin x (which at x = pi gives -1)

which suggests another twist to the familiar identity:

e^ix * e^-ix = (cos x + i sin x) * (cos x - i sin x) = (cos x)^2 + (sin x)^2 = 1 = e^0 (and at x = pi that's just (-1) * (-1) = 1)
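
Checking that with cmath (a quick sketch): e^(-ix) is the complex conjugate of e^(ix), i.e. the same rotation taken clockwise, and the two rotations cancel.

  import cmath
  import math

  x = 0.9
  cw  = cmath.exp(-1j * x)                    # clockwise rotation
  ccw = cmath.exp(1j * x)                     # counterclockwise rotation
  print(cw, math.cos(x) - 1j * math.sin(x))   # e^(-ix) = cos x - i sin x
  print(cw * ccw)                             # (1+0j): the rotations cancel, e^0 = 1
  print(cmath.exp(-1j * cmath.pi))            # ~ -1: the clockwise half turn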


Indeed! I've written it out here with MathJax for equations:

https://www.circuitlab.com/textbook/complex-numbers/

(Thanks to someone who found a small equation typo and emailed me!)


Euler's identity was my secret weapon in graduate-level EE classes. Out of laziness, I only ever memorized a couple of trig identities. (Trig functions are a huge part of EE.) Whenever I needed a trig identity on a test, I whipped out Euler's formula. From there, you have one step to definitions of sin and cos that you can manipulate any way you want.
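
For example, the angle-addition identity falls out of e^(i(a+b)) = e^(ia) * e^(ib); here is a sketch of that derivation, assuming sympy is available:

  import sympy as sp

  a, b = sp.symbols('a b', real=True)

  # Expand both sides of e^(i(a+b)) = e^(ia) * e^(ib) via Euler's formula.
  lhs = sp.expand_trig(sp.exp(sp.I * (a + b)).rewrite(sp.cos))
  rhs = sp.expand((sp.exp(sp.I * a) * sp.exp(sp.I * b)).rewrite(sp.cos))

  # Matching real and imaginary parts gives the angle-addition formulas.
  print(sp.simplify(sp.re(lhs) - sp.re(rhs)))   # 0  =>  cos(a+b) = cos a cos b - sin a sin b
  print(sp.simplify(sp.im(lhs) - sp.im(rhs)))   # 0  =>  sin(a+b) = sin a cos b + cos a sin b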


Wasn't some sort of engineering-grade complex analysis part of your EE (undergraduate?) curriculum? I am sort of surprised Euler's formula would be a "secret weapon" in EE classes.


Analog electronics and signal processing are fundamentally tied to complex numbers, but it's completely possible to do the math to pass exams etc. and become an EE without deeply understanding complex math. You just use the formulas and do arithmetic with them.

Most practically oriented engineering students bitch about the math heavy parts because they don't have any use for them. They are happy with just the formulas and arithmetic. They can calculate electronic circuits, do Laplace transforms and Fourier transforms mechanically. As long as they understand what goes in and what comes out, it works just fine.


> Wasn't some sort of engineering-grade complex analysis part of your EE (undergraduate?) curriculum?

You'd think so, but it wasn't. Closest we got was probably the linear systems course, but that wasn't super close. I did a little more math than usual in that I took real analysis as an elective, but I wasn't able to squeeze complex analysis in.


my favourite intuition about Euler, which is not really an explanation, somewhat tautological, and may or may not be wildly incorrect, but I like it nonetheless:

e^x is a function whose value is its rate of change (D e^x = e^x). Now imagine the unit circle: take a point a unit away from O, and set its "rate of change" perpendicular to that vector. You end up with Df(x) = i f(x), which really only works when f(x) = e^ix, supposing i means perpendicularity.
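
One way to see that in action (a rough sketch with a crude Euler step, standard library only): integrating f'(t) = i f(t) from f(0) = 1 keeps the point on (approximately) the unit circle, tracing out e^(it).

  import cmath

  z = 1 + 0j
  dt = 0.001
  for _ in range(int(cmath.pi / dt)):   # integrate up to t = pi
      z += 1j * z * dt                  # velocity i*f is perpendicular to the position f
  print(z, abs(z))                      # close to -1; the magnitude drifts only slightly above 1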


That is exactly what it is!

In the real plane you get exponential growth: the rate of change is equal to the current value.

In the Argand plane you get a curling action due to the quadrature effect of the imaginary unit. The rate of change at each point is the tangent, and the result is therefore a circle.

Lie infinitesimal displacements capture this nicely, and also render the generic case which is a similarity transformation, e.g. rotation through two half reflections, e^(-w/2) * x * e^(w/2) like those found in quaternions and Clifford algebras.
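
A concrete instance of that sandwich form (a sketch with a hand-rolled Hamilton product, no library assumed): rotating the vector (1, 0, 0) about the z-axis via e^(-w/2) * v * e^(w/2) with w = k*theta.

  import math

  def qmul(a, b):
      # Hamilton product of quaternions given as (w, x, y, z)
      w1, x1, y1, z1 = a
      w2, x2, y2, z2 = b
      return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
              w1*x2 + x1*w2 + y1*z2 - z1*y2,
              w1*y2 - x1*z2 + y1*w2 + z1*x2,
              w1*z2 + x1*y2 - y1*x2 + z1*w2)

  theta = math.pi / 3
  q_plus  = (math.cos(theta/2), 0.0, 0.0,  math.sin(theta/2))   # e^(k*theta/2)
  q_minus = (math.cos(theta/2), 0.0, 0.0, -math.sin(theta/2))   # e^(-k*theta/2)

  v = (0.0, 1.0, 0.0, 0.0)                 # the vector (1, 0, 0) as a pure quaternion
  print(qmul(qmul(q_minus, v), q_plus))    # (0, cos(theta), -sin(theta), 0): rotated about z (clockwise with this sign convention)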


Thank you for this! The wikipedia rabbit hole beckons :D


Nice. I made some similar explanations with GeoGebra drawings a while back. See these pages in French (just look at the pictures!):

https://fr.wikiversity.org/wiki/Calcul_avec_les_nombres_comp...

https://fr.wikiversity.org/wiki/Calcul_avec_les_nombres_comp...


To anyone who finds this sort of explanation interesting or helpful, I recommend you check out "A Most Elegant Equation" by David Stipp, who covers Euler's Formula from step 0 for those with zero formal math knowledge. I'm definitely in that camp of people, and I was able to get a lot out of it. It's actually the book that helped several mathematical concepts "click" for me. Plus David Stipp just writes very romantically about math, which I thought was warming, and gave a lot of life to a field that I know nothing about.


Thanks for your kind words about my book. I never quite knew when writing it whether it would find its way to the people I mainly wrote it for -- those interested in math who don't know a whole lot about it. They aren't thick on the ground. So it's a real, sort of rare blast for me to hear about it arriving where I'd hoped.


People who like this article might be interested in 'Visual Complex Analysis' by Tristan Needham. It's (IMHO) a rare collection of "concrete" analogies for complex analysis.


Yes, this page appears to be lifted directly from that book.


The way I thought about it is that the trigonometric, hyperbolic, and exponential functions are all nontrivial solutions of the differential equation y'''' = y. Sharing differential properties is a very powerful kinship, which is also why you can substitute any of them for any of the others: the solution space of a linear differential equation is a vector space, and these three families of functions are just changes of basis in this space.

Well, I don't know, this is what makes sense to me.
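
For what it's worth, sympy confirms the kinship (a sketch assuming sympy is available): the general solution of y'''' = y is spanned by exactly those families.

  import sympy as sp

  t = sp.symbols('t')
  y = sp.Function('y')
  print(sp.dsolve(sp.Eq(y(t).diff(t, 4), y(t)), y(t)))
  # y(t) = C1*exp(-t) + C2*exp(t) + C3*sin(t) + C4*cos(t)
  # (cosh and sinh are just another basis for the span of exp(t) and exp(-t))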


This is a very confusing way to explain a simple thing. In fact "Euler's formula simply shows how one can parametrize a helix using the exponential function", see

https://math.stackexchange.com/questions/3510/how-to-prove-e...

And that's it.


There are many answers in that thread; I think different things work for different people.

My favorite angle on this is the following graphic/animation, also present in the thread:

https://upload.wikimedia.org/wikipedia/commons/0/0e/ExpIPi.g...

This shows how (1+i * Pi/N)^k, k=1..N traces out a semi-circle for large values of N.

Geometrically, all it says is:

* Draw a right triangle ABC with AB=1, BC=Pi/N, and ABC the right angle

* Make a copy of ABC, call it A'B'C', and scale it so that A'B' (the long leg) = AC (the hypotenuse)

* Put A'B'C' over ABC so that A'B' and AC coincide

* Let ABC=A'B'C'

* Repeat the process N times

* Look where you end up when N is large enough

The answer is: when N is large, Pi/N is small, and the right triangle ABC is almost isosceles, AB ~= AC. So you end up with N slices of a pie that make up a fraction of a circle.

Which fraction? Well, the total arc length is N * (Pi/N) = Pi - so half a circle. So if A=(0,0) and B=(1,0), you end up at (-1,0).

Now (1+x/n)^n approaches e^x, so it makes sense to define e^(i * Pi) to be the same limit - which we found out to be -1 + i * 0.
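
You can trace those partial products yourself (a short sketch, standard library only): the points (1 + i*Pi/N)^k for k = 1..N hug the upper unit semicircle, and the last one lands next to -1.

  import cmath

  N = 10000
  step = 1 + 1j * cmath.pi / N
  z, points = 1 + 0j, []
  for _ in range(N):
      z *= step
      points.append(z)

  print(points[-1])                            # close to -1 + 0j
  print(max(abs(abs(p) - 1) for p in points))  # every point stays near the unit circle; the drift shrinks as N grows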


Yet another way to visually understand complex functions is to think of them as transformations from the 2D plane to the 2D plane. You draw a picture, a grid or some curves, in the complex plane, then run it through the complex function you are interested in and see what it looks like.

Here is w = e^z:

https://i.imgur.com/pAALOh2.png

Just look at the above picture and every detail until it starts to make sense.
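
If you want to produce a picture like that yourself, here is a sketch assuming numpy and matplotlib are available: push a grid of horizontal and vertical lines through w = e^z. Vertical lines become circles, horizontal lines become rays.

  import numpy as np
  import matplotlib.pyplot as plt

  t = np.linspace(-2, 2, 400)
  fig, (ax_z, ax_w) = plt.subplots(1, 2, figsize=(9, 4))

  for c in np.linspace(-2, 2, 9):
      vert, horiz = c + 1j * t, t + 1j * c          # lines Re(z) = c and Im(z) = c
      ax_z.plot(vert.real, vert.imag, 'b', horiz.real, horiz.imag, 'r', linewidth=0.7)
      wv, wh = np.exp(vert), np.exp(horiz)          # their images under w = e^z
      ax_w.plot(wv.real, wv.imag, 'b', wh.real, wh.imag, 'r', linewidth=0.7)

  ax_z.set_title('z plane')
  ax_w.set_title('w = e^z')
  plt.show()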


I love reading posts like these. I was talking to my friend about how doing math proofs during HS is pretty similar to a math "lab", even though it's tedious and seemingly useless to some. I wish we were taught higher-level proofs like this. Even though we couldn't appreciate it at the time, it would definitely have helped us in the future.


i^i is actually multi-valued - it depends on which branch of log(z) you choose. The rotation analogy is still correct.


And all of the values of i^i are real!


It looks like a circle only from one direction.

I imagine it as a spiral.

The imaginary number adds a 2nd dimension, and exponential growth adds a 3rd dimension.


When was this posted? I think I read exactly that explanation like 7-8 years ago and it really made it click for me. It was a wonderful insight!


It was posted a few times here before; the first time was in 2010. Today's posting is the first that gained traction and comments.

See https://hn.algolia.com/?query=Intuitive%20Understanding%20of...


Around 2009-10. I had a similar experience at that time.


(i^i)^i is pretty easy to do with Euler's formula: it's just i^(i*i)=i^(-1)=1/i=-i.


> it's just i^(i * i)=i^(-1)

That's not as obvious as the usage of the word "just" seems to imply. When we start with something like i^i, even before raising it to another power of i, we first need to understand i^i. The rules for exponentiation that hold for real numbers cannot be blindly applied here.

How exactly is i^i defined? What does raising a complex number to the power of another complex number even mean? We define it! We first define the following: For complex numbers w and z, w^z = e^(z log w).

Now we use this definition to see what i^i is. We get i^i = e^(i log i) = e^(i * (2i * pi * n + i * pi /2)) = e^(-2 * pi * n - pi/2) for n ∈ ℤ. Note that this is the result of log(i) being multivalued.

So far we have established an interesting result: i^i is always a real number regardless of which value of log(i) we choose. If we choose the principal value of log(i), i.e., log(i) = i * pi / 2, then i^i = e^(-pi / 2). But let us move on with the multivalued i^i.

We use the result of (i^i) and the definition of w^z to see what (i^i)^i is. We get (i^i)^i = e^(i log(i^i)) = e^(i(2i * pi * m - 2 * pi * n - pi/2)) = e^(-2 * pi * m - 2i * pi * n - i * pi/2) = e^(-2 * pi * m - i * pi/2) for m ∈ ℤ.

So we can see that (i^i)^i = -i holds good for a single value of m, i.e., m = 0.
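
The principal value is easy to check directly, and the other branches differ by factors of e^(-2*pi*n) (a quick sketch, standard library only):

  import cmath
  import math

  print(1j ** 1j, math.exp(-math.pi / 2))   # principal value ~0.2079, i.e. e^(-pi/2), a real number

  # The multivalued picture: i^i = exp(i * log i) with log i = i*(pi/2 + 2*pi*n)
  for n in (-1, 0, 1):
      log_i = 1j * (math.pi / 2 + 2 * math.pi * n)
      print(n, cmath.exp(1j * log_i))       # e^(-pi/2 - 2*pi*n): real for every branch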


I think you have to be careful here when using rules that hold for real numbers with complex numbers. In fact, Wolfram|Alpha says that -i is just one of multiple results: http://www.wolframalpha.com/input/?i=(i%5Ei)%5Ei


Indeed you can easily produce completely incorrect results if you aren't careful, like 1 = sqrt(-1)/sqrt(-1) = (-i)/(i) = -1.


I don't understand why this is so profound. I knew Euler's formula was a trig formula simply by looking at it. I don't know why there needs to be this complex explanation of where it came from when it's obvious that it comes from trig and the concept of a unit circle.

The problem is that reliance on intuition doesn't prove anything, and it's very deceptive. You end up having to store several cases of explanations instead of deriving one through a proof.

For example, one would intuitively think that dropping a heavy object and a light object would conclude with the heavy object hitting the ground first, which is not the case. This is just one example.

I'd much rather derive things the traditional way and not be duped by "intuitive" explanations, because one's intuition is often different from nature's.


Intuition is not an absolute thing. Intuition changes as we learn. Sometimes we learn things without building an accompanying intuition. This post is about building a sound intuition for something learned.


That's nice. I didn't say there was no value in intuition. I just said it was unreliable, and I demonstrated an example of why it's unreliable.

It's better to focus on solutions that don't rely on intuition, because intuition is based solely on experience and often can't be applied to everything. This is why there's a drive to find general solutions to things. Intuition can help you, but you should not base your understanding on it. Intuition can lead you astray easily, as in my example. People just assumed it was correct for many years until Galileo actually disproved it through experimentation.

In fact, I found this explanation way more complicated than just stating that you are performing trigonometry in two different domains. Your sine is scaled by an imaginary value and your cosine is scaled by a real value. The radius equates to an exponential. When you move it pi units you get sin = 0, cos = -1, and then algebra gives you Euler's identity.

You don't need to introduce complex intuition in order to get it. Intuition is often a crutch people use to convince themselves they understand something, and then to blame others when they don't get it, much like this post. If intuition were so important, why not submit it as a legitimate mathematical way of proving things?



