Introduction to Hilbert Space (2022) [pdf] (cphysics.org)
106 points by azeemba on Sept 15, 2023 | 25 comments



> The name quantum in quantum theory is related to the fact that in a separable Hilbert space, any set of mutually orthonormal vectors is countable.

Pretty sure the name quantum comes from the fact that some physical phenomena (e.g. absorption spectra) take on discrete allowed values. That in principle has nothing to do with separability (e.g. you can come up with non-separable spaces which have operators with discrete spectrum). In fact, presenting things this way is pretty confusing, since separable Hilbert spaces do support operators with continuous spectrum (which is not obvious!). As far as I know, separability is mostly technical, and often added to make life a bit simpler, since it's pretty hard to come up with useful non-separable Hilbert spaces.


A lot of non-quantum waves have discrete allowed values. EM cavities, guitar strings, etc. Quantum waves are described by a special wave equation, actually a complex diffusion equation (first order in time, second order spatially).
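For reference, the one-dimensional Schrödinger equation makes that structure explicit: one time derivative, two space derivatives, and a factor of i in front, which is what makes it a complex diffusion-type equation rather than an ordinary wave equation.

    i ħ ∂ψ/∂t = -(ħ²/2m) ∂²ψ/∂x² + V(x) ψ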


The author's website has a treasure trove of articles explaining different things in Physics: https://www.cphysics.org/


Nicely formatted PDFs there. Bookmarked :D


Agree, this is great! I wonder if there are equivalent sites for ML that present topics in a modular way?



Perhaps not exactly what you are looking for, but MLU-Explain is nice: https://mlu-explain.github.io/


Does https://mlbook.explained.ai/ help?

It's been on HN before and got very positive comments. From the author of ANTLR, no less.


Interesting fact about Hilbert spaces: the inner product of a Hilbert space induces a norm, and thus every Hilbert space is a Banach space. But what about the converse? Say we only have a normed vector space: can we decide whether there is an inner product that actually induces this norm? The answer is yes! Simply check whether the Parallelogram Law [1] holds.

[1] https://en.wikipedia.org/wiki/Parallelogram_law
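A minimal numerical sketch of that check (assuming NumPy; the vectors here are just random examples): the Euclidean 2-norm passes the parallelogram test, while the 1-norm fails it, matching the fact that only the former is induced by an inner product.

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(5), rng.standard_normal(5)

    def parallelogram_gap(norm):
        # ||x + y||^2 + ||x - y||^2 should equal 2||x||^2 + 2||y||^2
        return norm(x + y)**2 + norm(x - y)**2 - 2*norm(x)**2 - 2*norm(y)**2

    print(parallelogram_gap(lambda v: np.linalg.norm(v, 2)))  # ~0: the 2-norm comes from an inner product
    print(parallelogram_gap(lambda v: np.linalg.norm(v, 1)))  # generally nonzero: the 1-norm does not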


This makes me want to learn more about Hilbert space.


Yet another fact discovered by von Neumann.



Q: So what is the dimension of my Hilbert Space?

A: Just enough to describe all your independent physical states, which grows with the number of your particles.

Q: So if some interaction creates more particles I have to increase the dimension of my Hilbert Space?

A: Yes, I'm afraid so.

Q: But I am used to physics providing some constant background to goings-on here and now. It was absolute Newtonian space and time, then it was warped Einsteinian spacetime, now you say it depends on how many particles I create?

A: Yes.

Q: But that's insane?

A: Yes.

A: OK, I'll allow you to create or destroy as many particles as you like, especially if you have a really big Large Hardon Collider.

Q: Are you sure I can keep my Hilbert Space the same?

A: Yes, aha, I have thought of a special number that does not change when you add or subtract any finite number...

Q: There is no such number ... oh wait, you mean infinite dimensional?

A: Yes, infinite dimensional, and complex, of course.

Q: Of course.

Q: Does not sound in the least bit plausible. How do you add particles?

A: It's called the second quantization of Quantum Field Theory using Fock Spaces.

Q: Makes perfect Focking sense.


Hmm, he lost me on page nine where "complete" is explained.

> Complete means that every sequence of vectors |a1>, |a2>, ... satisfying lim ...

How are the elements of this sequence related? And why are we only interested in the elements where the index n/m goes to infinity? What does that even mean if the sequence is arbitrary?

That's probably why I also can't make sense of this:

> Loosely speaking, saying that a Hilbert space is complete means that it contains all of its limits.


This is standard mathematical analysis. An infinite sequence of elements may look like it converges to some target element, judging by the mutual distances converging to zero. Such a sequence is called a Cauchy sequence. When the target element actually exists, the sequence is also convergent. A space where every Cauchy sequence is convergent is called complete.

Example: if the space is all real numbers except 0, then any sequence accumulating around 0 (for howsoever small a distance, there are always infinitely many points of the sequence closer to 0 than that) is a Cauchy sequence, but it is not convergent (because 0 is not present). So that space is not complete (it has a hole).

If the space is all real numbers, then the same sequence is also convergent, and the space is complete (no holes).
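In Python terms, a toy sketch of that punctured-line example (the sequence 1/n is just an illustration of my own):

    # a_n = 1/n, a sequence living in the space "all reals except 0"
    a = [1 / n for n in range(1, 10001)]

    # past index N, all terms lie within 1/(N+1) of each other, so the sequence is Cauchy
    N = 1000
    print(max(a[N:]) - min(a[N:]))   # about 0.0009, and it shrinks as N grows

    # but the terms pile up around 0, which was removed from the space, so no limit exists there
    print(min(a))                    # 0.0001, approaching the hole at 0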


It helped me to look up an example of a space that is not "complete".

Turns out, the rational numbers are the classic example of a space that is not complete. A sequence of rational numbers can approximate pi, but pi itself doesn't exist in the space (since it's irrational). So a sequence of rationals that gets closer and closer to pi has a limit that is not in the space.


That's a great example. To make it concrete, you can take the sequence such that $a_n$ is the $n$-th partial sum of any of the series here that involve only rational numbers: https://en.wikipedia.org/wiki/List_of_formulae_involving_%CF... multiplied by an appropriate constant.
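For instance, a sketch using the Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... (one of the rational-term series of that sort, with 4 as the "appropriate constant"), done in exact rational arithmetic: every partial sum is a rational number, while the values creep toward the irrational pi.

    from fractions import Fraction

    # partial sums of the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    s = Fraction(0)
    for k in range(200):
        s += Fraction((-1) ** k, 2 * k + 1)

    print(4 * s)            # an exact (huge) fraction -- every partial sum is rational
    print(float(4 * s))     # about 3.1366, creeping toward pi, which is not rational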


Agreed that this is pretty terse. The sequences they're talking about are called Cauchy sequences [0]. A sequence a_i is Cauchy if for any epsilon > 0, there exists an N such that if m and n are both greater than N, then |a_m - a_n| < epsilon. A classic example: suppose your space is the set of rational numbers, and consider the sequence a_0 = 1, a_1 = 1.4, a_2 = 1.41, ..., a_n = sqrt(2) up to n digits after the decimal place. You can verify that this is a Cauchy sequence: points far enough along get arbitrarily close to each other. This means the rational numbers are incomplete, because this Cauchy sequence of rationals doesn't converge to a rational number. It's the real numbers that form a complete space.

Completeness is required for nice results like the spectral theorem for self-adjoint operators [1] to hold, which is pretty essential for Quantum Mechanics.

[0] https://en.wikipedia.org/wiki/Cauchy_sequence

[1] https://en.wikipedia.org/wiki/Spectral_theorem
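A short sketch of that truncated-decimal sequence (a toy of my own, using Python's decimal and fractions modules): each term is rational, far-out terms agree in many decimal places (so the Cauchy condition holds), and no term is ever actually sqrt(2).

    from decimal import Decimal, getcontext
    from fractions import Fraction

    getcontext().prec = 60
    root2 = Decimal(2).sqrt()

    # a_n = sqrt(2) truncated to n digits after the decimal point -- a rational number for every n
    def a(n):
        return Fraction(int(root2.scaleb(n)), 10 ** n)

    print(a(1), a(2), a(3))              # prints 7/5 141/100 707/500
    print(float(abs(a(30) - a(20))))     # < 1e-20: far-apart terms are already this close (Cauchy)
    print(any(a(n) ** 2 == 2 for n in range(1, 40)))   # False: no truncation ever squares to 2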


> How are the elements of this sequence related

They are not necessarily related.

> And why are we only interested in the elements where the index n/m goes to infinity?

If the sequence is finite, then we don’t really care to discuss the “limit” of the sequence.

> Loosely speaking, saying that a Hilbert space is complete means that it contains all of its limits.

For a set S to not contain all of its limits means you can have an infinite sequence of points (a_n), each in S, whose terms eventually get as close to each other as you'd like, and yet there is no point a in S so that the sequence is eventually as close to a as you'd like.

More formally, there does not exist a in S so that for any e > 0 we can pick an M so that m > M implies | a_m - a | < e.

You can see how “m > M” gives a formal meaning to “eventually,” and “for any e > 0 … | a_m - a | < e” gives a formal meaning to “as close to a as you’d like.”
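A tiny sketch of that condition in code, using the convergent example a_n = 1/n with limit a = 0 (a hypothetical example of my own, just to show the quantifiers in action):

    import math

    # a_n = 1/n converges to a = 0: for any e > 0, M = ceil(1/e) works,
    # because m > M implies |1/m - 0| = 1/m < 1/M <= e
    def pick_M(e):
        return math.ceil(1 / e)

    for e in (0.1, 0.01, 1e-6):
        M = pick_M(e)
        # 1/m is decreasing, so m = M + 1 is the worst case among all m > M
        assert abs(1 / (M + 1) - 0) < e
    print("condition verified for each e tried")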


Imagine a continuous space without holes in it (no holes at all, not even tiny ones).


Putting that into plain English, "If it looks like a sequence is converging, it really is converging to something."

What does it mean to say that the sequence looks like it is converging?

Naively, it means that any two elements far in the sequence are always very close together. We make that intuition more precise by turning it into a challenge-response, "You tell me how close you want the elements to be, I'll tell you how far out to pick your elements." And then we write that mathematically as

    ∀ ϵ > 0                           # You tell me how close
      ∃ N                             # I'll tell you how far
        such that if N < n, m         # so that any 2 elements
           then || a_n - a_m || < ϵ   # will always be that close
Others have given examples about things like the rational numbers. The canonical example of why it matters comes from Fourier series. Joseph Fourier discovered this one. Consider a bunch of functions f(x) over the interval from 0 to 2π. We can create a dot product with f·g equaling the integral from 0 to 2π of f(x)g(x). And the length of a function f is sqrt(f·f).

Joseph Fourier discovered that the following functions are orthonormal (each has length 1, their dot product with each other is 0).

    1/sqrt(2π),
    sin(x)/sqrt(π), sin(2x)/sqrt(π), sin(3x)/sqrt(π), ...
    cos(x)/sqrt(π), cos(2x)/sqrt(π), cos(3x)/sqrt(π), ...
(I hope I have the constants correct...)
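For what it's worth, a small numerical sketch (assuming NumPy; the grid size is an arbitrary choice) that spot-checks those constants: the Gram matrix of dot products should come out as the identity if the family really is orthonormal.

    import numpy as np

    # integrate on a uniform grid over [0, 2*pi); for periodic integrands this is very accurate
    x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    dx = x[1] - x[0]

    def dot(f, g):
        # f.g = integral from 0 to 2*pi of f(x) g(x) dx
        return np.sum(f(x) * g(x)) * dx

    basis = [lambda x: np.full_like(x, 1 / np.sqrt(2 * np.pi))]
    basis += [lambda x, n=n: np.sin(n * x) / np.sqrt(np.pi) for n in (1, 2, 3)]
    basis += [lambda x, n=n: np.cos(n * x) / np.sqrt(np.pi) for n in (1, 2, 3)]

    gram = np.array([[dot(f, g) for g in basis] for f in basis])
    print(np.round(gram, 10))   # numerically the 7x7 identity: each length 1, distinct pairs orthogonal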

Given any function, he could compute a series that we now call the Fourier series that looks like it added up to that function. But there were complications. It added up perfectly for smooth functions with the same value at the edges, like x(2π-x). But for things like square waves it added up everywhere except at a few points. This caused a crisis in mathematics because at the time square waves were not considered functions, and nobody had ever realized that adding up infinite series of smooth functions could do such weird things. Their idea of functions was essentially what we would call analytic functions today, basically things that look like power series. And Mr. Fourier had just shown that the analytic functions are not complete.

In addition to revealing problems in how we understood math, his technique was very, very useful. Because now you just had to figure out the physics of how, say, heat spreads out or strings vibrate just for those sin and cos terms, and then you could figure out heat and vibration for ANY function.

Resolving the math problems started many decades of research, the results of which included better definitions of the real numbers, the ϵ-δ definition of a limit (or ϵ-N for a series), new theories of integration, and the idea of Hilbert spaces.
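And a sketch of the square-wave behavior mentioned above (again assuming NumPy; the sample points are arbitrary choices of mine): the partial Fourier sums close in on the square wave at smooth points, but at the jump they settle on the midpoint of the discontinuity, no matter how many terms are used.

    import numpy as np

    def square(x):
        return np.sign(np.sin(x))                 # +1 on (0, pi), -1 on (pi, 2*pi)

    def partial_sum(x, terms):
        # Fourier series of this square wave: (4/pi) * sum over odd k of sin(k x)/k
        k = np.arange(1, 2 * terms, 2)            # 1, 3, 5, ...
        return (4 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

    pts = np.array([0.5, 1.0, 2.0, np.pi])        # three smooth points and the jump at pi
    for terms in (10, 100, 1000):
        print(terms, np.round(np.abs(partial_sum(pts, terms) - square(pts)), 4))
    # error -> 0 at the smooth points, but stays near 1 at x = pi:
    # there the series converges to 0, the midpoint of the jump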


Introduction to Hilbert Space (from a physics perspective)

I prefer Halmos' book.


Do you mean Introduction to Hilbert Space and the Theory of Spectral Multiplicity?

Here's a link: https://archive.org/details/introductiontohi0000halm


Professor Hilbert was unaware that other mathematicians had started calling infinite-dimensional inner-product spaces "Hilbert spaces". In fact, some of the foundational results were proven by von Neumann.

The story goes that von Neumann was once giving a lecture in Germany on such spaces with Hilbert in attendance. Hilbert supposedly raised his hand to ask, "Dr. von Neumann, I would very much like to know, what after all is a Hilbert space?" Hilbert was quite underwhelmed by the definition: "is that all?" he remarked.


I've made several runs now at trying to understand Hilbert space from a physics perspective, but I still do not have a good intuition for it. Something about it just breaks my brain.



