One can generalize this approach further: take any given loss function, not just a p-norm, and ask what statistic minimizing that loss yields.
The field that studies this is called property elicitation, from the idea that the loss function elicits a certain kind of prediction, be it the mean, mode, etc.
This field actually originated with human experts like weather forecasters - what is a loss function (or “scoring rule”) that incents them to predict accurately? Squared loss is the right choice if by “accurately” we mean that we want the mean.
One fact people find surprising is that squared loss is not the only one that elicits the mean. There is an entire family of losses that do (the Bregman divergences), including KL divergence.
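As a small numerical sketch of that last claim (the data and the grid of candidate predictions here are made up), both squared loss and the generalized KL divergence, one member of the Bregman family, come out minimized at the sample mean:

    import numpy as np

    # Two different losses that both "elicit" the mean: squared loss, and the
    # generalized KL divergence (the Bregman divergence generated by x*log(x)).
    x = np.array([1.0, 2.0, 4.0, 8.0])
    candidates = np.linspace(0.5, 10.0, 20001)

    sq_loss = [np.sum((x - c) ** 2) for c in candidates]
    kl_loss = [np.sum(x * np.log(x / c) - x + c) for c in candidates]

    print(candidates[np.argmin(sq_loss)])  # ~3.75
    print(candidates[np.argmin(kl_loss)])  # ~3.75 again
    print(x.mean())                        # 3.75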
This approach is also used to extend concepts like the median into higher dimensions and non-Euclidean spaces.
What is the "median" or "mean" of a set of points located on the surface of a sphere, or a torus, or some more complicated manifold (which may only be defined implicitly by a distance function)? If you try to start from the one-dimensional definition it's not really clear. You can define such concepts, however, by minimizing the corresponding norms. If you minimize the sum of L1 norms then you get something like a median. If you minimize the sum of L2 norms then you get something like a mean. Once you have something like a mean you can also define something like a variance.
Wikipedia has a short article about the concept [1]. It really deserves more treatment, as it's a central (no pun intended) idea in topological data analysis [2].
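As a toy illustration (my own sketch, with made-up angles), here is a Fréchet-style mean on the unit circle, found by brute-force grid search over the sum of squared geodesic distances; a real implementation would run a proper optimizer on the manifold:

    import numpy as np

    # Geodesic (arc-length) distance between angles on the unit circle.
    def geodesic(a, b):
        d = np.abs(a - b) % (2 * np.pi)
        return np.minimum(d, 2 * np.pi - d)

    # Points clustered around angle 0; the last one sits just "below" 0.
    angles = np.array([0.1, 0.3, 6.2])

    # "Mean" = the angle minimizing the sum of squared geodesic distances.
    grid = np.linspace(0.0, 2 * np.pi, 10000, endpoint=False)
    cost = [np.sum(geodesic(g, angles) ** 2) for g in grid]

    print(grid[np.argmin(cost)])  # ~0.11, inside the cluster (it wraps correctly)
    print(angles.mean())          # naive average of the angles is 2.2, far away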
Right! A couple more interesting (IMO) examples of this (a quick numerical check of the first one follows after the list):
1. The geometric mean is the minimizer of:
L(x) = Σ [ log(x_i) - log(x) ]²
2. The so-called Jaccard median is the minimizer of:
L(x) = Σ [ 1-J(x_i, x) ]
where J is the Jaccard overlap score (aka IoU = Intersection over Union). The second is interesting because the Jaccard score is often used to describe how close two segmentations are in image processing. The Jaccard median can be used to consolidate multiple segmentations of the same object in a way that is optimal under the Jaccard distance.
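Here's the promised check of the first example (the Jaccard one needs actual segmentation masks, so it's skipped); the minimizer of the summed squared log-differences agrees with the usual closed-form geometric mean:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Minimize L(s) = sum_i (log(x_i) - log(s))^2 over s > 0.
    x = np.array([1.0, 3.0, 9.0, 27.0])
    res = minimize_scalar(lambda s: np.sum((np.log(x) - np.log(s)) ** 2),
                          bounds=(1e-6, x.max()), method='bounded')

    print(res.x)                     # ~5.196
    print(np.exp(np.log(x).mean()))  # geometric mean (1*3*9*27)^(1/4), also ~5.196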
I've come back to this very page a number of times over the years and I've always been dissatisfied with how it glosses over why E_1 implies we should take the median, and E_2 implies we should take the mean.
Assume we have some value s_1, and split the data into X_lt = {x_1, x_2, ..., x_k} with every x_lt < s_1 and X_gt = {x_{k+1}, x_{k+2}, ..., x_n} with every x_gt > s_1 (ignoring the case where any element is equal to s_1, to make the reasoning easier). Moving s_1 to s_1 + epsilon increases E_1 by epsilon * |X_lt| and decreases it by epsilon * |X_gt|, so E_1 is minimized when the number of elements of X on either side of s_1 is equal (now that I've written this out I think that's what 'ogogmad is getting at).
I'm still trying to develop the intuition for why minimizing E_2 implies taking the mean.
I find turning the math into working code sometimes helps me. The for-loop in my code only approximates the mean to within ±0.5, because it relies on the definition for finding the mean given in OP's article rather than on your derivation of the arithmetic mean.
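Roughly, the loop does something like this (a simplified sketch with made-up data, not the exact code): scan candidate values of s in 0.5 steps and keep the one with the smallest E_2, which is why the answer is only accurate to the step size.

    import numpy as np

    x = np.array([1.0, 2.0, 4.0, 8.0, 9.0])

    # Brute-force the article's definition: the "mean" is the s minimizing
    # E_2(s) = sum_i (x_i - s)^2, searched in 0.5 increments.
    best_s, best_e2 = None, np.inf
    for s in np.arange(x.min(), x.max() + 0.5, 0.5):
        e2 = np.sum((x - s) ** 2)
        if e2 < best_e2:
            best_s, best_e2 = s, e2

    print(best_s)    # 5.0, the grid point nearest to ...
    print(x.mean())  # ... the exact mean, 4.8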
I don't understand what you're trying to get at here. Surely the shortest Euclidean distance between X and S is obtained by setting s=0, which has nothing to do with the mean.
In nabla9's example, S would be the "vector discrepancy". If you want to minimize the discrepancy of s and X (NB. s and X, not S and X, I think this was a mistake), you want this vector to be as small as possible. In other words, you want the magnitude of the S vector to be minimal. This magnitude is given by the norm of the vector, sqrt(sum((S_i)^2)), which will reach a minimum when sum((S_i)^2) reaches a minimum.
Here's a shot at explaining it in terms similar to your explanation for E_1.
-------------------
As above we'll have some value of s_2, and a set of points x_1 < x_2 < ... < x_k < s_2 < x_{k+1} < ... < x_n. We define E_2 = \sum (x_i - s_2)^2.
Suppose we move s_2 slightly to the right, obtaining s_2 + epsilon. The first k terms of the sum will increase, while the last (n-k) terms will decrease. Calculus tells us that each term will change by -2 (x_i-s_2) epsilon + O(epsilon^2). If epsilon is small (infinitesimal) we can ignore the epsilon^2 term. As expected this quantity is negative for points to the right of s_2 and positive for points to the left of s_2. We can add these up to find the total change to be
-2 × epsilon × \sum (x_i-s_2),
dropping the higher order term.
If s_2 is too far to the left, moving it to the right will decrease E_2 as the decreases from points x_{k+1}, ... x_n will be larger than the increases from points x_1, ..., x_k. Similarly if s_2 is too far to the right, moving it to the left will decrease E_2. So where's the point where we can't gain anything by moving left or right?
If we look at our expression above,
-2 × epsilon × \sum (x_i-s_2)
as a function of s_2, it's negative when s_2 is small and positive when s_2 is large. Negative means we can do better by increasing s_2; positive means we can do better by decreasing s_2. It's just a straight line, so if it's negative on one side and positive on the other side there has to be some point where it's exactly zero. At that point we can't improve E_2 at all by moving in either direction. To find that point we simply set
-2 × epsilon × \sum (x_i-s_2) = 0
and then some simple algebra gives us
s_2 = \sum(x_i)/n
-----------------
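If it helps, here's a finite-difference check of the argument on some made-up numbers: the actual change in E_2 when s_2 moves by a small epsilon matches -2 × epsilon × \sum (x_i - s_2), which is negative to the left of the mean, positive to the right, and essentially zero at the mean.

    import numpy as np

    x = np.array([1.0, 2.0, 6.0, 7.0])   # mean is 4
    E2 = lambda s: np.sum((x - s) ** 2)

    eps = 1e-4
    for s in [2.0, 4.0, 6.0]:
        actual = E2(s + eps) - E2(s)              # true change in E_2
        predicted = -2 * eps * np.sum(x - s)      # first-order prediction
        print(s, actual, predicted)
    # s=2: both negative (move right); s=6: both positive (move left);
    # s=4 (the mean): predicted change is 0 and the actual change is O(eps^2).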
There's also a purely algebraic way to see that E_2 is minimized at the mean, without requiring any calculus. Whether it's helpful for intuition or not I don't know.
Suppose we have some value s'_2 that we're interested in, and we think this might minimize E_2:
E'_2 = \sum(x_i - s'_2)^2
I'm going to prove to you that E'_2 is not smaller than \sum(x_i - m)^2, where m is the mean. We can write
E'_2 = \sum(x_i - m + m - s'_2)^2,
since adding and subtracting m in the middle doesn't change anything. Then split it up:
E'_2 = \sum(x_i - m)^2 + \sum(m - s'_2)^2 + 2(m - s'_2)\sum(x_i - m)
The first term is the one that I said E'_2 was not smaller than. The second term is the sum of a bunch of [identical] squares, so it is nonnegative. The third term turns out to be zero, because we can pull the (m - s'_2) factor out of the sum and then \sum(x_i - m) is zero. So we have
E'_2 = \sum(x_i - m)^2 + {nonnegative stuff},
which proves that s'_2 is not better than m. In fact the second term is strictly positive unless s'_2 = m, so m is the unique minimizer.
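The same decomposition is easy to check numerically on made-up data: for every s'_2, \sum(x_i - s'_2)^2 equals \sum(x_i - m)^2 + n(m - s'_2)^2, so the penalty for choosing anything other than m is exactly n(m - s'_2)^2.

    import numpy as np

    x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
    m = x.mean()

    for s in [0.0, 2.5, m, 10.0]:
        lhs = np.sum((x - s) ** 2)
        rhs = np.sum((x - m) ** 2) + len(x) * (m - s) ** 2
        print(s, lhs, rhs)   # lhs == rhs every time, and it is smallest at s == m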
Interestingly, since the L^1 discrepancy is convex, it's possible to analytically minimize it by using the subdifferential, and thereby prove that its minimum is attained at the median. The subdifferential is a set-valued generalisation of the derivative to all convex functions, including non-differentiable ones. See https://en.wikipedia.org/wiki/Subderivative and https://towardsdatascience.com/beyond-the-derivative-subderi...
Whenever the subdifferential of a convex function at a point includes 0, that point is a global minimum of the function.
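To sketch how that plays out for E_1 (my own back-of-the-envelope, not taken from those links): each term |x_i - s| has derivative +1 for s > x_i, -1 for s < x_i, and subdifferential [-1, +1] at s = x_i, so

    \partial E_1(s) = #{x_i < s} - #{x_i > s} + #{x_i = s} × [-1, +1]

Zero belongs to this set exactly when neither #{x_i < s} nor #{x_i > s} exceeds n/2, which is precisely the condition for s to be a median.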
L^1's derivative is a perfectly good function everywhere except at 0, where it's not defined (or continuous), but whatever... it's for the same reason that the median handles even-sized sets in a special way.
A derivative can also be obtained for the abs function interpreted as a distribution / generalized function. However, I don't think it is helpful for calculus here, because of the Dirac measure that pops up.
Thanks for sharing, this is a great way to extend and generalise OP's post. I don't think I would have been able to understand the Wikipedia argument without the more intuitive explanation in the original post.
As an aside, I wonder if math is sometimes difficult to understand/explain because the language is so constrained. Obviously, careful use of words is important for translating mathematical definitions verbatim, but I think a more relaxed approach to word translations of math equations would be beneficial to everyone below second year math/statistics at university. Even if they are left with a technically incomplete/incorrect definition or intuition for a given concept.
I like Mr. White's writing. I interacted with him a bit several years ago around Julia's dataframe packages. He is a brilliant guy.
I think this insight (measures of centrality), together with an article posted to HN 4-5 years back framing quantum theory as a separate branch of statistics (statistics under a different L^p norm), really helped tie together a lot of quantitative threads for me.
In particular, for the parametric estimation setting, it can be shown that the Bayes Estimator under L_0 loss corresponds to simply finding the posterior distribution of a parameter given data, and then finding the mode of this distribution. Similarly, for L_1 loss, all we need do is find the median of the posterior distribution. And under L_2 loss, it’s just the expectation of the posterior. CMU’s 705 course is a great intro to statistical decision theory and stats more broadly for anyone interested!
(Disclaimer: I am a CMU PhD student in the machine learning department, so I am somewhat biased toward thinking these notes are good, having taken the course myself.)
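As a toy illustration of those three facts (not from the 705 notes; the discretized posterior below is made up), the Bayes action under each loss can be found by brute force and compared with the posterior mean, median, and mode:

    import numpy as np

    # A made-up discretized posterior over a parameter theta in [0, 1]
    # (shaped like a Beta(3, 6) density).
    theta = np.linspace(0.0, 1.0, 2001)
    post = theta ** 2 * (1 - theta) ** 5
    post /= post.sum()

    # Bayes action = argmin over actions a of the posterior expected loss.
    l2 = theta[np.argmin([(post * (theta - a) ** 2).sum() for a in theta])]
    l1 = theta[np.argmin([(post * np.abs(theta - a)).sum() for a in theta])]

    print(l2, (post * theta).sum())                        # posterior mean
    print(l1, theta[np.searchsorted(post.cumsum(), 0.5)])  # posterior median
    print(theta[post.argmax()])                            # posterior mode (the L_0 limit)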
I like the elevator bank analogy. You are waiting for an elevator. There are four equidistant elevator doors, all arranged in one line. Elevator #3 is out of order. Where do you stand?
Turns out the answer depends on what you want to optimize - maximum walking distance, average walking distance, or average squared walking distance.
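With made-up units (doors at positions 1 through 4, one unit apart, door 3 broken), a quick brute-force sketch gives the three answers:

    import numpy as np

    doors = np.array([1.0, 2.0, 4.0])     # working doors only; #3 is out of order
    spots = np.linspace(0.0, 5.0, 5001)   # candidate places to stand

    max_walk = [np.max(np.abs(doors - s)) for s in spots]
    avg_walk = [np.mean(np.abs(doors - s)) for s in spots]
    sq_walk  = [np.mean((doors - s) ** 2) for s in spots]

    print(spots[np.argmin(max_walk)])  # 2.5   (midrange: minimize the worst case)
    print(spots[np.argmin(avg_walk)])  # 2.0   (median door: minimize the average walk)
    print(spots[np.argmin(sq_walk)])   # ~2.33 (mean position: minimize the squared walk)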
Part of the point is that any of those metrics is the correct one if it models the discrepancy you want.
You pick the one that treats discrepancy the way that's appropriate.
E.g., using the mode is like saying "if you're not the right answer, you're all equally bad".
Using the median is like saying "if you're not the right answer, you're as wrong as however far off you are".
Using the mean is like saying "if you're not the right answer, you're more and more wrong the further away you are, and the penalty itself grows faster and faster, so you're really incentivized to be closer".
Which of these is appropriate depends on the real thing you care about in whatever your numbers actually signify.
If ten boxes fall at random from an Amazon Prime truck and you estimate the mean value of the parcels per m^3, you can extrapolate the total value of the loot in the truck.